Most frequently asked Android core questions in 2025-26 interviews at Google, Flipkart, Swiggy & top startups.
The Activity lifecycle has 7 callbacks. Understanding what triggers each is critical for managing resources correctly.
```kotlin
// Activity Lifecycle flow:
// onCreate → onStart → onResume → [RUNNING]
// [RUNNING] → onPause → onStop → onDestroy

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Initialize UI, ViewModels, one-time setup
    }

    override fun onResume() {
        super.onResume()
        // Activity is visible & interactive — start sensors, animations
    }

    override fun onPause() {
        super.onPause()
        // Partially hidden — pause sensors, save quick state
    }

    override fun onStop() {
        super.onStop()
        // Fully hidden — release heavy resources
    }

    override fun onDestroy() {
        super.onDestroy()
        // Final cleanup
    }
}
```
- Home button: onPause → onStop (Activity stays in memory, NOT destroyed)
- Back button: onPause → onStop → onDestroy (Activity is destroyed)
- Screen rotation: onPause → onStop → onDestroy → onCreate → onStart → onResume
- Another Activity opens: current Activity gets onPause → onStop
- Use `onSaveInstanceState()` to save UI state before destruction
Interviewers often ask: "What's the difference between onPause and onStop?" — onPause = Activity partially visible, onStop = completely hidden. Get this distinction right.
Fragments have their own lifecycle, tightly coupled to the host Activity's. The modern guidance is to start UI observations in onViewCreated() using viewLifecycleOwner, never in onCreate().
```kotlin
class HomeFragment : Fragment(R.layout.fragment_home) {

    // viewLifecycleOwner vs this (fragment) — KEY difference
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)

        // ✅ Use viewLifecycleOwner for UI observations
        viewModel.data.observe(viewLifecycleOwner) { data ->
            updateUi(data)
        }

        // ❌ Never use 'this' — causes memory leaks!
        // viewModel.data.observe(this) { ... }
    }

    override fun onDestroyView() {
        super.onDestroyView()
        // View is destroyed but Fragment instance lives on
        // Clean up view binding here!
        _binding = null
    }
}
```
- Fragment has extra callbacks: `onCreateView`, `onViewCreated`, `onDestroyView`
- `viewLifecycleOwner` tracks the view's lifecycle, not the Fragment's — use it for UI observations
- A Fragment can survive its view being destroyed (back stack) — always null the binding in `onDestroyView`
- When the Activity stops → the Fragment also gets onStop; when the Activity is destroyed → the Fragment is destroyed too
The viewLifecycleOwner vs this distinction is one of the most common Android memory leak sources. Mentioning it proactively shows senior-level awareness.
Intents are messaging objects used to request an action. Explicit intents target a specific component; implicit intents declare an action and let the system find the right component.
```kotlin
// EXPLICIT INTENT — you know exactly which component to start
val explicitIntent = Intent(this, DetailActivity::class.java).apply {
    putExtra("userId", "123")
    putExtra("userName", "Rahul")
}
startActivity(explicitIntent)

// IMPLICIT INTENT — declare action, system decides handler
val shareIntent = Intent(Intent.ACTION_SEND).apply {
    type = "text/plain"
    putExtra(Intent.EXTRA_TEXT, "Check out Droidly!")
}
startActivity(Intent.createChooser(shareIntent, "Share via"))

// PENDING INTENT — for future execution (notifications, widgets)
val pendingIntent = PendingIntent.getActivity(
    context, 0, explicitIntent,
    PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
)

// INTENT FLAGS — control back stack behavior
Intent(this, HomeActivity::class.java).apply {
    flags = Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_CLEAR_TASK
}
```
- Explicit: startActivity, start Service — within your own app
- Implicit: share, dial, open URL — uses intent filters to match
- PendingIntent: wraps an Intent for future use — used in notifications, AlarmManager, widgets
- FLAG_IMMUTABLE: required from Android 12+ for PendingIntents — always add it
- Services must be started with explicit intents — implicit intents to services have been disallowed since Android 5.0
PendingIntent.FLAG_IMMUTABLE is mandatory from Android 12+. Forgetting it causes a crash. Mentioning this Android 12 change shows you stay up to date.
Background work has evolved significantly in Android. WorkManager is now the recommended solution for most background tasks.
```kotlin
// WorkManager — RECOMMENDED for guaranteed background work
class SyncWorker(ctx: Context, params: WorkerParameters) : CoroutineWorker(ctx, params) {
    override suspend fun doWork(): Result {
        return try {
            syncDataWithServer()
            Result.success()
        } catch (e: Exception) {
            Result.retry() // automatic retry with backoff
        }
    }
}

// Schedule with constraints
val constraints = Constraints.Builder()
    .setRequiredNetworkType(NetworkType.CONNECTED)
    .setRequiresBatteryNotLow(true)
    .build()

val workRequest = PeriodicWorkRequestBuilder<SyncWorker>(15, TimeUnit.MINUTES)
    .setConstraints(constraints)
    .build()

WorkManager.getInstance(context).enqueueUniquePeriodicWork(
    "sync", ExistingPeriodicWorkPolicy.KEEP, workRequest
)

// Foreground Service — for ongoing user-visible work
class MusicService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        startForeground(NOTIFICATION_ID, buildNotification())
        return START_STICKY
    }
}
```
- Service: runs on main thread — needs manual threading, use for ongoing tasks
- IntentService: deprecated in API 30 — use WorkManager or CoroutineWorker instead
- WorkManager: guaranteed execution, survives app kills and reboots, constraint-aware
- Foreground Service: for user-visible long-running work (music, navigation) — requires notification
- From Android 14: must declare foreground service type in manifest
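The Android 14 requirement above can be sketched in the manifest — assuming a media-playback service named `MusicService`, matching the earlier code:

```xml
<!-- AndroidManifest.xml sketch (service name and type are assumptions) -->
<!-- Android 14+ requires both the typed permission and a foregroundServiceType -->
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK" />

<service
    android:name=".MusicService"
    android:foregroundServiceType="mediaPlayback"
    android:exported="false" />
```

Starting a foreground service of a type you haven't declared throws at runtime on Android 14, so the manifest declaration has to match the `startForeground()` call.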
Always say WorkManager for background tasks. If asked about music playback or GPS tracking, say Foreground Service. IntentService being deprecated is a key 2024 point to mention.
Context is the gateway to Android system resources. Using the wrong Context type is a common cause of memory leaks.
```kotlin
// Activity Context — tied to Activity lifecycle
// ✅ Use for: UI operations, dialogs, layouts, startActivity
val dialog = AlertDialog.Builder(this) // 'this' = Activity context
startActivity(Intent(this, DetailActivity::class.java))

// Application Context — lives for the entire app lifetime
// ✅ Use for: singletons, databases, repos, long-lived objects
abstract class AppDatabase : RoomDatabase() {
    companion object {
        fun create(context: Context) = Room.databaseBuilder(
            context.applicationContext, // ✅ not context directly!
            AppDatabase::class.java, "app.db"
        ).build()
    }
}

// ❌ MEMORY LEAK — storing Activity context in a singleton
object BadSingleton {
    lateinit var context: Context // Never do this with Activity context!
}

// ✅ CORRECT — use applicationContext in singletons
object GoodSingleton {
    lateinit var appContext: Context
    fun init(context: Context) {
        appContext = context.applicationContext
    }
}
```
- Activity Context: use for UI — dialogs, layouts, themes, startActivity
- Application Context: use in singletons, repositories, Room, Retrofit, WorkManager
- Storing Activity context in a long-lived object = memory leak (Activity can't be GC'd)
- `applicationContext` has no access to UI theming — dialogs will crash if created with it
Rule of thumb: "If it outlives the screen, use applicationContext." This simple rule prevents most context-related memory leaks.
Android's permission model has evolved significantly. Runtime permissions (API 23+), one-time permissions, and partial access permissions are the current landscape.
```kotlin
// Request runtime permission — modern approach with ActivityResult API
val requestPermission = registerForActivityResult(
    ActivityResultContracts.RequestPermission()
) { isGranted ->
    if (isGranted) {
        accessCamera()
    } else {
        // Check if user said "Don't ask again"
        if (!shouldShowRequestPermissionRationale(Manifest.permission.CAMERA)) {
            showSettingsDialog() // Direct user to app settings
        } else {
            showRationale() // Explain why you need it
        }
    }
}

// Android 13+ — granular media permissions
// READ_EXTERNAL_STORAGE replaced by:
// READ_MEDIA_IMAGES, READ_MEDIA_VIDEO, READ_MEDIA_AUDIO

// Android 14 — Photo picker partial access
// READ_MEDIA_VISUAL_USER_SELECTED — user selects specific photos

// Android 13 — POST_NOTIFICATIONS permission required
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
    requestPermission.launch(Manifest.permission.POST_NOTIFICATIONS)
}
```
- Android 13: Granular media permissions — READ_MEDIA_IMAGES, READ_MEDIA_VIDEO, READ_MEDIA_AUDIO instead of READ_EXTERNAL_STORAGE
- Android 13: POST_NOTIFICATIONS is now a runtime permission — must request it
- Android 14: Photo Picker partial access — READ_MEDIA_VISUAL_USER_SELECTED
- Android 14: SCHEDULE_EXACT_ALARM is no longer granted by default to newly installed apps — users must grant it from system settings
- Never request permissions at app start — always request at the point of use
Mention the ActivityResultContracts API — it's the modern replacement for the deprecated onRequestPermissionsResult(). Also mention POST_NOTIFICATIONS for Android 13+ — it's a very recent change that catches many candidates off guard.
BroadcastReceiver listens for system-wide or app-specific events. How you register it determines its lifecycle and when it receives broadcasts.
```kotlin
// Dynamic registration — registered in code, tied to component lifecycle
class NetworkActivity : AppCompatActivity() {

    private val networkReceiver = object : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            val isConnected = intent.getBooleanExtra("isConnected", false)
            updateNetworkUI(isConnected)
        }
    }

    override fun onResume() {
        super.onResume()
        val filter = IntentFilter("com.app.NETWORK_CHANGED")
        registerReceiver(networkReceiver, filter, RECEIVER_NOT_EXPORTED) // Android 14+
    }

    override fun onPause() {
        super.onPause()
        unregisterReceiver(networkReceiver) // ALWAYS unregister!
    }
}

// Static registration — in AndroidManifest.xml
// <receiver android:name=".BootReceiver" android:exported="false">
//     <intent-filter>
//         <action android:name="android.intent.action.BOOT_COMPLETED"/>
//     </intent-filter>
// </receiver>
```
- Dynamic: register in code — only active while component is alive; must unregister
- Static (manifest): receives broadcasts even when app is not running — but limited by Android 8+ background restrictions
- Most implicit broadcasts blocked for static receivers since Android 8.0 — use dynamic registration
- Android 14: must pass RECEIVER_EXPORTED or RECEIVER_NOT_EXPORTED flag when registering
- BroadcastReceiver runs on the main thread — do no heavy work inside onReceive()
The Android 14 RECEIVER_NOT_EXPORTED flag requirement is very recent and catches most candidates. Mentioning it shows you're up to date with latest API changes.
Android can kill your app's process at any time when it's in the background. There are 3 layers of state survival, each with different scope and capacity.
```kotlin
// Layer 1: ViewModel — survives config changes ONLY
@HiltViewModel
class SearchViewModel @Inject constructor(
    private val savedState: SavedStateHandle // Layer 2
) : ViewModel() {

    // Layer 2: SavedStateHandle — survives process death
    // Small data only — backed by a Bundle
    val searchQuery = savedState.getStateFlow("query", "")

    fun updateQuery(q: String) {
        savedState["query"] = q // automatically persisted
    }
}

// Layer 3: onSaveInstanceState — last resort for Activity/Fragment UI state
override fun onSaveInstanceState(outState: Bundle) {
    super.onSaveInstanceState(outState)
    outState.putString("key", value) // small primitives only
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    val restored = savedInstanceState?.getString("key")
}
```
- ViewModel: survives rotation only — lost on process death; good for large data
- SavedStateHandle: survives process death — backed by Bundle; small primitives only (<50KB)
- onSaveInstanceState: Activity/Fragment-level, called before kill; emergency backup for tiny state
- Room/DataStore: for data that must always survive — disk persistence
- Test process death: Developer Options → "Don't keep activities", or `adb shell am kill <package>`
This is a very common senior-level question. Draw the 3-layer diagram mentally: ViewModel (config) → SavedStateHandle (process death) → Room (always). That structure alone will impress the interviewer.
Predictive Back (Android 13+) gives users a preview of where the back gesture will take them before they complete the gesture. It requires opting in and updating your back handling code.
```kotlin
// Step 1: Opt in via AndroidManifest.xml
// <application android:enableOnBackInvokedCallback="true">

// Step 2: Use OnBackPressedDispatcher — NOT deprecated onBackPressed()
class FormActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Modern back handling with lifecycle awareness
        onBackPressedDispatcher.addCallback(this) {
            if (hasUnsavedChanges) {
                showDiscardDialog()
            } else {
                isEnabled = false
                onBackPressedDispatcher.onBackPressed() // let system handle
            }
        }
    }
}

// In Compose — BackHandler composable
@Composable
fun FormScreen(hasUnsavedChanges: Boolean) {
    BackHandler(enabled = hasUnsavedChanges) {
        showDiscardDialog()
    }
}
```
- Predictive Back shows a live preview animation before the back gesture completes
- Must opt in via `android:enableOnBackInvokedCallback="true"` in the manifest
- `onBackPressed()` is deprecated — use `OnBackPressedDispatcher` or Compose `BackHandler`
- Android 15 enables predictive back animations by default for apps that have opted in
- Custom animations are possible with `OnBackAnimationCallback`
onBackPressed() being deprecated is a very recent change. Knowing OnBackPressedDispatcher and BackHandler in Compose as the replacements immediately signals you're coding to modern standards.
Android 12 introduced the official SplashScreen API, replacing the old SplashActivity/WindowBackground hack. On Android 12+ the system applies a splash screen to every app, so you should style it via this API — it shows instantly, even before your app code runs.
```kotlin
// Step 1: Add dependency
// implementation("androidx.core:core-splashscreen:1.0.1")

// Step 2: Define theme in res/values/themes.xml
// <style name="Theme.App.Starting" parent="Theme.SplashScreen">
//     <item name="windowSplashScreenBackground">@color/green</item>
//     <item name="windowSplashScreenAnimatedIcon">@drawable/ic_logo</item>
//     <item name="postSplashScreenTheme">@style/Theme.App</item>
// </style>

// Step 3: Set as activity theme in manifest
// android:theme="@style/Theme.App.Starting"

// Step 4: Install in Activity.onCreate() BEFORE setContentView
class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        val splashScreen = installSplashScreen() // MUST be before super.onCreate()
        super.onCreate(savedInstanceState)

        // Keep splash visible until data is ready
        splashScreen.setKeepOnScreenCondition { !viewModel.isDataReady.value }

        // Add a custom exit animation
        splashScreen.setOnExitAnimationListener { splashView ->
            splashView.iconView.animate()
                .scaleX(0f).scaleY(0f)
                .setDuration(300)
                .withEndAction { splashView.remove() }
                .start()
        }

        setContentView(R.layout.activity_main)
    }
}
```
- SplashScreen API shows before any app code runs — no more white flash on cold start
- `installSplashScreen()` must be called before `super.onCreate()`
- `setKeepOnScreenCondition` — keep the splash visible while loading initial data
- `setOnExitAnimationListener` — custom exit animations when the splash dismisses
- The old SplashActivity pattern adds an extra Activity hop — slower and deprecated
The SplashScreen API was introduced in Android 12 and is applied system-wide on Android 12+. Knowing that installSplashScreen() must come before super.onCreate() is the kind of specific detail that shows real hands-on experience.
Both are used to pass data between Android components, but they differ significantly in performance and implementation. In 2025, the recommended approach is to use the Kotlin Parcelize plugin.
```kotlin
// ❌ Serializable — uses Java reflection, slow, lots of temp objects
data class User(val id: String, val name: String) : Serializable

// ✅ Parcelable — fast, no reflection, Android-optimized
// Manual implementation (verbose)
class User(val id: String, val name: String) : Parcelable {
    constructor(parcel: Parcel) : this(parcel.readString()!!, parcel.readString()!!)
    override fun writeToParcel(parcel: Parcel, flags: Int) {
        parcel.writeString(id); parcel.writeString(name)
    }
    override fun describeContents() = 0
    companion object CREATOR : Parcelable.Creator<User> {
        override fun createFromParcel(p: Parcel) = User(p)
        override fun newArray(size: Int) = arrayOfNulls<User>(size)
    }
}

// ✅✅ @Parcelize — best of both worlds, zero boilerplate (RECOMMENDED)
@Parcelize
data class User(val id: String, val name: String) : Parcelable

// Pass via Intent
putExtra("user", user)                        // sending
getParcelableExtra("user", User::class.java)  // receiving (API 33+)
```
- Serializable: Java interface, uses reflection — ~10x slower than Parcelable
- Parcelable: Android-specific, no reflection, written to shared memory — very fast
- @Parcelize: Kotlin annotation that auto-generates Parcelable code — use this always
- Enable with the `id("kotlin-parcelize")` plugin in build.gradle
- From API 33: use `getParcelableExtra(key, Class)` — the untyped version is deprecated
Never say "I use Serializable because it's easier." Always say @Parcelize — same ease, 10x better performance. Also mention the API 33 typed getParcelableExtra to show you're current.
Launch modes control how Activities are instantiated and placed in the back stack. Getting this wrong leads to unexpected navigation behavior.
```kotlin
// Defined in AndroidManifest.xml
// <activity android:launchMode="singleTop" />

// STANDARD (default) — a new instance is always created
// Stack: A → B → B → B (pressing B 3 times = 3 instances)

// SINGLE TOP — reuses the top instance if already at top
// Stack: A → B (pressing B again calls onNewIntent(), no new instance)
// Stack: A → B → A → B (B not at top → new instance created)

// SINGLE TASK — one instance per task, clears above it
// Stack: A → B → C → (launch A) → A (B and C are destroyed!)
// Use for: MainActivity, deep link entry points

// SINGLE INSTANCE — own task, no other activities
// Isolated in its own back stack
// Use for: launcher activities, maps, camera

// Handle reuse in onNewIntent()
override fun onNewIntent(intent: Intent) {
    super.onNewIntent(intent)
    setIntent(intent) // update getIntent() to return the new one
    val data = intent.getStringExtra("key")
    handleNewData(data)
}
```
- standard: default — every launch creates a new instance
- singleTop: reuses if already on top of stack — great for notifications
- singleTask: one instance per task, brings to front and clears above it
- singleInstance: completely isolated in its own task stack
- Always override `onNewIntent()` when using singleTop or singleTask
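The stack diagrams above can be sanity-checked with a toy, framework-free simulation — plain Kotlin, not Android code; the activity names and the `launch` helper are invented for illustration:

```kotlin
// Toy back-stack simulation (NOT framework code) to illustrate launch modes.
enum class LaunchMode { STANDARD, SINGLE_TOP, SINGLE_TASK }

fun launch(stack: List<String>, activity: String, mode: LaunchMode): List<String> =
    when (mode) {
        // standard: always push a new instance
        LaunchMode.STANDARD -> stack + activity

        // singleTop: reuse only if the activity is already on top (onNewIntent fires)
        LaunchMode.SINGLE_TOP ->
            if (stack.lastOrNull() == activity) stack else stack + activity

        // singleTask: if an instance exists anywhere, destroy everything above it
        LaunchMode.SINGLE_TASK -> {
            val index = stack.indexOf(activity)
            if (index == -1) stack + activity
            else stack.subList(0, index + 1).toList()
        }
    }

// A → B, then launch B again:
println(launch(listOf("A", "B"), "B", LaunchMode.STANDARD))    // [A, B, B]
println(launch(listOf("A", "B"), "B", LaunchMode.SINGLE_TOP))  // [A, B]
// A → B → C, launch A as singleTask: B and C are destroyed
println(launch(listOf("A", "B", "C"), "A", LaunchMode.SINGLE_TASK)) // [A]
```

The singleTask branch is the one that surprises people: launching A does not push a fourth entry, it pops B and C.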
Draw the back stack on paper during the interview — it instantly makes your explanation clear. singleTask clearing the stack above it surprises most candidates.
Deep linking allows external sources to navigate directly into your app. Android 2025 best practice is App Links (HTTP/HTTPS) over custom schemes.
```kotlin
// 1. Custom URI scheme — unreliable, any app can intercept
// myapp://product/123  ← bad, can be hijacked

// 2. App Links (verified HTTPS) — RECOMMENDED
// https://myapp.com/product/123  ← verified, only your app opens it

// AndroidManifest.xml — declare the intent filter
// <intent-filter android:autoVerify="true">
//     <action android:name="android.intent.action.VIEW"/>
//     <category android:name="android.intent.category.DEFAULT"/>
//     <category android:name="android.intent.category.BROWSABLE"/>
//     <data android:scheme="https" android:host="myapp.com"/>
// </intent-filter>

// Host assetlinks.json at:
// https://myapp.com/.well-known/assetlinks.json

// Handle in Activity
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    val action = intent?.action
    val data: Uri? = intent?.data
    if (Intent.ACTION_VIEW == action && data != null) {
        val productId = data.lastPathSegment // "123" from /product/123
        navigateToProduct(productId)
    }
}

// Navigation Compose — deep link support built in
composable(
    route = "product/{id}",
    deepLinks = listOf(navDeepLink { uriPattern = "https://myapp.com/product/{id}" })
) { backStackEntry ->
    ProductScreen(id = backStackEntry.arguments?.getString("id"))
}
```
- Custom scheme (myapp://): no verification, any app can handle it — avoid
- App Links (https://): verified via assetlinks.json — only your app handles it
- android:autoVerify="true": triggers verification at install time
- Navigation Compose has built-in deep link support with `navDeepLink { }`
- Test with: `adb shell am start -W -a android.intent.action.VIEW -d "https://myapp.com/product/123"`
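For reference, a minimal assetlinks.json looks roughly like this — the package name and fingerprint below are placeholders; use your app's ID and your release signing key's SHA-256:

```json
[
  {
    "relation": ["delegate_permission/common.handle_all_urls"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.myapp",
      "sha256_cert_fingerprints": [
        "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99"
      ]
    }
  }
]
```

If the fingerprint doesn't match the key that actually signed the installed build (e.g. Play App Signing's key, not your upload key), verification silently fails and links open in the browser instead.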
assetlinks.json is the key to verified App Links. Hosting it at /.well-known/assetlinks.json with your app's SHA-256 fingerprint is what makes Android trust your deep links exclusively.
ANR (Application Not Responding) occurs when the main thread is blocked for too long. Android shows a dialog giving users the option to wait or force-close the app.
```kotlin
// ANR triggers:
// Input dispatch timeout → 5 seconds (key/touch events)
// BroadcastReceiver → 10 seconds (foreground), 60s (background)
// Service start/bind → 20 seconds

// ❌ Common ANR causes on the main thread
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    val data = readFile()               // disk IO on main thread
    val user = api.getUser().execute()  // network on main thread
    Thread.sleep(6000)                  // sleeping the main thread
    db.query("SELECT * FROM users")     // DB query on main thread
}

// ✅ Fix — move to coroutines
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    lifecycleScope.launch {
        val data = withContext(Dispatchers.IO) { readFile() }
        updateUi(data) // back on Main automatically
    }
}

// Enable StrictMode to catch violations in development
if (BuildConfig.DEBUG) {
    StrictMode.setThreadPolicy(
        StrictMode.ThreadPolicy.Builder()
            .detectDiskReads()
            .detectNetwork()
            .penaltyLog()
            .build()
    )
}
```
- Input timeout: 5s — most common; touch/key events blocked on main thread
- Use StrictMode in debug builds to catch accidental main-thread IO
- Diagnose via: Play Console → Android Vitals → ANRs, or pull traces.txt from device
- Use `adb shell dumpsys activity` to check what the main thread is doing
- Deadlocks between coroutines and the main thread are a sneaky ANR cause
Mention Android Vitals in Play Console — it shows real ANR rates from production users. Saying "I monitor ANR rate in Vitals and keep it below 0.47%" shows you think about production quality.
These three SDK settings control compatibility and which APIs you can use. Getting them wrong causes crashes, broken features, or Play Store rejections.
```kotlin
// build.gradle.kts
android {
    compileSdk = 35    // API level used to COMPILE your code
                       // Must be ≥ targetSdk; use the latest stable always

    defaultConfig {
        minSdk = 24    // Minimum Android version your app supports
                       // API 24 = Android 7.0 = ~97% of devices

        targetSdk = 35 // API level your app is TESTED against
                       // Android applies compatibility behaviors based on this
                       // Google Play requires ≥ 34 for new apps (2024)
                       // Google Play requires ≥ 35 for new apps (Aug 2025)
    }
}

// What targetSdk affects:
// targetSdk 31+ → PendingIntent.FLAG_IMMUTABLE required
// targetSdk 33+ → POST_NOTIFICATIONS runtime permission
// targetSdk 34+ → Foreground service type required in manifest
// targetSdk 35+ → Edge-to-edge enforced by default

// Runtime check for API level
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
    // API 33+ only code
}
```
- compileSdk: which APIs you can reference in code — use latest always
- minSdk: lowest Android version your app installs on — affects device reach
- targetSdk: what Android version you've tested against — enables new OS behaviors
- Google Play requires targetSdk ≥ 34 for existing apps and ≥ 35 for new apps from Aug 2025
- Raising targetSdk without testing can break your app with new OS behavior changes
targetSdk 35 enforcing edge-to-edge by default is the biggest breaking change in 2025. Apps that haven't handled window insets will have overlapping UI. Mentioning this is very impressive.
Edge-to-edge means your app draws behind the status bar and navigation bar. From Android 15 (targetSdk 35), this is enforced by default — apps must handle insets or UI will be overlapped.
```kotlin
// MainActivity — enable edge-to-edge
override fun onCreate(savedInstanceState: Bundle?) {
    enableEdgeToEdge() // androidx.activity 1.8.0+
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
}

// Handle insets in XML views
// ViewCompat.setOnApplyWindowInsetsListener(view) { v, insets ->
//     val bars = insets.getInsets(WindowInsetsCompat.Type.systemBars())
//     v.updatePadding(bottom = bars.bottom)
//     insets
// }

// In Jetpack Compose — use WindowInsets directly
@Composable
fun MainScreen() {
    Scaffold( // Scaffold handles insets automatically
        contentWindowInsets = WindowInsets.safeDrawing
    ) { paddingValues ->
        LazyColumn(
            contentPadding = paddingValues // avoids system bars
        ) { /* items */ }
    }
}

// For specific insets in Compose
val statusBarHeight = WindowInsets.statusBars.asPaddingValues().calculateTopPadding()
val navBarHeight = WindowInsets.navigationBars.asPaddingValues().calculateBottomPadding()
```
- From targetSdk 35: edge-to-edge is enforced by default — the opt-out attribute is temporary and already deprecated
- Use `enableEdgeToEdge()` from androidx.activity — replaces manual window flag setting
- In Compose: Scaffold handles most insets automatically if used correctly
- Watch out for: bottom sheets, FABs, and BottomNavigation overlapping the nav bar
- Use `WindowInsets.safeDrawing` for content that needs to avoid all system UI
This is the hottest Android topic of 2025. Many existing apps broke when targeting API 35 because of edge-to-edge enforcement. Showing you understand insets handling in both Views and Compose is impressive.
Push notifications in 2025 use FCM (Firebase Cloud Messaging). The flow covers token management, foreground/background handling, and Android 13+ permission requirements.
```kotlin
// Step 1: Request POST_NOTIFICATIONS permission (Android 13+)
val requestPermission = registerForActivityResult(
    ActivityResultContracts.RequestPermission()
) { granted ->
    if (!granted) showPermissionRationale()
}

if (Build.VERSION.SDK_INT >= 33) {
    requestPermission.launch(Manifest.permission.POST_NOTIFICATIONS)
}

// Step 2: FCM Service — handle messages
class MyFCMService : FirebaseMessagingService() {

    // Called when a new token is generated (first launch, token refresh)
    override fun onNewToken(token: String) {
        // Send to your backend immediately
        CoroutineScope(Dispatchers.IO).launch { api.updateFcmToken(token) }
    }

    // Data messages arrive here in foreground AND background;
    // notification messages arrive here only in the foreground
    override fun onMessageReceived(message: RemoteMessage) {
        val title = message.notification?.title ?: message.data["title"]
        val body = message.notification?.body ?: message.data["body"]
        showNotification(title, body, message.data)
    }
}

// Step 3: Build the notification with a channel
fun showNotification(title: String?, body: String?, data: Map<String, String>) {
    val channelId = "orders"

    // Create channel (required API 26+)
    val channel = NotificationChannel(
        channelId, "Order Updates", NotificationManager.IMPORTANCE_HIGH
    )
    getSystemService(NotificationManager::class.java).createNotificationChannel(channel)

    val notification = NotificationCompat.Builder(this, channelId)
        .setSmallIcon(R.drawable.ic_notification)
        .setContentTitle(title)
        .setContentText(body)
        .setAutoCancel(true)
        .setContentIntent(buildPendingIntent(data))
        .build()

    NotificationManagerCompat.from(this).notify(1, notification)
}
```
- Android 13+: POST_NOTIFICATIONS is a runtime permission — must request it
- FCM data messages: always delivered to onMessageReceived() — full control
- FCM notification messages: shown automatically by system when app is in background
- Use data messages for full control; notification messages for simplicity
- NotificationChannels are required from API 26 — without them, notifications are silently dropped
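To make the data-vs-notification distinction concrete, here is a sketch of an FCM HTTP v1 send payload (the field values are invented). With the `notification` block present, the system may display it automatically while the app is in the background; a payload containing only `data` always routes to onMessageReceived():

```json
{
  "message": {
    "token": "DEVICE_FCM_TOKEN",
    "notification": {
      "title": "Order shipped",
      "body": "Your order is arriving Friday"
    },
    "data": {
      "orderId": "123",
      "screen": "order_detail"
    }
  }
}
```

Deleting the `notification` block from this payload turns it into a pure data message, giving your service full control over whether and how anything is shown.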
Distinguish data messages vs notification messages — this trips up many candidates. Data messages always go to onMessageReceived(); notification messages only go there when the app is in foreground.
App Startup is a Jetpack library that provides a straightforward way to initialize components at app startup, replacing multiple ContentProviders with a single one and allowing parallel initialization.
```kotlin
// Problem: each library's ContentProvider adds ~2ms to startup
// Firebase, WorkManager, Timber each register ContentProviders
// 10 libraries = ~20ms extra cold start time

// Solution: App Startup merges all into ONE ContentProvider

// 1. Implement an Initializer for each library
class TimberInitializer : Initializer<Unit> {
    override fun create(context: Context) {
        if (BuildConfig.DEBUG) Timber.plant(Timber.DebugTree())
    }
    override fun dependencies(): List<Class<out Initializer<*>>> =
        emptyList() // no dependencies
}

class AnalyticsInitializer : Initializer<Unit> {
    override fun create(context: Context) {
        Analytics.init(context)
    }
    // Declare dependencies — Analytics needs Timber first
    override fun dependencies() = listOf(TimberInitializer::class.java)
}

// 2. Lazy initialization — don't init until needed
val heavyComponent by lazy {
    HeavyComponent() // only created on first access
}

// 3. Measure startup with Macrobenchmark
// adb shell am start -W -n com.app/.MainActivity
// Look for: TotalTime in the output
```
- Each ContentProvider adds ~2ms to app startup — App Startup consolidates them into one
- Dependency ordering: declare what each initializer depends on
- Lazy init: only initialize when first needed, not at app start
- Use `lazy { }` for ViewModels, repositories, and non-critical singletons
- Measure with the Macrobenchmark library or `adb shell am start -W`
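Each entry-point Initializer also has to be registered under App Startup's single provider in the manifest — a sketch reusing the hypothetical AnalyticsInitializer from the code above (the package name is an assumption):

```xml
<!-- AndroidManifest.xml — App Startup's single merged provider -->
<provider
    android:name="androidx.startup.InitializationProvider"
    android:authorities="${applicationId}.androidx-startup"
    android:exported="false"
    tools:node="merge">
    <!-- Only entry points need listing; their dependencies()
         are discovered and run first automatically -->
    <meta-data
        android:name="com.example.AnalyticsInitializer"
        android:value="androidx.startup" />
</provider>
```

Libraries that ship their own Initializer merge their `meta-data` entries into this same provider at build time, which is exactly how the "one ContentProvider instead of ten" consolidation happens.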
Mentioning that each ContentProvider costs ~2ms is a concrete, memorable detail. Saying "I reduced cold start by 300ms by consolidating 15 library ContentProviders using App Startup" is the kind of specific impact statement that lands in interviews.
These three visibility states affect both rendering and layout measurement differently — a common source of layout bugs when not understood correctly.
```kotlin
// VISIBLE — drawn and takes up space (default)
view.visibility = View.VISIBLE

// INVISIBLE — NOT drawn but STILL takes up space
// Other views position themselves as if it's there
view.visibility = View.INVISIBLE

// GONE — NOT drawn AND does NOT take up space
// Layout recalculates as if the view doesn't exist
view.visibility = View.GONE

// Compose equivalent
@Composable
fun MyComponent(isVisible: Boolean) {
    // VISIBLE / GONE equivalent
    if (isVisible) {
        Text("Hello") // not composed when false (like GONE)
    }

    // INVISIBLE equivalent in Compose
    Box(modifier = Modifier.alpha(if (isVisible) 1f else 0f)) {
        Text("Hello") // invisible but still takes space
    }

    // AnimatedVisibility — smooth transition
    AnimatedVisibility(visible = isVisible) {
        Text("Hello")
    }
}

// Performance tip: prefer GONE over INVISIBLE
// INVISIBLE views still go through measure/layout passes
// GONE skips them entirely → better performance
```
- VISIBLE: drawn + takes space — normal state
- INVISIBLE: not drawn but still occupies space — use for placeholder animations
- GONE: not drawn and no space — sibling views reposition — use for show/hide
- INVISIBLE still runs measure/layout — GONE is more performant in lists
- Compose equivalent: `if (condition)` = GONE behavior; `alpha(0f)` = INVISIBLE behavior
The INVISIBLE vs GONE performance difference in RecyclerView is a great detail to mention — using INVISIBLE in list items means measure/layout still runs for hidden views, causing unnecessary work.
Baseline Profiles allow you to pre-compile critical code paths ahead of time, dramatically reducing startup time and improving runtime performance. In 2025, Google strongly recommends them for all production apps.
```kotlin
// Baseline Profiles — pre-compile critical code paths
// Reduces: startup time by up to 40%, jank, and JIT compilation overhead

// 1. Add dependencies
// implementation("androidx.profileinstaller:profileinstaller:1.3.1")
// androidTestImplementation("androidx.benchmark:benchmark-macro-junit4:1.2.3")

// 2. Create a BaselineProfileGenerator test
@RunWith(AndroidJUnit4::class)
class BaselineProfileGenerator {
    @get:Rule
    val rule = BaselineProfileRule()

    @Test
    fun generate() = rule.collect(packageName = "com.myapp") {
        // Simulate the critical user journey
        pressHome()
        startActivityAndWait() // cold start
        device.waitForIdle()

        // Navigate through the main flows
        device.findObject(By.text("Browse")).click()
        device.waitForIdle()
    }
}

// 3. Generate the profile
// ./gradlew generateBaselineProfile
// Profile saved to: src/main/baseline-prof.txt

// 4. The profile is bundled with the AAB and installed by the Play Store
// ART pre-compiles critical methods before first launch

// Startup Profiles — even faster startup (subset of Baseline)
// Mark critical classes with StartupProfileRule
```
- ART normally JIT-compiles code at runtime — first launch is slow
- Baseline Profiles pre-compile critical paths — up to 40% faster startup
- Delivered via Play Store with your AAB — users get pre-compiled code from first install
- Generate by running real user journeys in a Macrobenchmark test
- Google Play also generates Cloud Profiles from real user data automatically
Baseline Profiles are one of the highest-impact optimizations for 2025. Very few developers have actually implemented them — saying "I've set up Baseline Profile generation in our CI pipeline" immediately sets you apart from other candidates.
ContentProvider is one of Android's four core components. It manages access to a structured set of data and allows sharing data between apps securely using a URI-based interface.
// ContentProvider — expose your data to other apps
class UserContentProvider : ContentProvider() {

    override fun onCreate(): Boolean {
        // Initialize database
        return true
    }

    override fun query(
        uri: Uri,
        projection: Array<String>?,
        selection: String?,
        selectionArgs: Array<String>?,
        sortOrder: String?
    ): Cursor? {
        return db.query("users", projection, selection, selectionArgs, null, null, sortOrder)
    }

    override fun getType(uri: Uri) = "vnd.android.cursor.dir/users"
    override fun insert(uri: Uri, values: ContentValues?) = null
    override fun delete(uri: Uri, s: String?, a: Array<String>?) = 0
    override fun update(uri: Uri, v: ContentValues?, s: String?, a: Array<String>?) = 0
}

// Access another app's ContentProvider via ContentResolver
val cursor = contentResolver.query(
    ContactsContract.Contacts.CONTENT_URI,
    null, null, null, null
)
cursor?.use {
    while (it.moveToNext()) {
        val name = it.getString(it.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME))
    }
}

// Declare in manifest
// <provider android:name=".UserContentProvider"
//     android:authorities="com.myapp.provider"
//     android:exported="false" />
- ContentProvider exposes data to other apps via a standardized URI interface
- System providers: Contacts, MediaStore, Calendar, SMS — all use ContentProvider
- For internal app data sharing — use Room directly, not ContentProvider
- android:exported="false" — keep private unless intentionally sharing with other apps
- FileProvider is a special ContentProvider for securely sharing files between apps (camera, downloads)
FileProvider is the most commonly used ContentProvider in modern apps — for sharing files with the camera or email. Mentioning it shows real-world usage beyond the textbook definition.
LiveData is lifecycle-aware but Android-platform dependent. StateFlow is a Kotlin-first, coroutine-based alternative that works everywhere and has better performance. In 2025, StateFlow is the recommended choice.
// LiveData — lifecycle-aware, Android-only
class UserViewModel : ViewModel() {
    private val _user = MutableLiveData<User>()
    val user: LiveData<User> = _user

    // ❌ LiveData issues:
    // - Android-only, not testable without Android framework
    // - No initial value required
    // - setValue() must be called on main thread
    // - No built-in operators (map, filter, combine)
}

// StateFlow — Kotlin-first, coroutine-native (RECOMMENDED 2025)
class UserViewModel : ViewModel() {
    private val _user = MutableStateFlow<User?>(null)
    val user: StateFlow<User?> = _user.asStateFlow()

    // ✅ StateFlow benefits:
    // - Works in pure Kotlin/KMM — no Android dependency
    // - Always has a current value (.value)
    // - Rich operators: map, filter, combine, flatMapLatest
    // - Thread-safe — can update from any thread
}

// Collect in Fragment (both are lifecycle-safe)

// LiveData
viewModel.user.observe(viewLifecycleOwner) { user -> updateUi(user) }

// StateFlow — use repeatOnLifecycle to avoid collecting in background
viewLifecycleOwner.lifecycleScope.launch {
    viewLifecycleOwner.repeatOnLifecycle(Lifecycle.State.STARTED) {
        viewModel.user.collect { user -> updateUi(user) }
    }
}

// Convert LiveData → Flow if migrating
val userFlow = viewModel.user.asFlow()
- LiveData: Android-only, simpler, good for quick prototypes — being phased out
- StateFlow: Kotlin-first, works in KMM, better operators, testable without Android — recommended
- Always use repeatOnLifecycle(STARTED) to stop collecting when the app is backgrounded
- Never use lifecycleScope.launch { flow.collect { } } without repeatOnLifecycle — it keeps collecting in the background
- Google's official guidance since 2022: prefer StateFlow over LiveData for new code
The repeatOnLifecycle mistake is very common — collecting without it means your UI still processes updates when the app is in the background. Mentioning this shows you understand lifecycle-safe collection.
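To make StateFlow's contract concrete, here is a minimal, dependency-free sketch of its two key behaviors: it always holds a current value that every new observer receives immediately, and it conflates repeated identical values. The class name SimpleStateHolder is illustrative only — in production you would use kotlinx.coroutines' MutableStateFlow.

```kotlin
// Minimal sketch of StateFlow's core contract (illustrative, not the real API)
class SimpleStateHolder<T>(initial: T) {
    private val observers = mutableListOf<(T) -> Unit>()

    var value: T = initial
        set(newValue) {
            // Conflate: skip notification when the value is unchanged,
            // mirroring StateFlow's distinct-until-changed behavior
            if (field == newValue) return
            field = newValue
            observers.forEach { it(newValue) }
        }

    fun observe(observer: (T) -> Unit) {
        observers += observer
        observer(value) // new observers get the current value right away
    }
}

fun main() {
    val holder = SimpleStateHolder(0)
    val seen = mutableListOf<Int>()
    holder.observe { seen += it } // immediately sees 0
    holder.value = 1
    holder.value = 1              // conflated — no duplicate emission
    holder.value = 2
    println(seen)                 // [0, 1, 2]
}
```

This "always has a value" property is exactly why a StateFlow-backed screen can render immediately on configuration change, while a LiveData without an initial value may briefly show nothing.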
RecyclerView is the View-system list component. LazyColumn is its Compose equivalent. They solve the same problem — efficiently rendering large lists — but with different APIs and paradigms.
// RecyclerView — View system (XML)
class UserAdapter : ListAdapter<User, UserAdapter.ViewHolder>(DiffCallback()) {

    class ViewHolder(view: View) : RecyclerView.ViewHolder(view) {
        fun bind(user: User) {
            itemView.findViewById<TextView>(R.id.name).text = user.name
        }
    }

    override fun onCreateViewHolder(parent: ViewGroup, type: Int) =
        ViewHolder(LayoutInflater.from(parent.context).inflate(R.layout.item_user, parent, false))

    override fun onBindViewHolder(holder: ViewHolder, position: Int) =
        holder.bind(getItem(position))

    class DiffCallback : DiffUtil.ItemCallback<User>() {
        override fun areItemsTheSame(old: User, new: User) = old.id == new.id
        override fun areContentsTheSame(old: User, new: User) = old == new
    }
}

// LazyColumn — Jetpack Compose equivalent
@Composable
fun UserList(users: List<User>) {
    LazyColumn(
        contentPadding = PaddingValues(16.dp),
        verticalArrangement = Arrangement.spacedBy(8.dp)
    ) {
        items(users, key = { it.id }) { user -> // key = stable identity
            UserCard(user)
        }
        item { LoadMoreButton() } // easily add headers/footers
    }
}

// LazyColumn with Paging 3
val users: LazyPagingItems<User> = viewModel.usersPaged.collectAsLazyPagingItems()
LazyColumn {
    items(users, key = { it.id }) { user -> UserCard(user) }
}
- RecyclerView: View system — needs Adapter, ViewHolder, DiffUtil — more boilerplate
- LazyColumn: Compose — declarative, much less code, key parameter handles diffing
- Both render only visible items — RecyclerView recycles views, LazyColumn composes lazily — efficient for large lists
- Always provide a key in LazyColumn items — prevents unnecessary recomposition and animation glitches
- Use ListAdapter (not the basic RecyclerView.Adapter) — built-in DiffUtil on a background thread
If asked to compare them in an interview, say: "LazyColumn is the modern answer. For View-based apps still using RecyclerView, always use ListAdapter with DiffUtil — never submitList without it."
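The two DiffUtil callbacks above can be demystified in plain Kotlin: areItemsTheSame compares identity (the id), areContentsTheSame compares contents (data-class equality). The sketch below applies that logic to classify changes between two lists; diffByIdentity and the Change hierarchy are illustrative helpers, not DiffUtil APIs.

```kotlin
// Dependency-free sketch of what DiffUtil's two callbacks decide,
// using the same User(id, name) shape as the adapter example
data class User(val id: String, val name: String)

sealed class Change {
    data class Inserted(val user: User) : Change()
    data class Removed(val user: User) : Change()
    data class Updated(val user: User) : Change()
}

fun diffByIdentity(old: List<User>, new: List<User>): List<Change> {
    val oldById = old.associateBy { it.id } // areItemsTheSame: compare ids
    val newById = new.associateBy { it.id }
    val changes = mutableListOf<Change>()
    for (user in new) {
        val previous = oldById[user.id]
        when {
            previous == null -> changes += Change.Inserted(user)
            previous != user -> changes += Change.Updated(user) // areContentsTheSame failed
        }
    }
    for (user in old) {
        if (user.id !in newById) changes += Change.Removed(user)
    }
    return changes
}

fun main() {
    val old = listOf(User("1", "Asha"), User("2", "Ravi"))
    val new = listOf(User("1", "Asha K"), User("3", "Meera"))
    println(diffByIdentity(old, new))
}
```

This is why stable ids matter: without them, every item looks "inserted + removed" and you lose move/change animations.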
Doze mode kicks in when the device is stationary, screen off, and unplugged — it restricts network, wakelocks, GPS, alarms, and syncs to periodic maintenance windows. App Standby buckets (Android 9+) restrict background work based on how recently the user interacted with your app. WorkManager is the correct API because it schedules around both restrictions transparently.
val work = OneTimeWorkRequestBuilder<SyncWorker>()
    .setConstraints(
        Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .setRequiresBatteryNotLow(true)
            .build()
    )
    .build()
WorkManager.getInstance(context).enqueue(work)

// AlarmManager.setExactAndAllowWhileIdle() — fires in Doze, use sparingly
alarmManager.setExactAndAllowWhileIdle(
    AlarmManager.RTC_WAKEUP, triggerMs, pendingIntent
)

// FCM high-priority messages bypass Doze and wake the device
// Foreground services are exempt from Doze network restrictions
- Doze blocks: network, wakelocks, alarms, JobScheduler — until the next maintenance window
- App Standby buckets: ACTIVE → WORKING → FREQUENT → RARE → RESTRICTED — more restrictions as usage drops
- WorkManager is Doze-aware — it reschedules work for the next maintenance window automatically
- FCM high-priority messages pierce Doze — the only reliable way to wake a Dozed device
- setExactAndAllowWhileIdle(): fires during Doze but Android limits call frequency — use for urgent alarms only
Test your background work with adb shell dumpsys deviceidle force-idle to simulate Doze. Saying you test this in development immediately signals engineering maturity.
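When deferred work fails under Doze, WorkManager retries with backoff. The sketch below shows exponential backoff arithmetic clamped to the bounds WorkRequest documents (10 s minimum, 5 h maximum); the exact doubling convention per attempt is an illustrative assumption, not WorkManager's internal formula.

```kotlin
// Sketch of exponential retry backoff, clamped to WorkRequest-style bounds.
// The doubling-per-attempt convention here is illustrative.
fun backoffDelayMs(initialDelayMs: Long, runAttempt: Int): Long {
    require(runAttempt >= 0)
    val raw = initialDelayMs * (1L shl runAttempt) // initial * 2^attempt
    return raw.coerceIn(10_000L, 18_000_000L)      // clamp to [10 s, 5 h]
}

fun main() {
    (0..5).forEach { attempt ->
        println("attempt $attempt → ${backoffDelayMs(30_000, attempt)} ms")
    }
}
```

The clamp matters in interviews: naive exponential backoff eventually schedules work days out, while WorkManager caps the delay so work still runs within hours.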
These three classes form Android's thread messaging system. While modern code uses coroutines, understanding Handler/Looper is still tested heavily in interviews because the Android framework itself uses them internally.
// Looper — message loop attached to a thread
// Main thread has a Looper by default
// Background threads need Looper.prepare() manually

// MessageQueue — FIFO queue of Messages/Runnables
// Each Looper has exactly one MessageQueue

// Handler — posts/processes messages to a Looper's queue
class MyWorkerThread : Thread() {
    lateinit var handler: Handler

    override fun run() {
        Looper.prepare() // create Looper for this thread
        handler = Handler(Looper.myLooper()!!)
        Looper.loop() // start processing messages
    }
}

// Post to main thread from background
val mainHandler = Handler(Looper.getMainLooper())
mainHandler.post {
    updateUi() // runs on main thread
}

// Post delayed (use sparingly — prefer coroutines + delay())
mainHandler.postDelayed({ hideToast() }, 2000)

// Modern equivalent with coroutines
lifecycleScope.launch {
    delay(2000)
    hideToast() // runs on Main dispatcher
}

// How Android uses Handler internally:
// ViewRootImpl posts draw calls via Handler
// Activity callbacks are delivered via Handler
// AsyncTask used Handler internally (now deprecated)
- Looper: runs an infinite message loop on a thread — main thread has one by default
- MessageQueue: the FIFO queue attached to each Looper
- Handler: posts Messages or Runnables to a Looper's queue for execution on that thread
- Use Handler for: posting to main thread, delayed execution, inter-thread communication
- Modern preference: coroutines + Dispatchers — cleaner and cancellable
Interviewers ask this to test fundamentals — even if you use coroutines daily, knowing that Looper drives the main thread's event loop shows you understand the platform at a deep level.
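The Looper/MessageQueue/Handler trio can be reproduced on the plain JVM in a few lines: one thread drains a FIFO queue of Runnables, and anything with a reference to the queue can post work onto it from any thread. The class and method names below are illustrative stand-ins, not the android.os APIs.

```kotlin
import java.util.concurrent.LinkedBlockingQueue

// Dependency-free sketch of the Looper/MessageQueue/Handler trio
class MiniLooper {
    private val queue = LinkedBlockingQueue<Runnable>() // the MessageQueue
    @Volatile private var running = true

    fun post(task: Runnable) { queue.put(task) }        // like Handler.post()

    fun loop() {                                        // like Looper.loop()
        while (running) queue.take().run()
    }

    fun quit() { post(Runnable { running = false }) }   // like Looper.quit()
}

fun main() {
    val looper = MiniLooper()
    val results = mutableListOf<String>()
    val thread = Thread { looper.loop() }               // prepare() + loop()
    thread.start()
    looper.post { results += "first" }                  // FIFO ordering guaranteed
    looper.post { results += "second" }
    looper.quit()
    thread.join()
    println(results)
}
```

This is exactly the main thread's shape: every touch event, lifecycle callback, and draw pass is a task drained from one queue — which is also why a single long-running task on that queue causes ANRs.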
ViewBinding is the modern, type-safe way to access views in XML layouts. It replaces both the error-prone findViewById and the heavier DataBinding for most use cases.
// Enable in build.gradle.kts
android {
    buildFeatures { viewBinding = true }
}

// ViewBinding — generates a binding class per layout file
class HomeFragment : Fragment(R.layout.fragment_home) {
    private var _binding: FragmentHomeBinding? = null
    private val binding get() = _binding!!

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        _binding = FragmentHomeBinding.bind(view)
        binding.tvTitle.text = "Hello" // type-safe, no cast needed
        binding.btnSubmit.setOnClickListener { submit() }
    }

    override fun onDestroyView() {
        super.onDestroyView()
        _binding = null // IMPORTANT: prevent memory leak
    }
}

// vs DataBinding — more powerful but heavier
// Supports: two-way binding, binding expressions in XML, ObservableFields
// <layout><data><variable name="vm" type="UserViewModel"/></data></layout>
// android:text="@{vm.userName}" — expression evaluated in XML

// vs findViewById — error prone
val title = findViewById<TextView>(R.id.tvTitle) // ClassCastException risk
// No null safety — NullPointerException if ID doesn't exist in layout
- findViewById: no type safety, no null safety, no performance benefit
- ViewBinding: type-safe, null-safe, fast (compile-time generation) — recommended
- DataBinding: superset of ViewBinding — adds XML expressions and two-way binding — use when needed
- ViewBinding compiles faster than DataBinding (no annotation processing)
- Always null the binding in onDestroyView() to avoid Fragment memory leaks
The _binding = null in onDestroyView is critical — forgetting it causes the Fragment to hold a reference to the view hierarchy even after the view is destroyed. Always mention this.
Foldable and large screen support became a major focus for Google in 2023-25. Apps must handle dynamic window size classes, multi-pane layouts, and configuration changes from folding/unfolding.
// WindowSizeClass — categorizes screen size
// implementation("androidx.window:window:1.2.0")

@Composable
fun AdaptiveLayout() {
    val windowSizeClass = calculateWindowSizeClass(LocalContext.current as Activity)
    when (windowSizeClass.widthSizeClass) {
        WindowWidthSizeClass.Compact -> SinglePaneLayout() // phone portrait
        WindowWidthSizeClass.Medium -> TabletLayout()      // foldable unfolded
        WindowWidthSizeClass.Expanded -> TwoPaneLayout()   // tablet/desktop
    }
}

// Jetpack Adaptive — list-detail layout
// implementation("androidx.compose.material3.adaptive:adaptive:1.0.0")
@Composable
fun EmailApp() {
    ListDetailPaneScaffold(
        directive = calculatePaneScaffoldDirective(currentWindowAdaptiveInfo()),
        value = rememberListDetailPaneScaffoldNavigator<Email>(),
        listPane = { EmailList() },
        detailPane = { EmailDetail() }
    )
}

// Handle fold state changes
val foldingFeature = WindowInfoTracker
    .getOrCreate(this)
    .windowLayoutInfo(this)
    .map { it.displayFeatures.filterIsInstance<FoldingFeature>().firstOrNull() }

// Manifest — support resizable activity
// android:resizeableActivity="true"
// android:configChanges="screenSize|smallestScreenSize|screenLayout|orientation"
- WindowSizeClass: COMPACT (phone), MEDIUM (foldable/small tablet), EXPANDED (large tablet)
- Use adaptive layouts that respond to size class changes, not fixed screen size checks
- Jetpack's ListDetailPaneScaffold: handles list-detail pattern automatically across sizes
- FoldingFeature: detect hinge position — use for "table-top" mode in camera apps
- android:resizeableActivity="true" required for multi-window and foldable support
Google Play now flags apps that don't handle large screens well. Mentioning WindowSizeClass and ListDetailPaneScaffold (Material 3 Adaptive) shows you're building for the modern device ecosystem.
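The width buckets behind WindowSizeClass are simple dp thresholds from the Material 3 guidance: compact below 600 dp, medium from 600 to 839 dp, expanded from 840 dp. The plain function below makes those cutoffs explicit; the enum is illustrative — in production use the androidx windowsizeclass artifact.

```kotlin
// Material 3 width buckets as a plain function (illustrative enum)
enum class WidthClass { COMPACT, MEDIUM, EXPANDED }

fun widthClassFor(widthDp: Int): WidthClass = when {
    widthDp < 600 -> WidthClass.COMPACT  // typical phone in portrait
    widthDp < 840 -> WidthClass.MEDIUM   // unfolded foldable / small tablet
    else -> WidthClass.EXPANDED          // large tablet / desktop window
}

fun main() {
    println(widthClassFor(411))  // COMPACT — Pixel-class phone
    println(widthClassFor(673))  // MEDIUM — typical unfolded foldable
    println(widthClassFor(1280)) // EXPANDED
}
```

Knowing the actual numbers (600/840) is a good interview detail — it explains why an unfolded foldable usually lands in MEDIUM, not EXPANDED.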
Jetpack Navigation Component provides a framework for navigating between screens, managing the back stack, and handling deep links consistently — in both Fragment-based and Compose apps.
// Navigation Compose (2025 recommendation)
@Composable
fun AppNavGraph() {
    val navController = rememberNavController()
    NavHost(navController, startDestination = "home") {
        composable("home") { HomeScreen(navController) }
        composable(
            route = "detail/{userId}",
            arguments = listOf(navArgument("userId") { type = NavType.StringType }),
            deepLinks = listOf(navDeepLink { uriPattern = "app://detail/{userId}" })
        ) { entry ->
            DetailScreen(userId = entry.arguments?.getString("userId"))
        }
        composable("settings") { SettingsScreen() }
    }
}

// Navigate with popUpTo to control back stack
navController.navigate("home") {
    popUpTo("login") { inclusive = true } // remove login from back stack
    launchSingleTop = true // avoid duplicate destinations
}

// Type-safe navigation (Navigation 2.8+ with Kotlin Serialization)
@Serializable object HomeRoute
@Serializable data class DetailRoute(val userId: String)

navController.navigate(DetailRoute(userId = "123")) // type-safe!
- Single Activity pattern — Navigation manages Fragment/Compose destination back stack
- popUpTo + inclusive: controls which destinations are removed from the back stack on navigate
- launchSingleTop: prevents duplicate destinations (like pressing the home tab twice)
- Navigation 2.8+ introduced type-safe routes using @Serializable — recommended for new code
- NavController automatically handles deep links declared in composable() destinations
Type-safe navigation with @Serializable (Navigation 2.8+) is the 2025 best practice. Mentioning it over string-based routes shows you're current with the latest Jetpack APIs.
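To see why type-safe routes matter, consider what string routes force you to do: interpolate arguments into a pattern yourself, where a typo only surfaces at runtime. buildRoute below is a hypothetical helper illustrating that fragility, not a Navigation API.

```kotlin
// Hypothetical helper showing the fragility of string-based routes
fun buildRoute(pattern: String, args: Map<String, String>): String {
    var route = pattern
    for ((name, value) in args) {
        val placeholder = "{$name}"
        require(placeholder in route) { "Unknown argument: $name" }
        route = route.replace(placeholder, value)
    }
    require("{" !in route) { "Unfilled arguments remain in: $route" }
    return route
}

fun main() {
    println(buildRoute("detail/{userId}", mapOf("userId" to "123"))) // detail/123
    // buildRoute("detail/{userId}", mapOf("userid" to "123")) throws at runtime —
    // with @Serializable routes the same typo is a compile-time error
}
```

That runtime-vs-compile-time distinction is the whole argument for @Serializable routes: the compiler checks the route class and its fields, so malformed navigation simply cannot build.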
Android security is multi-layered. A production app must protect data at rest, in transit, and at the code level. This is a common senior-level interview topic in 2025-26.
// 1. Network Security — Certificate Pinning
// res/xml/network_security_config.xml
// <network-security-config>
//   <domain-config>
//     <domain includeSubdomains="true">api.myapp.com</domain>
//     <pin-set>
//       <pin digest="SHA-256">base64EncodedPin</pin>
//     </pin-set>
//   </domain-config>
// </network-security-config>

// Or via OkHttp CertificatePinner
val pinner = CertificatePinner.Builder()
    .add("api.myapp.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build()
val client = OkHttpClient.Builder().certificatePinner(pinner).build()

// 2. Secure Storage — Android Keystore
val masterKey = MasterKey.Builder(context)
    .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
    .build()
val securePrefs = EncryptedSharedPreferences.create(
    context, "secure", masterKey,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)

// 3. Code obfuscation — R8/ProGuard
// build.gradle.kts
// isMinifyEnabled = true → shrinks + obfuscates

// 4. Root/Emulator detection (for banking apps)
// Use Play Integrity API (replaces SafetyNet from 2024)

// 5. Prevent screenshots on sensitive screens
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    window.addFlags(WindowManager.LayoutParams.FLAG_SECURE)
}
- SSL/Certificate Pinning: prevents MITM attacks — verify server cert against known pin
- Android Keystore: hardware-backed key storage — keys never leave the device
- R8/ProGuard: obfuscates class/method names — makes reverse engineering harder
- Play Integrity API: replaces SafetyNet — verifies app/device integrity (banking apps)
- FLAG_SECURE: prevents screenshots and screen recording on sensitive screens
Play Integrity API replacing SafetyNet in 2024 is a very recent change. Knowing this immediately shows you stay current. For fintech/banking interviews, security is often a dedicated interview round.
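A detail worth knowing about the "sha256/AAAA…=" pin format above: it is the Base64 encoding of the SHA-256 digest of the certificate's public key (SubjectPublicKeyInfo) bytes. The sketch below computes one with plain JVM crypto; the byte array is a placeholder, not a real key — in practice you extract the key bytes from the server certificate (e.g. via OkHttp's peer-certificate logging on a pin failure).

```kotlin
import java.security.MessageDigest
import java.util.Base64

// How a certificate pin string is derived (placeholder key bytes)
fun sha256Pin(subjectPublicKeyInfo: ByteArray): String {
    val digest = MessageDigest.getInstance("SHA-256").digest(subjectPublicKeyInfo)
    return "sha256/" + Base64.getEncoder().encodeToString(digest)
}

fun main() {
    val fakeKeyBytes = ByteArray(64) { it.toByte() } // placeholder, NOT a real key
    val pin = sha256Pin(fakeKeyBytes)
    println(pin)        // deterministic: same input → same pin
    println(pin.length) // "sha256/" + 44 Base64 chars = 51
}
```

Because the digest is over the public key rather than the whole certificate, pins survive certificate renewal as long as the key pair is reused — always pin a backup key too.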
ConstraintLayout allows building complex UIs with a flat view hierarchy — no nesting needed. This dramatically improves rendering performance compared to nested LinearLayouts or RelativeLayouts.
<!-- ConstraintLayout — flat hierarchy, no nesting needed -->
<androidx.constraintlayout.widget.ConstraintLayout>

    <!-- Chain — distribute views horizontally/vertically -->
    <TextView
        android:id="@+id/tvTitle"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

    <!-- Guideline — invisible positioning reference -->
    <androidx.constraintlayout.widget.Guideline
        android:orientation="vertical"
        app:layout_constraintGuide_percent="0.5" />

    <!-- Barrier — dynamic constraint based on multiple views -->
    <androidx.constraintlayout.widget.Barrier
        app:barrierDirection="end"
        app:constraint_referenced_ids="tvLabel1,tvLabel2" />

</androidx.constraintlayout.widget.ConstraintLayout>

// MotionLayout — subclass for animations
// Animate between two ConstraintSets
// Define in MotionScene XML file
motionLayout.transitionToEnd() // trigger animation

// In Compose: ConstraintLayout also available
// implementation("androidx.constraintlayout:constraintlayout-compose:1.0.1")
@Composable
fun ProfileCard() {
    ConstraintLayout {
        val (image, name, bio) = createRefs()
        Image(modifier = Modifier.constrainAs(image) { top.linkTo(parent.top) })
        Text(modifier = Modifier.constrainAs(name) { top.linkTo(image.bottom) })
    }
}
- Flat hierarchy: one ConstraintLayout replaces many nested LinearLayouts — fewer measure passes
- Guideline: invisible helper for percentage-based positioning
- Barrier: dynamic constraint that adjusts based on the largest of multiple views
- Chain: distribute multiple views evenly (spread, packed, weighted)
- MotionLayout: subclass for declarative animations between two constraint states
In Compose-based apps, ConstraintLayout is less needed because Compose's layout system is already flat. Mention that you use it primarily for complex View-based screens or MotionLayout animations.
AndroidManifest.xml is the app's blueprint — it tells the Android system everything about your app before any code runs. Every component, permission, and configuration must be declared here.
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <!-- Permissions -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.CAMERA" />

    <!-- Hardware features -->
    <uses-feature android:name="android.hardware.camera" android:required="false" />

    <!-- android:name → custom Application class -->
    <!-- enableOnBackInvokedCallback → Predictive Back -->
    <!-- avoid largeHeap unless necessary -->
    <application
        android:name=".MyApp"
        android:theme="@style/AppTheme"
        android:networkSecurityConfig="@xml/network_security_config"
        android:enableOnBackInvokedCallback="true"
        android:largeHeap="false">

        <!-- Entry point activity -->
        <activity
            android:name=".MainActivity"
            android:exported="true"
            android:launchMode="singleTop">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <!-- android:exported required for all components targeting API 31+ -->
        <service android:name=".MyFCMService" android:exported="false" />
        <receiver android:name=".BootReceiver" android:exported="false" />
        <provider
            android:name=".MyProvider"
            android:authorities="com.app.provider" />
    </application>
</manifest>
- Declares all 4 components: Activity, Service, BroadcastReceiver, ContentProvider
- android:exported is required for all components with intent filters when targeting API 31+ — the app fails to install if missing
- uses-feature android:required="false" — makes the feature optional (doesn't block installs)
- Manifest merging: libraries inject their own manifest entries — check the merged manifest in Android Studio
- tools:remove and tools:replace override library manifest entries
Manifest merging is a common source of bugs — library manifests can add permissions or components you don't want. Mention that you always check the merged manifest in Android Studio before release.
Paging 3 is the Jetpack library for loading large datasets in pages. It handles loading states, error handling, retry, and integrates with Room and Retrofit out of the box.
// 1. PagingSource — defines how to load data
class UserPagingSource(
    private val api: UserApi
) : PagingSource<Int, User>() {

    override suspend fun load(params: LoadParams<Int>): LoadResult<Int, User> {
        val page = params.key ?: 1
        return try {
            val response = api.getUsers(page = page, size = params.loadSize)
            LoadResult.Page(
                data = response.users,
                prevKey = if (page == 1) null else page - 1,
                nextKey = if (response.users.isEmpty()) null else page + 1
            )
        } catch (e: Exception) {
            LoadResult.Error(e)
        }
    }

    override fun getRefreshKey(state: PagingState<Int, User>) =
        state.anchorPosition?.let { state.closestPageToPosition(it)?.prevKey?.plus(1) }
}

// 2. ViewModel — create Pager
val users = Pager(PagingConfig(pageSize = 20)) { UserPagingSource(api) }
    .flow
    .cachedIn(viewModelScope)

// 3. Compose UI
val lazyPagingItems = viewModel.users.collectAsLazyPagingItems()
LazyColumn {
    items(lazyPagingItems, key = { it.id }) { user ->
        if (user != null) UserCard(user)
    }
    if (lazyPagingItems.loadState.append is LoadState.Loading) {
        item { CircularProgressIndicator() }
    }
}
- PagingSource: defines data loading logic — handles pages and keys
- Pager: creates a Flow of PagingData from PagingSource
- cachedIn(viewModelScope): caches pages so they survive recomposition/rotation
- RemoteMediator: combines network + Room for offline-first paging
- Built-in loading states: refresh, prepend, append — handle in UI easily
cachedIn(viewModelScope) is critical — without it, every recomposition re-fetches from page 1. Also mention RemoteMediator for the offline-first paging pattern — it's what separates senior candidates from juniors.
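The key arithmetic inside the load() implementation above is where most paging bugs live, so here it is isolated as pure functions: page 1 has no prevKey, and an empty response means the list is exhausted (nextKey = null stops further loads). The function names are illustrative extractions, not Paging APIs.

```kotlin
// The prevKey/nextKey logic from PagingSource.load(), as pure functions
fun prevKeyFor(page: Int): Int? = if (page == 1) null else page - 1

fun nextKeyFor(page: Int, loadedCount: Int): Int? =
    if (loadedCount == 0) null else page + 1

fun main() {
    println(prevKeyFor(1))     // null — first page, nothing before it
    println(prevKeyFor(5))     // 4
    println(nextKeyFor(5, 20)) // 6 — full page, keep paging
    println(nextKeyFor(6, 0))  // null — empty page, stop loading
}
```

Forgetting the null terminators is the classic bug: a non-null nextKey on an empty page makes Paging hammer your API in an infinite append loop.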
Accessibility ensures your app is usable by people with disabilities. Google Play now audits accessibility and it's asked in interviews at companies like Google and Flipkart.
// Views — content descriptions for screen readers
imageView.contentDescription = "Profile photo of Rahul"
iconButton.contentDescription = "Share this post"

// Decorative images — hide from accessibility
decorativeImage.importantForAccessibility = View.IMPORTANT_FOR_ACCESSIBILITY_NO

// Group related views for TalkBack
// android:focusable="true" on parent
// android:importantForAccessibility="yes"

// Compose — semantics
@Composable
fun LikeButton(isLiked: Boolean, onClick: () -> Unit) {
    IconButton(
        onClick = onClick,
        modifier = Modifier.semantics {
            contentDescription = if (isLiked) "Unlike post" else "Like post"
            role = Role.Button
            stateDescription = if (isLiked) "Liked" else "Not liked"
        }
    ) {
        Icon(imageVector = if (isLiked) Icons.Filled.Favorite else Icons.Outlined.FavoriteBorder)
    }
}

// Merge semantics — treat group as one accessible element
Row(modifier = Modifier.semantics(mergeDescendants = true) {}) {
    Image(/*...*/)
    Text("Rahul — Android Developer")
}

// Test accessibility
// Enable TalkBack in Accessibility Settings
// Use Accessibility Scanner app from Google
// Run: ./gradlew connectedAndroidTest with AccessibilityChecks
- Always add contentDescription to images, icons, and non-text interactive elements
- Minimum touch target size: 48x48dp — required by Material Design guidelines
- In Compose: use the semantics { } modifier to describe UI to accessibility services
- mergeDescendants = true: groups child semantics into one accessible unit
- Test with TalkBack enabled — navigate your app without looking at the screen
Google Play's policy now requires apps to meet basic accessibility standards. Mentioning that you run Accessibility Scanner and test with TalkBack before every release shows production-level quality mindset.
Android 15 and 16 continue tightening security and privacy and expand predictive back. The biggest developer-facing changes are edge-to-edge enforcement, Health Connect updates, and stricter broadcast receiver rules.
// Edge-to-edge enforcement (Android 15) — window insets required
ViewCompat.setOnApplyWindowInsetsListener(rootView) { v, insets ->
    val bars = insets.getInsets(WindowInsetsCompat.Type.systemBars())
    v.setPadding(bars.left, bars.top, bars.right, bars.bottom)
    insets
}

// Predictive Back — must opt-in via manifest
// android:enableOnBackInvokedCallback="true"
onBackPressedDispatcher.addCallback(this) {
    // custom back logic — runs with predictive back animation
}

// Foreground service types now mandatory for specific use cases
// android:foregroundServiceType="dataSync|mediaPlayback|location"
- Edge-to-edge is enforced in Android 15 -- apps that don't handle window insets will have UI clipped behind system bars
- Predictive back gesture: opt-in via android:enableOnBackInvokedCallback="true" in the manifest, then use OnBackPressedDispatcher
- Foreground service types: must declare the type in manifest and pass it to startForeground() -- missing type = crash on Android 14+
- Health Connect 2.0: new data types, background read permissions tightened
- Stricter broadcast receivers: dynamically registered receivers must explicitly declare RECEIVER_EXPORTED or RECEIVER_NOT_EXPORTED
Knowing that the Android 15 codename is "Vanilla Ice Cream" and Android 16 is "Baklava" (API 36) shows you track platform releases. Edge-to-edge enforcement is the biggest Android 15 breaking change — that's the key fact to highlight.
KMM (now called Kotlin Multiplatform) allows sharing business logic between Android, iOS, and other platforms while keeping native UI for each. It became stable in 2023 and is increasingly asked in 2025-26 interviews.
// KMP Project Structure
// ├── shared/
// │   ├── commonMain/  ← shared Kotlin code (business logic)
// │   ├── androidMain/ ← Android-specific implementations
// │   └── iosMain/     ← iOS-specific implementations
// ├── androidApp/      ← Android UI (Jetpack Compose)
// └── iosApp/          ← iOS UI (SwiftUI)

// shared/commonMain — pure Kotlin, no platform APIs
class UserRepository(
    private val api: UserApi,
    private val db: UserDatabase
) {
    suspend fun getUsers(): List<User> {
        return try {
            val users = api.fetchUsers()
            db.insertAll(users)
            users
        } catch (e: Exception) {
            db.getAllUsers()
        }
    }
}

// expect/actual — platform-specific implementations

// commonMain
expect fun getPlatformName(): String

// androidMain
actual fun getPlatformName() = "Android ${Build.VERSION.SDK_INT}"

// iosMain
actual fun getPlatformName() = UIDevice.currentDevice.systemName()

// KMP libraries (multiplatform-ready)
// Ktor → networking
// SQLDelight → database
// kotlinx.serialization → JSON
// kotlinx.coroutines → async
// Koin → dependency injection
- Share: business logic, data models, repositories, use cases, networking
- Keep native: UI (Compose for Android, SwiftUI for iOS)
- expect/actual: declare in common code, implement per platform
- KMP has been stable since Kotlin 1.9.20 (Nov 2023) — production-ready
- Companies using KMM: Netflix, Philips, VMware, McDonald's, Cash App
Since you already know KMM, this is your biggest competitive advantage. Frame it as: "I can deliver Android + iOS from a single shared codebase, reducing development time by up to 40% for business logic." That's a very compelling pitch for any company.
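The network-with-cache-fallback pattern in the shared UserRepository above is the part worth being able to test without Ktor or SQLDelight. Below it is reduced to plain, non-suspending Kotlin so the control flow is isolated; the UserApi/UserCache interfaces are illustrative stand-ins for the real dependencies.

```kotlin
// Shared-code fallback pattern from the KMP example, in plain Kotlin
data class User(val id: String, val name: String)

interface UserApi { fun fetchUsers(): List<User> }
interface UserCache {
    fun insertAll(users: List<User>)
    fun getAllUsers(): List<User>
}

class UserRepository(private val api: UserApi, private val cache: UserCache) {
    fun getUsers(): List<User> = try {
        api.fetchUsers().also { cache.insertAll(it) } // refresh cache on success
    } catch (e: Exception) {
        cache.getAllUsers()                           // offline fallback
    }
}

fun main() {
    val cached = mutableListOf(User("1", "Cached"))
    val cache = object : UserCache {
        override fun insertAll(users: List<User>) { cached.clear(); cached += users }
        override fun getAllUsers() = cached.toList()
    }
    val failingApi = object : UserApi {
        override fun fetchUsers(): List<User> = throw RuntimeException("offline")
    }
    println(UserRepository(failingApi, cache).getUsers()) // falls back to cache
}
```

Because the logic depends only on two interfaces, the exact same test runs on Android, iOS (via Kotlin/Native), and the JVM — which is the whole point of putting repositories in commonMain.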
IPC (Inter-Process Communication) allows different processes to communicate. Android provides several IPC mechanisms — AIDL is the most powerful but also the most complex.
// IPC mechanisms in Android (simplest → most complex):
// 1. Intent/Bundle → between components, same/different apps
// 2. ContentProvider → structured data sharing
// 3. Messenger → simple message passing between processes
// 4. AIDL → full bidirectional, multi-threaded IPC

// AIDL — Android Interface Definition Language

// Step 1: Define interface in .aidl file
// IUserService.aidl
// interface IUserService {
//     User getUser(String id);
//     List<User> getAllUsers();
// }

// Step 2: Implement in Service
class UserService : Service() {
    private val binder = object : IUserService.Stub() {
        override fun getUser(id: String): User {
            // runs on Binder thread pool — NOT main thread
            return db.findUser(id)
        }
        override fun getAllUsers(): List<User> = db.getAll()
    }

    override fun onBind(intent: Intent) = binder
}

// Step 3: Bind from client
private var userService: IUserService? = null

val connection = object : ServiceConnection {
    override fun onServiceConnected(name: ComponentName, binder: IBinder) {
        userService = IUserService.Stub.asInterface(binder)
    }
    override fun onServiceDisconnected(name: ComponentName) {
        userService = null
    }
}

bindService(Intent(this, UserService::class.java), connection, BIND_AUTO_CREATE)
- AIDL generates boilerplate for Binder IPC — methods run on Binder thread pool, not main thread
- Use AIDL when: multiple clients call service concurrently, complex data types needed
- Use Messenger when: simple sequential messages, no concurrent calls needed
- AIDL supported types: primitives, String, CharSequence, List, Map, Parcelable
- Always handle RemoteException when calling AIDL methods from client
AIDL runs on Binder thread pool — not main thread. This means AIDL service implementations must be thread-safe. Mentioning this shows deep understanding of Android's IPC architecture.
Scoped Storage (Android 10+) restricts apps to their own files and specific media collections. The old READ_EXTERNAL_STORAGE approach no longer works for most use cases in Android 13+.
// Android 13+ — granular media permissions
// READ_MEDIA_IMAGES → access photos
// READ_MEDIA_VIDEO → access videos
// READ_MEDIA_AUDIO → access audio
// READ_MEDIA_VISUAL_USER_SELECTED → partial access (Android 14+)

// Query images with MediaStore
suspend fun getAllImages(context: Context): List<Uri> = withContext(Dispatchers.IO) {
    val images = mutableListOf<Uri>()
    val collection = MediaStore.Images.Media.getContentUri(MediaStore.VOLUME_EXTERNAL)
    val projection = arrayOf(MediaStore.Images.Media._ID, MediaStore.Images.Media.DISPLAY_NAME)
    context.contentResolver.query(collection, projection, null, null, "date_added DESC")
        ?.use { cursor ->
            val idCol = cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID)
            while (cursor.moveToNext()) {
                val id = cursor.getLong(idCol)
                images.add(ContentUris.withAppendedId(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, id))
            }
        }
    images
}

// Write a file to app-specific storage (no permissions needed)
val file = File(context.getExternalFilesDir(Environment.DIRECTORY_PICTURES), "photo.jpg")

// Save to shared MediaStore (needs WRITE permission on API < 29)
val values = ContentValues().apply {
    put(MediaStore.Images.Media.DISPLAY_NAME, "photo.jpg")
    put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
    put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/MyApp")
}
val uri = context.contentResolver.insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values)
- Scoped Storage: apps can only access their own files + media via MediaStore
- App-specific dirs (getFilesDir(), getExternalFilesDir()): no permissions needed
- Android 13+: READ_EXTERNAL_STORAGE replaced by granular media permissions
- Photo Picker API: best UX — no permissions needed, user selects photos directly
- MANAGE_EXTERNAL_STORAGE: only for file managers — Google Play heavily restricts it
Recommend the Photo Picker API first — it requires zero permissions and gives users full control. Only use MediaStore directly when you need to query existing media programmatically.
BiometricPrompt is the modern, unified API for fingerprint, face, and iris authentication. It replaces the deprecated FingerprintManager and handles all biometric types automatically.
// Add dependency
// implementation("androidx.biometric:biometric:1.1.0")

// Step 1: Check biometric availability
val biometricManager = BiometricManager.from(context)
when (biometricManager.canAuthenticate(BiometricManager.Authenticators.BIOMETRIC_STRONG)) {
    BiometricManager.BIOMETRIC_SUCCESS -> showBiometricPrompt()
    BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -> showPasswordFallback()
    BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -> promptUserToEnroll()
}

// Step 2: Build and show prompt
fun showBiometricPrompt(activity: FragmentActivity) {
    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Unlock App")
        .setSubtitle("Use your fingerprint or face to authenticate")
        .setAllowedAuthenticators(
            BiometricManager.Authenticators.BIOMETRIC_STRONG or
                BiometricManager.Authenticators.DEVICE_CREDENTIAL // PIN fallback
        )
        .build()

    val biometricPrompt = BiometricPrompt(activity,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // result.cryptoObject — use for crypto operations if needed
                unlockApp()
            }

            override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                showError(errString.toString())
            }

            override fun onAuthenticationFailed() {
                // Called on failed attempt — don't show error yet (system handles retries)
            }
        }
    )
    biometricPrompt.authenticate(promptInfo)
}
- BiometricPrompt handles fingerprint, face, and iris — no need to differentiate
- BIOMETRIC_STRONG: hardware-backed biometrics (Class 3) — for banking apps
- DEVICE_CREDENTIAL: PIN/pattern/password fallback — always include this
- CryptoObject: combine with Keystore for cryptographically-bound authentication
- FingerprintManager is deprecated since API 28 — never use it in new code
For banking/fintech apps, combine BiometricPrompt with CryptoObject — the authentication is cryptographically bound to the Keystore key. Without CryptoObject, biometric auth can theoretically be bypassed via accessibility services.
Custom Views let you draw anything on the canvas. Understanding the three phases — measure, layout, draw — is essential for building reusable UI components and is a common senior interview question.
class CircularProgressView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
    defStyleAttr: Int = 0
) : View(context, attrs, defStyleAttr) {

    private val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        style = Paint.Style.STROKE
        strokeWidth = 12f
        color = 0xFF4CAF50.toInt()
    }
    private val oval = RectF()

    var progress: Float = 0f
        set(value) {
            field = value.coerceIn(0f, 100f)
            invalidate() // trigger redraw
        }

    // Phase 1: MEASURE — determine size
    override fun onMeasure(widthSpec: Int, heightSpec: Int) {
        val desiredSize = 200
        val width = MeasureSpec.getSize(widthSpec).coerceAtLeast(desiredSize)
        setMeasuredDimension(width, width) // must call this!
    }

    // Phase 2: LAYOUT — called after measure, gives final size
    override fun onSizeChanged(w: Int, h: Int, oldW: Int, oldH: Int) {
        val padding = paint.strokeWidth / 2
        oval.set(padding, padding, w - padding, h - padding)
    }

    // Phase 3: DRAW — paint on canvas
    override fun onDraw(canvas: Canvas) {
        // Draw background track
        paint.color = 0xFFE0E0E0.toInt()
        canvas.drawArc(oval, 0f, 360f, false, paint)
        // Draw progress arc
        paint.color = 0xFF4CAF50.toInt()
        canvas.drawArc(oval, -90f, progress * 3.6f, false, paint)
    }
}

// Never do heavy work in onDraw — called 60fps
// Never allocate objects in onDraw — causes GC pressure
// Use invalidate() to trigger redraw
// Use postInvalidateOnAnimation() for smooth animations
- onMeasure: determine view dimensions — must call setMeasuredDimension()
- onSizeChanged/onLayout: final size known — pre-calculate positions here
- onDraw: paint on canvas — called every frame, must be fast
- Never allocate objects in onDraw (no Paint(), RectF()) — declare as fields
- Use invalidate() to request a redraw; requestLayout() to re-measure
The biggest custom view mistake: allocating objects inside onDraw(). This triggers GC every frame causing jank. Always pre-allocate Paint, RectF, and Path objects as class fields.
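The sweep-angle arithmetic in onDraw above is worth spelling out: 100% of progress maps to a full 360° arc, so each percent is 3.6°. A minimal pure-Kotlin sketch of that mapping, with the same clamping as the view's setter (the function name is illustrative):

```kotlin
// Map a 0..100 progress value to a sweep angle for drawArc:
// 100% of progress is the full 360° circle, so each percent is 3.6°.
// Out-of-range input is clamped, mirroring the property setter in the view.
fun progressToSweepAngle(progress: Float): Float =
    progress.coerceIn(0f, 100f) * 3.6f
```

The same clamp-then-scale shape applies to any bounded custom-view property driven from user or network input.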
The JVM is the standard Java Virtual Machine that runs on desktops and servers — it executes Java bytecode. DVM (Dalvik) was Android's original runtime, optimised for constrained devices with a register-based architecture and DEX files. ART (Android Runtime) replaced Dalvik in Android 5.0; it compiles code ahead-of-time at install, dramatically improving performance. Today all Android devices use ART.
// Build pipeline: Kotlin source → bytecode (.class) → DEX (.dex) → APK/AAB
// kotlinc compiles .kt → .class, then D8/R8 converts .class → .dex

// ART AOT compilation at install time:
// dex2oat converts DEX → native machine code (.oat) for the device CPU
// Subsequent launches execute pre-compiled native code — no JIT warmup

// Profile-guided compilation (Android 7+):
// First run: interpreted + JIT profiling
// Background: dex2oat compiles hot methods from profile
// Later runs: hot paths execute as native code

// Baseline Profiles force AOT for your declared critical paths at install
- JVM: runs on servers/desktop, class files, large heap — not designed for mobile constraints
- DVM: register-based (faster than JVM's stack-based), DEX format (smaller than class files), one process per app
- ART: replaced DVM in Android 5.0 — AOT compiles DEX to native at install, faster startup and execution
- D8: the dex compiler — converts Java bytecode to DEX. R8 extends D8 with shrinking and obfuscation
- Profile-guided JIT (Android 7+): hot methods profiled at runtime then compiled to native in the background
Connect DVM→ART to Baseline Profiles: "ART's profile-guided AOT takes days to optimize. Baseline Profiles front-load this on install — that's why startup is 40% faster from day one."
AlarmManager triggers actions at specific times, even when your app isn't running. WorkManager handles deferrable background work. They serve different purposes and are often confused.
// AlarmManager — time-specific execution
// Use when: exact time matters, user-scheduled reminders, calendar events
val alarmManager = getSystemService(AlarmManager::class.java)
val intent = PendingIntent.getBroadcast(
    context, 0,
    Intent(context, ReminderReceiver::class.java),
    PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
)

// Exact alarm — requires SCHEDULE_EXACT_ALARM permission (API 31+)
if (alarmManager?.canScheduleExactAlarms() == true) {
    alarmManager.setExactAndAllowWhileIdle(AlarmManager.RTC_WAKEUP, triggerTimeMs, intent)
}

// WorkManager — constraint-based, guaranteed execution
// Use when: network sync, file upload, data backup
val workRequest = OneTimeWorkRequestBuilder<SyncWorker>()
    .setInitialDelay(15, TimeUnit.MINUTES) // NOT exact timing
    .setConstraints(
        Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    )
    .build()

// Decision table:
// Exact time reminder (alarm clock)  → AlarmManager.setExact()
// Periodic sync (15+ min intervals)  → WorkManager PeriodicWorkRequest
// Upload on WiFi when charging       → WorkManager with Constraints
// Timer/countdown in app             → Handler.postDelayed() or delay()
- AlarmManager: exact time — alarm clocks, calendar reminders, time-sensitive notifications
- WorkManager: constraint-based, deferrable — syncs, uploads, background tasks
- SCHEDULE_EXACT_ALARM: special app permission required from Android 12+ — granted via system settings rather than a runtime dialog, and denied by default for most new apps on Android 14
- AlarmManager resets on device reboot — must re-schedule in BOOT_COMPLETED receiver
- WorkManager survives reboots automatically — preferred for most background tasks
Key rule: "Does the user care about the exact time? → AlarmManager. Does the task just need to complete eventually? → WorkManager." AlarmManager needing re-scheduling after reboot is a classic gotcha question.
SparseArray is an Android-specific data structure that maps integer keys to Object values, avoiding the autoboxing overhead of HashMap<Int, Any> when keys are integers.
// HashMap<Int, User> — autoboxes int to Integer every access
// Creates Integer objects → GC pressure → avoid in performance-critical code
val hashMap = HashMap<Int, User>()
hashMap[1] = user // int 1 gets autoboxed to Integer(1)

// SparseArray — no autoboxing, uses binary search
// Efficient for small-to-medium maps (< ~1000 items)
val sparseArray = SparseArray<User>()
sparseArray.put(1, user)        // no boxing
val cached = sparseArray.get(1) // no unboxing
sparseArray.delete(1)

// Primitive-value variants (Android SDK classes)
val sparseIntArray = SparseIntArray()       // Int → Int
val sparseBoolArray = SparseBooleanArray()  // Int → Boolean
val sparseLongArray = SparseLongArray()     // Int → Long

// Use SparseArray when:
// ✅ Keys are integers (view IDs, item positions, user IDs)
// ✅ Small-to-medium data sets (<1000 items)
// ✅ Performance critical code (RecyclerView, custom views)

// Use HashMap when:
// ✅ Keys are non-integer types (String, Enum, complex objects)
// ✅ Large datasets (>1000 items) — HashMap O(1) vs SparseArray O(log n)
// ✅ Need map iteration with entrySet()
- SparseArray avoids Integer autoboxing — key performance benefit for integer keys
- Uses binary search (O(log n)) — HashMap is O(1) for large sets
- Android Lint warns when you use HashMap<Int, *> — suggests SparseArray instead
- For Map<Int, Int> use SparseIntArray; for Map<Int, Boolean> use SparseBooleanArray
- Not thread-safe — same as HashMap, needs external synchronization
Android Lint flags HashMap<Integer,*> and suggests SparseArray. Knowing this shows you pay attention to platform-specific optimizations rather than just writing Java-style Android code.
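SparseArray's design is easy to internalize from a sketch: a sorted primitive IntArray of keys plus a parallel value array, looked up by binary search. The following is a simplified pure-Kotlin illustration of that idea — not the real AOSP implementation (which adds a deferred-delete "gc" optimization), but it shows where the O(log n) lookup and the boxing-free keys come from:

```kotlin
// Simplified sketch of SparseArray's core idea: sorted primitive key array +
// parallel value array, looked up by binary search (no Integer boxing).
class SimpleSparseArray<V> {
    private var keys = IntArray(8)
    private var values = arrayOfNulls<Any>(8)
    private var size = 0

    fun put(key: Int, value: V) {
        val i = keys.binarySearchBounded(key, size)
        if (i >= 0) { values[i] = value; return }   // key exists → overwrite
        val insertAt = -(i + 1)                      // encoded insertion point
        if (size == keys.size) {                     // grow both arrays together
            keys = keys.copyOf(size * 2)
            values = values.copyOf(size * 2)
        }
        for (j in size downTo insertAt + 1) {        // shift tail right, keep keys sorted
            keys[j] = keys[j - 1]; values[j] = values[j - 1]
        }
        keys[insertAt] = key
        values[insertAt] = value
        size++
    }

    @Suppress("UNCHECKED_CAST")
    fun get(key: Int): V? {
        val i = keys.binarySearchBounded(key, size)
        return if (i >= 0) values[i] as V else null
    }
}

// Binary search over the first `count` entries of a sorted IntArray;
// returns the index if found, else -(insertionPoint + 1), like java.util.Arrays
private fun IntArray.binarySearchBounded(key: Int, count: Int): Int {
    var lo = 0; var hi = count - 1
    while (lo <= hi) {
        val mid = (lo + hi) ushr 1
        when {
            this[mid] < key -> lo = mid + 1
            this[mid] > key -> hi = mid - 1
            else -> return mid
        }
    }
    return -(lo + 1)
}
```

The shifting in put is also why SparseArray favors small-to-medium maps: inserts are O(n) in the worst case, which is fine for hundreds of entries but not thousands.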
Bitmaps are the largest source of OOM (OutOfMemory) errors in Android. Understanding bitmap memory allocation and reuse is critical for building smooth image-heavy apps.
// Bitmap memory size = width × height × bytes per pixel
// ARGB_8888 (default): 4 bytes/pixel
// RGB_565: 2 bytes/pixel (no alpha, half memory)
// 1080×1920 ARGB_8888 = ~8MB per image!

// ❌ Never load full bitmap — sample it down first
fun decodeSampledBitmap(path: String, reqWidth: Int, reqHeight: Int): Bitmap {
    return BitmapFactory.Options().run {
        inJustDecodeBounds = true // decode size only
        BitmapFactory.decodeFile(path, this)
        inSampleSize = calculateInSampleSize(this, reqWidth, reqHeight)
        inJustDecodeBounds = false // now decode actual pixels
        BitmapFactory.decodeFile(path, this)
    }
}

// ✅ BitmapPool — reuse existing bitmap memory
// Instead of allocating new bitmap: reuse one of same dimensions
// Coil/Glide maintain a LruCache of unused bitmaps
val options = BitmapFactory.Options().apply {
    inBitmap = pooledBitmap // reuse existing memory!
    inMutable = true
}

// ✅ Use Coil (recommended 2025) for auto bitmap management
@Composable
fun UserAvatar(url: String) {
    AsyncImage(
        model = ImageRequest.Builder(LocalContext.current)
            .data(url)
            .crossfade(true)
            .size(200, 200) // downsample to display size
            .build(),
        contentDescription = null
    )
}

// Bitmap configs
// ARGB_8888 → full quality (default)
// RGB_565   → no alpha, 2x memory saving (icons, backgrounds)
// HARDWARE  → stored in GPU memory, fastest rendering (API 26+)
- Always decode with inSampleSize — never load full-res into memory unnecessarily
- BitmapPool: reuse bitmap memory allocations — prevents GC thrashing in lists
- Coil/Glide handle pooling, sampling, caching automatically — prefer them over manual loading
- HARDWARE bitmaps (API 26+): stored in GPU memory — fastest rendering but immutable
- Use Bitmap.recycle() only if you manually manage bitmaps — not needed with Coil/Glide
The inSampleSize trick is a classic Android question. Bonus: mention that Coil is Kotlin-first and Coroutine-native — the recommended library for 2025, preferred over Glide for new Compose projects.
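The decodeSampledBitmap snippet above calls calculateInSampleSize without showing it. Below is the standard power-of-two halving loop from the Android documentation, adapted to take raw dimensions instead of a BitmapFactory.Options so it stays framework-free and testable:

```kotlin
// Standard power-of-two downsampling factor: keep doubling inSampleSize while
// the half-dimensions divided by it still cover the requested size.
// BitmapFactory rounds inSampleSize down to a power of two anyway, so only
// powers of two are worth computing.
fun calculateInSampleSize(width: Int, height: Int, reqWidth: Int, reqHeight: Int): Int {
    var inSampleSize = 1
    if (height > reqHeight || width > reqWidth) {
        val halfWidth = width / 2
        val halfHeight = height / 2
        while (halfWidth / inSampleSize >= reqWidth &&
            halfHeight / inSampleSize >= reqHeight
        ) {
            inSampleSize *= 2
        }
    }
    return inSampleSize
}
```

For a 2048×1536 source decoded into a 512×384 slot this yields 4, cutting pixel memory by 16×.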
App Widgets are mini views that live on the home screen. Android 12 modernized the widget APIs (responsive layouts, dynamic colors), and Jetpack's Glance library brings a Compose-style way to build widgets that makes them more dynamic and interactive.
// Modern App Widget with Glance (Compose-style) — RECOMMENDED 2025
// implementation("androidx.glance:glance-appwidget:1.0.0")
class WeatherWidget : GlanceAppWidget() {
    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            // currentState is @Composable — read it inside provideContent
            val prefs = currentState<Preferences>()
            val temp = prefs[intPreferencesKey("temperature")] ?: 0
            Column(modifier = GlanceModifier.fillMaxSize().background(Color.White)) {
                Text(text = "${temp}°C", style = TextStyle(fontSize = 24.sp))
                Button(text = "Refresh", onClick = actionRunCallback<RefreshAction>())
            }
        }
    }
}

// Update widget data
class RefreshAction : ActionCallback {
    override suspend fun onAction(context: Context, glanceId: GlanceId, parameters: ActionParameters) {
        val temp = api.getTemperature()
        updateAppWidgetState(context, glanceId) { prefs ->
            prefs[intPreferencesKey("temperature")] = temp
        }
        WeatherWidget().update(context, glanceId)
    }
}

// Declare in manifest
// <receiver android:name=".WeatherWidgetReceiver">
//     <intent-filter>
//         <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
//     </intent-filter>
//     <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget_info" />
// </receiver>
- Glance: Jetpack Compose-style API for widgets — recommended over RemoteViews for new code
- Android 12: responsive layouts, dynamic colors, rounded corners in widgets
- Widget updates limited by system — use WorkManager to fetch data, then update widget
- RemoteViews: old API — still needed for non-Glance cases and lock screen widgets
- Widget size: specify targetCellWidth/Height in AppWidgetProviderInfo for Android 12+
Glance is the 2025 answer to "how do you build widgets?" — it brings Compose-like syntax to home screen widgets. Mentioning it immediately shows you're not stuck on the old RemoteViews XML approach.
With Android fragmentation (minSdk vs latest API), backward compatibility requires runtime checks, AndroidX libraries, and proper API guarding to avoid crashes on older devices.
// Runtime API level check
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) { // API 30+
    // Use API 30+ feature safely
    window.setDecorFitsSystemWindows(false)
} else {
    // Fallback for older devices
}

// @RequiresApi — lint annotation, NOT a runtime check
@RequiresApi(Build.VERSION_CODES.S)
fun useBlurEffect(view: View) {
    view.setRenderEffect(RenderEffect.createBlurEffect(10f, 10f, Shader.TileMode.CLAMP))
}
// Caller must guard with SDK_INT check — @RequiresApi just suppresses lint

// AndroidX — backported APIs with consistent behavior
// ✅ Use ActivityResultContracts instead of onActivityResult (deprecated)
// ✅ Use WindowCompat instead of Window flags directly
// ✅ Use NotificationCompat instead of Notification
// ✅ Use ContextCompat.checkSelfPermission instead of checkSelfPermission

// Example: WindowCompat handles SDK differences internally
WindowCompat.setDecorFitsSystemWindows(window, false) // works API 17+

// BuildConfig for conditional features
if (BuildConfig.DEBUG) {
    StrictMode.enableDefaults()
    Timber.plant(Timber.DebugTree())
}

// @SuppressLint for known safe usages
@SuppressLint("NewApi")
fun guardedApiCall() {
    if (Build.VERSION.SDK_INT >= 33) { /* safe */ }
}
- Always check Build.VERSION.SDK_INT before using APIs above minSdk
- @RequiresApi suppresses lint — it's NOT a runtime guard, must still check SDK_INT
- AndroidX libraries handle SDK differences internally — prefer Compat classes always
- Use ContextCompat, NotificationCompat, WindowCompat for consistent behavior
- Test on your minSdk device/emulator — the most common source of Play Store crashes
A common trap: confusing @RequiresApi with a runtime guard. @RequiresApi only tells lint the method needs a minimum API — you still need the SDK_INT check. Getting this right shows real platform knowledge.
StrictMode is a developer tool that detects accidental disk reads, network calls on the main thread, and leaked resources — and can crash or log your app when these occur during development.
class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        if (BuildConfig.DEBUG) {
            // ThreadPolicy — main thread violations
            StrictMode.setThreadPolicy(
                StrictMode.ThreadPolicy.Builder()
                    .detectDiskReads()        // SharedPreferences on main thread
                    .detectDiskWrites()
                    .detectNetwork()          // network on main thread
                    .detectCustomSlowCalls()  // StrictMode.noteSlowCall()
                    .penaltyLog()             // log to Logcat
                    .penaltyDeath()           // crash the app (strict!)
                    .build()
            )

            // VmPolicy — memory & resource violations
            StrictMode.setVmPolicy(
                StrictMode.VmPolicy.Builder()
                    .detectLeakedSqlLiteObjects()   // unclosed Cursor/DB
                    .detectLeakedClosableObjects()  // unclosed streams
                    .detectActivityLeaks()          // Activity not GC'd
                    .detectFileUriExposure()        // file:// URI exposure
                    .penaltyLog()
                    .build()
            )
        }
    }
}

// Temporarily allow a known-safe disk read, then restore the old policy
fun loadConfig() {
    val token = StrictMode.allowThreadDiskReads() // temporarily allow
    try {
        readConfigFile()
    } finally {
        StrictMode.setThreadPolicy(token) // restore policy
    }
}
- ThreadPolicy: catches disk/network on main thread — most common ANR causes
- VmPolicy: catches leaked cursors, unclosed streams, activity leaks
- Always wrap in BuildConfig.DEBUG — never ship StrictMode to production
- Use penaltyLog() first, switch to penaltyDeath() when cleaning up violations
- allowThreadDiskReads(): temporarily suppress for known-safe operations
Saying "I enable StrictMode with penaltyDeath in debug builds and fix every violation before merging" signals a high-quality engineering culture. Most teams don't bother — standing out is easy here.
WebView embeds a web browser in your app. It's powerful but introduces significant security risks — especially JavaScript interface bridges — that are frequently exploited and asked about in security-focused interviews.
val webView: WebView = findViewById(R.id.webView)

// ✅ Secure WebView setup
webView.settings.apply {
    javaScriptEnabled = true     // only if needed
    allowFileAccess = false      // ✅ disable file access
    allowContentAccess = false   // ✅ disable content provider access
    setSupportZoom(false)
    mixedContentMode = WebSettings.MIXED_CONTENT_NEVER_ALLOW // ✅ no HTTP in HTTPS
}

// ❌ DANGEROUS — exposes entire Java bridge to ALL JS on page
webView.addJavascriptInterface(myObject, "Android")
// Any JS on the page can call Android.sensitiveMethod()
// XSS attack can steal data or call dangerous methods

// ✅ SAFE — validate input before acting on JS bridge calls
class SafeBridge {
    @JavascriptInterface
    fun postMessage(data: String) {
        // Validate data, never trust JS input
        if (isValidData(data)) { handleMessage(data) }
    }
}

// ✅ Only load trusted URLs
webView.webViewClient = object : WebViewClient() {
    override fun shouldOverrideUrlLoading(view: WebView, req: WebResourceRequest): Boolean {
        return if (req.url.host == "trusted.myapp.com") {
            false // allow
        } else {
            true  // block external URLs
        }
    }
}

// ✅ Update WebView — it's separately updatable via Play Store
// Keep it updated — most security patches come through WebView updates
- JavaScript bridge (addJavascriptInterface): exposes Java to ALL JS — XSS can exploit it
- Always validate data received from JS bridge — treat it as untrusted input
- Disable file/content access unless absolutely needed — major attack vectors
- MIXED_CONTENT_NEVER_ALLOW: prevent HTTP content loading in HTTPS pages
- WebView is updated separately via Play Store — encourage users to keep it updated
addJavascriptInterface is a notorious security hole — XSS vulnerabilities on the loaded page can call any @JavascriptInterface method. For fintech/banking interviews, saying "we avoid WebView for sensitive flows and use native UI instead" is the right answer.
App Shortcuts appear when users long-press your app icon on the home screen. They provide quick access to common actions and can significantly improve user engagement.
// 1. STATIC shortcuts — defined in XML, never change
// res/xml/shortcuts.xml
// <shortcuts>
//     <shortcut android:shortcutId="compose"
//         android:shortcutShortLabel="@string/compose_shortcut_short_label"
//         android:icon="@drawable/ic_compose">
//         <intent android:action="android.intent.action.VIEW"
//             android:targetPackage="com.myapp"
//             android:targetClass="com.myapp.ComposeActivity" />
//     </shortcut>
// </shortcuts>

// 2. DYNAMIC shortcuts — created at runtime, user-specific
val shortcutManager = getSystemService(ShortcutManager::class.java)
val shortcut = ShortcutInfo.Builder(this, "recent_chat_rahul")
    .setShortLabel("Rahul")
    .setLongLabel("Chat with Rahul")
    .setIcon(Icon.createWithResource(this, R.drawable.ic_chat))
    .setIntent(Intent(this, ChatActivity::class.java).apply {
        action = Intent.ACTION_VIEW
        putExtra("userId", "rahul_123")
    })
    .build()
shortcutManager?.setDynamicShortcuts(listOf(shortcut))

// 3. PINNED shortcuts — user manually pins to home screen
if (shortcutManager?.isRequestPinShortcutSupported == true) {
    shortcutManager.requestPinShortcut(shortcut, null)
}

// Report usage — system learns and surfaces relevant shortcuts
shortcutManager?.reportShortcutUsed("recent_chat_rahul")
- Static: defined in XML, same for all users — max 4-5 shortcuts
- Dynamic: created in code at runtime — personalized (recent contacts, items)
- Pinned: user explicitly pins to home screen — persists even if app removes dynamic shortcut
- Max 5 shortcuts combined (static + dynamic) — system enforces this
- Call reportShortcutUsed() to help launcher learn user habits
WhatsApp's "recent conversations" on long-press is a perfect example of dynamic shortcuts. Mentioning a real-world example during the interview makes your answer much more memorable.
Android apps normally run in a single process. Running components in separate processes provides isolation, independent lifecycle, and crash containment — but adds complexity and IPC overhead.
// Declare separate process in manifest
// android:process=":remote" → private process (prefixed with package name)
// android:process="com.myapp.sync" → global process (shared between apps)

// <service android:name=".SyncService"
//     android:process=":sync" />   ← runs in separate process

// Implications of multi-process:
// - Each process has its own Application instance onCreate()
// - Singletons NOT shared between processes
// - SharedPreferences NOT safe across processes (use ContentProvider or AIDL)
// - Room database: use multi-process safe database setup

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // Check which process we're in before initializing.
        // Application.getProcessName() is API 28+; on older APIs, match the
        // current pid against ActivityManager.runningAppProcesses.
        val processName = Application.getProcessName()
        if (processName == packageName) {
            initMainProcess() // only init heavy stuff in main process
        }
    }
}

// Use cases for multi-process:
// 1. Crash isolation — buggy library in :remote won't kill main UI
// 2. Memory isolation — push parser/renderer to separate process
// 3. Security isolation — sensitive operations in :secure process
// 4. SyncAdapter — always runs in separate process

// Modern alternative: WorkManager in separate process
// implementation("androidx.work:work-multiprocess:2.8.1")
val config = Configuration.Builder()
    .setDefaultProcessName("$packageName:worker")
    .build()
- Each process gets its own Application.onCreate() — initialize carefully per process
- Singletons are NOT shared across processes — common source of bugs
- SharedPreferences is NOT multi-process safe — use ContentProvider or AIDL
- WorkManager supports multi-process via the work-multiprocess artifact
- Use sparingly — adds complexity, IPC overhead, and debugging difficulty
The classic trap: developer adds android:process to a Service, then wonders why their singleton is null in that Service. Knowing that each process gets its own Application instance is the key insight here.
Android Vitals in Play Console provides real-world performance data from production users. Monitoring and improving these metrics directly impacts your app's Play Store ranking and user retention.
// Android Vitals key metrics (Play Console):
// ANR rate      → target < 0.47% (bad behavior threshold)
// Crash rate    → target < 1.09%
// Slow startup  → cold start > 5s = slow
// Slow frames   → > 50ms frames (jank)
// Frozen frames → > 700ms frames (very bad)
// Permission denial rate, wake lock time, battery usage

// Monitor in code with Firebase Performance
val trace = Firebase.performance.newTrace("checkout_flow")
trace.start()
// ... perform operation
trace.putMetric("items_count", cart.size.toLong()) // record before stop()
trace.stop()

// Custom metrics for ANR prevention
StrictMode.noteSlowCall("slowDatabaseQuery") // marks slow call for StrictMode

// Crash reporting — Firebase Crashlytics
try {
    riskyOperation()
} catch (e: Exception) {
    Firebase.crashlytics.recordException(e) // non-fatal
    Firebase.crashlytics.setCustomKey("userId", currentUserId)
}

// Macrobenchmark for measuring startup in CI
// @Test fun coldStart() {
//     benchmarkRule.measureRepeated(
//         packageName = "com.myapp",
//         metrics = listOf(StartupTimingMetric()),
//         startupMode = StartupMode.COLD
//     ) { pressHome(); startActivityAndWait() }
// }

// Bad behavior thresholds (Play Store flags apps exceeding these):
// ANR rate > 0.47% of daily sessions
// Crash rate > 1.09% of daily sessions
// Excessive wakeups > 10/hour
- Android Vitals: real production data — Play Store ranking affected if thresholds exceeded
- ANR rate target: below 0.47%; Crash rate target: below 1.09%
- Firebase Crashlytics: non-fatal exception tracking with custom keys for context
- Firebase Performance: custom traces for business-critical flows
- Macrobenchmark: automated startup/scroll performance testing in CI pipeline
Saying "I track our ANR rate in Android Vitals and alert when it exceeds 0.3% — below the 0.47% Play Store threshold" shows you care about production quality, not just feature delivery. This mindset is what distinguishes senior from junior developers.
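The alerting rule from the tip above is simple enough to encode. The thresholds below are Play's published bad-behavior limits; the 0.3% internal budget is the example figure from the tip, and the function names are illustrative:

```kotlin
// Play's published bad-behavior thresholds, as fractions of daily sessions
const val ANR_BAD_BEHAVIOR = 0.0047   // 0.47%
const val CRASH_BAD_BEHAVIOR = 0.0109 // 1.09%

// Has the app already crossed the Play Store thresholds?
fun exceedsPlayThresholds(anrRate: Double, crashRate: Double): Boolean =
    anrRate > ANR_BAD_BEHAVIOR || crashRate > CRASH_BAD_BEHAVIOR

// Alert earlier than Play does — e.g. an internal 0.3% ANR budget
fun shouldAlert(anrRate: Double, crashRate: Double, anrBudget: Double = 0.003): Boolean =
    anrRate > anrBudget || crashRate > CRASH_BAD_BEHAVIOR
```

The point of the tighter internal budget is lead time: you want the alert to fire while you can still ship a fix before Play starts demoting the app.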
Comprehensive Kotlin questions covering null safety, generics, delegation, coroutines, and advanced features — asked at Google, Flipkart, Swiggy & top startups in 2025-26.
Kotlin's type system distinguishes between nullable and non-nullable types at compile time, eliminating most NullPointerExceptions before your code even runs.
// Non-nullable — compiler guarantees never null
var name: String = "Rahul"
name = null // ❌ Compile error

// Nullable — can hold null
var name: String? = "Rahul"
name = null // ✅ fine

// ?. Safe call — only calls if not null
val length = name?.length      // returns null if name is null
name?.uppercase()?.trim()      // chain safe calls

// ?: Elvis operator — provide default if null
val length = name?.length ?: 0 // 0 if name is null
val user = findUser() ?: throw IllegalStateException("User not found")

// !! Not-null assertion — throws NPE if null (use sparingly!)
val length = name!!.length // throws KotlinNullPointerException if null

// let — execute block only if not null
name?.let { nonNullName ->
    println("Name is: $nonNullName") // nonNullName is String, not String?
}

// Nullable return type
fun getUser(): User? = db.findUser()

// Smart cast after null check
if (name != null) {
    println(name.length) // name is smart-cast to String
}
- ? — nullable type; value can be null
- ?. — safe call; skips operation if null, returns null
- ?: — Elvis; returns right side if left side is null
- !! — force unwrap; throws KotlinNullPointerException if null — avoid in production
- let — executes lambda only when value is non-null; smart-casts inside block
!! should be a code smell — if you're using it, you either have a design flaw or need a null check. Interviewers specifically look for whether you know when NOT to use !!.
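The operators compose naturally; here is a small runnable example chaining the safe call, takeIf, and the Elvis operator to normalize untrusted input (the function name and default are illustrative):

```kotlin
// Normalize a possibly-null, possibly-blank display name:
// ?. trims only when non-null, takeIf rejects the empty result,
// and ?: supplies the fallback — no !! anywhere.
fun displayName(raw: String?): String =
    raw?.trim()?.takeIf { it.isNotEmpty() } ?: "Guest"
```

A chain like this is usually the answer interviewers want when they ask how to avoid !! in real code.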
Kotlin provides several ways to declare variables depending on mutability, timing of initialization, and compile vs runtime behavior.
// val — immutable reference, set once (like Java final)
val name = "Rahul" // cannot reassign, object content can still change
val list = mutableListOf(1, 2)
list.add(3) // ✅ list reference is val, content is mutable

// var — mutable reference, can be reassigned
var count = 0
count = 5 // ✅

// const val — compile-time constant
// Must be: top-level or in object/companion object, primitive or String, no custom getter
const val BASE_URL = "https://api.myapp.com" // inlined at compile time
const val MAX_RETRY = 3
// val BASE_URL = "..." → evaluated at runtime — and cannot be used in annotations

// lateinit var — defer initialization, non-null type
// Use for: Dependency Injection, @Before setup in tests, View Binding
lateinit var binding: ActivityMainBinding
lateinit var viewModel: UserViewModel

// Check before access to avoid UninitializedPropertyAccessException
if (::binding.isInitialized) { binding.doSomething() }

// by lazy — initialize on first access, thread-safe by default
// Can only be used with val
val heavyObject by lazy {
    println("Initialized!") // only runs once, on first access
    HeavyObject()
}

// lazy modes
val obj1 by lazy(LazyThreadSafetyMode.SYNCHRONIZED) { /* default, thread-safe */ }
val obj2 by lazy(LazyThreadSafetyMode.NONE) { /* single-thread, no lock overhead */ }
- val: immutable reference — like Java final; can still mutate object content
- var: mutable reference — can reassign freely
- const val: compile-time constant — inlined at each usage site; must be primitive/String at top-level or in object
- lateinit var: non-null var initialized later — throws UninitializedPropertyAccessException if accessed before init
- by lazy: val initialized on first access — thread-safe by default, runs initializer only once
const val vs val: const val is inlined by the compiler — important for annotations where you can only use compile-time constants. val is evaluated at runtime. This distinction is a common trick question.
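The "runs the initializer exactly once, on first access" contract of by lazy is easy to verify in plain Kotlin (the Config class and its keys are illustrative):

```kotlin
// Demonstrates the `by lazy` contract: the initializer runs once, on first
// access, and every later access returns the same cached instance.
class Config {
    var initCount = 0
        private set

    val settings: Map<String, String> by lazy {
        initCount++ // runs exactly once, however many times settings is read
        mapOf("baseUrl" to "https://api.example.com")
    }
}
```

Constructing a Config does no work at all; the map is built only when settings is first touched, which is exactly why by lazy suits expensive objects.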
Data classes are designed for holding data. The compiler auto-generates several standard functions based on the primary constructor properties, saving significant boilerplate.
data class User(
    val id: String,
    val name: String,
    val age: Int = 0 // default value
)

// Auto-generated functions:
val u1 = User("1", "Rahul", 25)
val u2 = User("1", "Rahul", 25)

// equals() — compares all primary constructor properties
u1 == u2  // true (structural equality)
u1 === u2 // false (referential equality — different objects)

// hashCode() — consistent with equals()
println(u1.hashCode() == u2.hashCode()) // true

// toString() — readable output
println(u1) // User(id=1, name=Rahul, age=25)

// copy() — shallow copy with optional field overrides
val u3 = u1.copy(name = "Priya") // User(id=1, name=Priya, age=25)

// componentN() — destructuring support
val (id, name, age) = u1
println("$id $name $age") // 1 Rahul 25

// Limitations of data class:
// ❌ Cannot be abstract, open, sealed, or inner
// ❌ Only primary constructor properties included in equals/hashCode/toString
// ❌ copy() is SHALLOW — nested objects share references
data class Team(val members: MutableList<User>)
val team1 = Team(mutableListOf(u1))
val team2 = team1.copy()
team2.members.add(u2)
println(team1.members.size) // 2! — shallow copy, same list reference
- Auto-generates: equals(), hashCode(), toString(), copy(), componentN() functions
- Only primary constructor properties are included in generated functions
- copy() is shallow — nested mutable objects are shared between copies
- Cannot be abstract, open, sealed, or inner classes
- Use @Parcelize with data class for Android IPC — auto-generates Parcelable code too
The shallow copy trap is a classic interview question: team1.copy() shares the same list reference as team1. To deep copy, you must do team1.copy(members = team1.members.toMutableList()).
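The deep-copy fix from the tip, shown end to end (User and Team mirror the snippet above, with age defaulted):

```kotlin
data class User(val id: String, val name: String, val age: Int = 0)
data class Team(val members: MutableList<User>)

fun main() {
    val u1 = User("1", "Rahul")
    val team1 = Team(mutableListOf(u1))

    // Shallow copy — both Team instances share ONE MutableList
    val shallow = team1.copy()
    shallow.members.add(User("2", "Priya"))
    println(team1.members.size)   // 2 — team1 was mutated through shallow!

    // "Deep" copy for this field — copy the list itself
    val team2 = Team(mutableListOf(u1))
    val deep = team2.copy(members = team2.members.toMutableList())
    deep.members.add(User("3", "Amit"))
    println(team2.members.size)   // 1 — team2 is untouched
}
```

Note that `toMutableList()` copies only the list, not the Users inside it — for fully nested deep copies you must copy each level explicitly.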
All three restrict possible types/values, but they differ in flexibility, state-holding capability, and inheritance rules — making each suitable for different scenarios.
// Enum class — fixed set of constants, same type, no state variation enum class Direction { NORTH, SOUTH, EAST, WEST } enum class Status(val code: Int) { SUCCESS(200), NOT_FOUND(404), ERROR(500) } // All instances are singletons — cannot have different state per instance // Sealed class — restricted class hierarchy, each subclass can have different state sealed class UiState<out T> { object Loading : UiState<Nothing>() data class Success<T>(val data: T) : UiState<T>() data class Error(val message: String, val code: Int) : UiState<Nothing>() } // Sealed interface — like sealed class but allows multiple inheritance sealed interface NetworkResult sealed interface CacheResult data class Success(val data: String) : NetworkResult, CacheResult // implements both! data class NetworkError(val code: Int) : NetworkResult data class CacheError(val reason: String) : CacheResult // Exhaustive when — compiler enforces all cases when (val state = getState()) { is UiState.Loading -> showLoader() is UiState.Success -> showData(state.data) // subject captured, smart-cast is UiState.Error -> showError(state.message) // No else needed — compiler knows all cases covered }
- enum: fixed singleton constants — same structure, no per-instance state variation
- sealed class: restricted class hierarchy — subclasses can have different properties and state
- sealed interface: like sealed class but a subclass can implement multiple sealed interfaces
- Kotlin 1.5+: sealed class subclasses can be in different files within the same package and module
- All enable exhaustive when expressions without an else branch
Use enum for fixed constants (Direction, Day), sealed class for state machines with different data per state (UiState.Loading, UiState.Success(data)), sealed interface when you need multiple inheritance of sealed hierarchies.
Scope functions execute a block of code in the context of an object. They differ in how the object is referenced (it vs this) and what they return (object vs lambda result).
// Decision matrix: // Function | Reference | Returns | Use for // let | it | lambda result | null check, transform // run | this | lambda result | compute result, null check // with | this | lambda result | group calls (non-nullable only) // also | it | object itself | side effects (logging, debugging) // apply | this | object itself | object configuration/initialization // let — transform or null-check val upperName = user?.name?.let { it.uppercase() } // only if not null val result = numbers.let { list -> list.filter { it > 0 }.sum() } // apply — configure an object, returns the object val intent = Intent(this, MainActivity::class.java).apply { putExtra("userId", "123") // this = Intent flags = Intent.FLAG_ACTIVITY_SINGLE_TOP } // also — side effects without changing object val user = User("Rahul") .also { log("Created user: ${it.name}") } // it = User, returns User .also { analytics.track("user_created") } // run — compute something with context val isValid = user?.run { name.isNotEmpty() && age > 0 // this = User } ?: false // with — group multiple calls on an object val result = with(textView) { text = "Hello" // this = TextView textSize = 16f setTextColor(Color.RED) visibility = View.VISIBLE }
- apply: object config — returns object, use this inside; perfect for builder pattern
- also: side effects — returns object, use it; for logging, validation
- let: transform — returns lambda result, use it; great for null-safety chains
- run: compute — returns lambda result, use this; like apply but returns result
- with: group calls — not an extension function, use for non-nullable objects
Simple rule: Need the object back? → apply/also. Need a result? → let/run/with. Need null safety? → let or run with ?.. Using 'this' or 'it' is the secondary differentiator.
Higher-order functions take functions as parameters or return functions. Lambdas are anonymous function literals. Together they enable functional programming patterns in Kotlin.
// Higher-order function — takes a function as parameter fun performOperation(x: Int, y: Int, operation: (Int, Int) -> Int): Int { return operation(x, y) } // Lambda expressions val sum = performOperation(3, 4) { a, b -> a + b } // 7 val multiply = performOperation(3, 4) { a, b -> a * b } // 12 // Function type syntax val greet: (String) -> String = { name -> "Hello, $name!" } val double: (Int) -> Int = { it * 2 } // 'it' for single param // Return a function (function factory) fun multiplier(factor: Int): (Int) -> Int = { number -> number * factor } val triple = multiplier(3) println(triple(5)) // 15 — closure captures 'factor' // Function references fun isEven(n: Int) = n % 2 == 0 val evens = listOf(1, 2, 3, 4).filter(::isEven) // [2, 4] val lengths = listOf("a", "bb").map(String::length) // [1, 2] // Under the hood: lambda creates anonymous class implementing Function1<Int,Int> // This means: each lambda = object allocation = GC pressure // Solution: inline functions (see Q8) // Common HOF in Kotlin stdlib val doubled = listOf(1,2,3).map { it * 2 } // [2, 4, 6] val evens2 = listOf(1,2,3).filter { it % 2 == 0 } // [2] val sum2 = listOf(1,2,3).reduce { acc, n -> acc + n } // 6
- Higher-order functions accept or return functions — enables functional programming patterns
- Lambdas compile to anonymous classes implementing Function0, Function1, etc. — object allocations
- Closures: lambdas capture variables from outer scope — be careful with mutable captures
- Function references (::) are more efficient than equivalent lambdas in some cases
- Use inline to avoid lambda object creation overhead in performance-critical paths
Mentioning that lambdas create anonymous Function objects under the hood — and that inline functions eliminate this overhead — immediately shows you understand Kotlin beyond surface-level syntax.
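The "be careful with mutable captures" bullet above deserves a concrete demo — a minimal sketch showing that closures capture the variable itself, not its value at capture time:

```kotlin
fun main() {
    // Closures capture VARIABLES, not snapshots — mutation is visible to every lambda
    var counter = 0
    val increment: () -> Unit = { counter++ }   // captures 'counter' itself
    val report: () -> Int = { counter }         // reads the SAME variable

    increment(); increment(); increment()
    println(report())   // 3 — both lambdas share the one mutable variable

    // To capture a snapshot instead, copy into a val first
    val snapshot = counter
    val frozen: () -> Int = { snapshot }
    counter = 100
    println(frozen())   // 3 — the val capture is unaffected by later mutation
}
```

This is also why Kotlin (unlike Java) lets lambdas mutate captured locals: the compiler wraps the captured var in a Ref object behind the scenes.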
Extension functions let you add new functions to existing classes without modifying them or subclassing. They're syntactic sugar — compiled to static methods with the receiver as the first parameter.
// Extension function fun String.toTitleCase(): String = split(" ").joinToString(" ") { it.lowercase().replaceFirstChar { c -> c.uppercase() } } println("hello world".toTitleCase()) // "Hello World" // Extension property val String.wordCount: Int get() = trim().split(Regex("""\s+""")).size println("Hello World".wordCount) // 2 // Extension on nullable type fun String?.orEmpty(): String = this ?: "" // Compile output — static method with receiver // Java equivalent: // public static String toTitleCase(String $this) { ... } // KEY: Extensions resolved STATICALLY at compile time open class Animal class Dog : Animal() fun Animal.speak() = println("Animal speaks") fun Dog.speak() = println("Dog barks") val dog: Animal = Dog() dog.speak() // "Animal speaks" — resolved by DECLARED type, not runtime type! // Member functions always win over extensions with same name // Extensions cannot access private members // Extensions cannot be overridden polymorphically
- Compiled to static methods — no actual modification to the class
- Resolved statically at compile time based on declared type — NOT runtime type
- Member functions always win over extension functions with the same signature
- Cannot access private or protected members of the receiver class
- Can be defined on nullable types: fun String?.orEmpty()
The static resolution is the key gotcha: if you declare a variable as Animal but it holds a Dog, calling an extension on it uses Animal's extension — not Dog's. Extensions cannot be polymorphic.
Inline functions copy their body and all lambda parameters to the call site at compile time, eliminating lambda object allocation and enabling reified type parameters and non-local returns.
// Without inline — lambda creates an anonymous object every call fun measure(block: () -> Unit) { val start = System.nanoTime() block() println(System.nanoTime() - start) } // With inline — block body copied to call site, no object allocation inline fun measure(block: () -> Unit) { val start = System.nanoTime() block() println(System.nanoTime() - start) } // Enables NON-LOCAL returns (return from outer function) inline fun findFirst(list: List<Int>, predicate: (Int) -> Boolean): Int? { list.forEach { if (predicate(it)) return it } // returns from outer function! return null } // noinline — don't inline a specific lambda parameter // Use when: lambda is stored, passed to another function inline fun doSomething(inlined: () -> Unit, noinline stored: () -> Unit) { val ref = stored // ✅ can store noinline lambda as variable // val ref2 = inlined ❌ cannot store inlined lambda inlined() stored() } // crossinline — allow inlining but prevent non-local returns inline fun runLater(crossinline action: () -> Unit) { Handler(Looper.getMainLooper()).post { action() // crossinline: inlined but no non-local return allowed } } // reified — only possible with inline functions inline fun <reified T> startActivity(context: Context) { context.startActivity(Intent(context, T::class.java)) } startActivity<HomeActivity>(context) // no Class parameter needed!
- inline: copies function body and lambdas to call site — eliminates object allocation
- Enables non-local returns (return from enclosing function inside lambda)
- noinline: exclude specific lambda from inlining — needed when lambda is stored/passed around
- crossinline: inline but disallow non-local returns — for lambdas passed to other contexts
- reified: only works with inline — allows type parameter access at runtime
Don't inline large functions — it copies the entire body to every call site, increasing bytecode size. Inline is best for small utility functions with lambda parameters used frequently.
Generics provide type safety and code reusability. Variance controls how subtyping relates to generic types — a fundamental concept for writing correct generic APIs.
// Generic class — works with any type T class Box<T>(val value: T) val intBox = Box(42) // Box<Int> val strBox = Box("hello") // Box<String> // Invariant (default) — Box<Dog> is NOT a Box<Animal> fun feed(box: Box<Animal>) { } // feed(Box(Dog())) ❌ compile error — invariant! // out (covariant) — Producer, only returns T, never consumes it // Producer<Dog> IS-A Producer<Animal> class Producer<out T>(val value: T) { fun get(): T = value // ✅ can return T // fun set(v: T) { } ❌ cannot accept T as input } val dogs: Producer<Dog> = Producer(Dog()) val animals: Producer<Animal> = dogs // ✅ Dog is Animal, so Producer<Dog> is Producer<Animal> // Real example: List<out E> — you can read, not write // in (contravariant) — Consumer, only accepts T, never produces it // Consumer<Animal> IS-A Consumer<Dog> (reversed!) class Consumer<in T> { fun process(value: T) { } // ✅ can accept T // fun get(): T { } ❌ cannot return T } val animalProcessor: Consumer<Animal> = Consumer() val dogProcessor: Consumer<Dog> = animalProcessor // ✅ Consumer<Animal> is Consumer<Dog> // Real example: Comparator<in T> // Star projection — unknown type, read-only fun printSize(list: List<*>) { // accepts List<Any>, List<String>, etc. println(list.size) // can read, elements are Any? }
- Invariant (default): Box<Dog> is NOT a Box<Animal> — neither can substitute the other
- out (covariant): Producer — can only return T; Box<Dog> IS a Box<Animal>
- in (contravariant): Consumer — can only accept T; Box<Animal> IS a Box<Dog>
- Star projection (*): unknown type — like Java's wildcard, read-only access
- Kotlin's List is List<out E> (covariant); MutableList is invariant
Remember: "out = Producer = return only", "in = Consumer = accept only." PECS from Java — Producer Extends, Consumer Super — maps to Kotlin's out and in respectively.
reified allows access to generic type information at runtime, bypassing Java's type erasure. It's only possible with inline functions because inlining copies the function body to the call site, where the actual type is known.
// Problem: Type erasure — T is unknown at runtime in regular generics fun <T> isType(value: Any): Boolean { return value is T // ❌ ERROR: Cannot check for erased type T } // Solution: reified + inline — T is real type at call site inline fun <reified T> isType(value: Any): Boolean { return value is T // ✅ works! T is concrete at each call site } isType<String>("hello") // true isType<Int>("hello") // false // Real-world uses of reified: // 1. Start Activity without passing Class inline fun <reified T : Activity> Context.startActivity() { startActivity(Intent(this, T::class.java)) } startActivity<HomeActivity>() // clean, no ::class.java needed // 2. Gson/Retrofit type-safe parsing inline fun <reified T> Gson.fromJson(json: String): T = fromJson(json, T::class.java) val user: User = gson.fromJson(jsonString) // no Class parameter! // 3. Find fragment by type inline fun <reified T : Fragment> FragmentManager.findFragment(): T? = fragments.filterIsInstance<T>().firstOrNull() // Why only inline: The compiler inlines the function body at each call site // At each call site the actual type (String, User, etc.) is known at compile time // The compiler replaces T with the actual class reference
- Type erasure: generic type T is erased at runtime — normally can't use T::class or is T
- reified + inline: compiler copies function body to each call site where type is known
- Enables: is T, as T, T::class, T::class.java inside inline functions
- Common uses: startActivity, type-safe JSON parsing, filterIsInstance
- Cannot be used in non-inline functions — type must be concretely known at compilation
Explain WHY reified needs inline: "The function is copied to each call site where the compiler knows the actual type. Without inlining, the function is compiled once — and at that point, T is erased."
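The same mechanism works entirely outside Android — a minimal self-contained sketch (typeNameOf is an illustrative helper, not a stdlib function):

```kotlin
// Without reified, 'value is T' would not compile — T is erased at runtime.
// With inline + reified, the body is stamped out at each call site with
// the concrete type substituted for T.
inline fun <reified T> typeNameOf(value: Any): String =
    if (value is T) "matches ${T::class.simpleName}"
    else "not a ${T::class.simpleName}"

fun main() {
    println(typeNameOf<String>("hello"))  // matches String
    println(typeNameOf<Int>("hello"))     // not a Int
    println(typeNameOf<Int>(42))          // matches Int
}
```

Decompiling the call sites would show three specialized bodies with String and Int baked in — there is no single compiled `typeNameOf` function to erase.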
Kotlin's by keyword provides first-class support for the delegation pattern — both for class implementation and property get/set logic — eliminating massive amounts of boilerplate.
// CLASS DELEGATION — delegate interface implementation to another object interface Printer { fun print(text: String) } class ConsolePrinter : Printer { override fun print(text: String) = println(text) } // Without delegation — boilerplate class LoggingPrinter(private val delegate: Printer) : Printer { override fun print(text: String) { log(text); delegate.print(text) } } // With delegation — by keyword delegates to printer object class LoggingPrinter(printer: Printer) : Printer by printer { // All Printer methods auto-delegated to printer // Override only what you need to change } // PROPERTY DELEGATION — delegate get/set to another object // lazy — initialize on first access val db by lazy { Room.databaseBuilder(...).build() } // observable — callback on value change var name by Delegates.observable("initial") { property, old, new -> println("$old → $new") } name = "Rahul" // prints: "initial → Rahul" // vetoable — can reject value change var age by Delegates.vetoable(0) { _, _, new -> new >= 0 } // reject negative age = 25 // ✅ accepted age = -1 // ❌ rejected, age stays 25 // Custom property delegate class SharedPrefsDelegate(private val key: String, private val default: String) { operator fun getValue(thisRef: Any?, property: KProperty<*>): String = prefs.getString(key, default) ?: default operator fun setValue(thisRef: Any?, property: KProperty<*>, value: String) = prefs.edit().putString(key, value).apply() } var userName by SharedPrefsDelegate("user_name", "")
- Class delegation:
byforwards interface implementation to another object — composition over inheritance - lazy: thread-safe by default, initializer runs only once on first access
- observable: callback fires after every assignment — good for UI state changes
- vetoable: callback fires before assignment, can reject the new value
- Custom delegates: implement getValue/setValue operators — used for SharedPrefs, SavedStateHandle
Android uses delegation heavily: by viewModels(), by activityViewModels(), by navArgs() — all are Kotlin property delegates. Mentioning these Android-specific examples shows you connect language features to real usage.
Kotlin allows overloading a predefined set of operators by providing implementations of corresponding functions with the operator modifier. This makes custom classes feel like built-in types.
data class Vector(val x: Double, val y: Double) { // Arithmetic operators operator fun plus(other: Vector) = Vector(x + other.x, y + other.y) // v1 + v2 operator fun minus(other: Vector) = Vector(x - other.x, y - other.y) // v1 - v2 operator fun times(scalar: Double) = Vector(x * scalar, y * scalar) // v1 * 2.0 operator fun unaryMinus() = Vector(-x, -y) // -v1 // Comparison operator fun compareTo(other: Vector): Int { val mag1 = Math.sqrt(x * x + y * y) val mag2 = Math.sqrt(other.x * other.x + other.y * other.y) return mag1.compareTo(mag2) } // v1 > v2, v1 < v2 // Index operator operator fun get(index: Int) = when (index) { 0 -> x; 1 -> y; else -> throw IndexOutOfBoundsException() } // v[0] → x, v[1] → y } val v1 = Vector(1.0, 2.0) val v2 = Vector(3.0, 4.0) println(v1 + v2) // Vector(x=4.0, y=6.0) println(-v1) // Vector(x=-1.0, y=-2.0) println(v1 > v2) // false println(v1[0]) // 1.0 // invoke operator — makes object callable like a function class Adder(val base: Int) { operator fun invoke(x: Int) = base + x } val add5 = Adder(5) println(add5(3)) // 8 — object called like a function!
- Arithmetic: plus, minus, times, div, rem, unaryMinus, unaryPlus
- Comparison: compareTo (enables >, <, >=, <=)
- Indexing: get, set (enables obj[i])
- invoke: makes instances callable as functions
- in: contains (enables the
inoperator)
The invoke operator is the most surprising one — it makes instances callable like functions. This is how Kotlin lambda syntax works internally: lambdas are objects with an invoke operator.
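That claim is easy to verify yourself — a lambda can be called through its invoke operator explicitly, and a class with an invoke operator behaves exactly like a lambda (Greeter is an illustrative class):

```kotlin
// A class with 'operator fun invoke' — instances become callable
class Greeter(val greeting: String) {
    operator fun invoke(name: String) = "$greeting, $name!"
}

fun main() {
    // A lambda IS an object with an invoke operator — these two calls are identical
    val double: (Int) -> Int = { it * 2 }
    println(double(21))          // 42 — syntactic sugar for...
    println(double.invoke(21))   // 42 — ...the explicit operator call

    val greet = Greeter("Hello")
    println(greet("Rahul"))      // Hello, Rahul! — instance called like a function
}
```

Under the hood, `(Int) -> Int` is the interface Function1<Int, Int>, whose single method is `invoke` — which is why the sugar lines up so neatly.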
Kotlin separates read-only and mutable collections at the type system level. Sequences provide lazy evaluation — critical for large datasets and chained operations.
// Immutable collection interfaces — read-only view val list: List<Int> = listOf(1, 2, 3) // cannot add/remove val set: Set<Int> = setOf(1, 2, 3) // unique, unordered val map: Map<String, Int> = mapOf("a" to 1) // read-only key-value // Mutable versions val mList: MutableList<Int> = mutableListOf(1, 2, 3) mList.add(4); mList.remove(1) // SEQUENCE — lazy evaluation, processes one element at a time // EAGER (List): processes entire collection at each step val eagerResult = (1..1_000_000) .filter { it % 2 == 0 } // processes ALL 1M → creates new list of 500K .map { it * 2 } // processes ALL 500K → creates new list .first() // gets first element // LAZY (Sequence): processes elements one at a time until done val lazyResult = (1..1_000_000).asSequence() .filter { it % 2 == 0 } // lazy — doesn't execute yet .map { it * 2 } // lazy — doesn't execute yet .first() // executes: checks 1, 2 → returns 4. Stops! // Common collection operations val names = listOf("Alice", "Bob", "Charlie") names.groupBy { it.first() } // {A=[Alice], B=[Bob], C=[Charlie]} names.partition { it.length > 3 } // Pair([Alice, Charlie], [Bob]) names.associate { it to it.length } // {Alice=5, Bob=3, Charlie=7} names.zip(listOf(1, 2, 3)) // [(Alice,1), (Bob,2), (Charlie,3)] names.flatMap { it.toList() } // [A,l,i,c,e,B,o,b,...]
- List/Set/Map: read-only interfaces — MutableList/MutableSet/MutableMap for mutation
- Read-only ≠ immutable — the underlying implementation can still be mutable
- Sequences: lazy — each element flows through the entire pipeline before the next one
- Use Sequence for: large collections, multiple chained operations, early termination (first, take)
- Use List for: small collections, single operations — Sequence has overhead for small sizes
Rule of thumb: if your pipeline has 3+ operations OR the collection has 1000+ elements, use asSequence(). For small collections, the overhead of Sequence object creation outweighs the benefit.
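You can make the eager-vs-lazy difference measurable by counting how many times the pipeline stages actually run — a small sketch:

```kotlin
fun main() {
    // EAGER: every stage processes the entire collection
    var eagerOps = 0
    (1..1000)
        .filter { eagerOps++; it % 2 == 0 }   // runs for all 1000 elements
        .map { eagerOps++; it * 2 }           // runs for all 500 survivors
        .first()
    println(eagerOps)   // 1500 operations to get ONE result

    // LAZY: each element flows through the whole pipeline, stopping at first()
    var lazyOps = 0
    (1..1000).asSequence()
        .filter { lazyOps++; it % 2 == 0 }    // element 1 rejected, element 2 passes
        .map { lazyOps++; it * 2 }            // only element 2 reaches map
        .first()
    println(lazyOps)    // 3 operations: filter(1), filter(2), map(2) — done
}
```

A counter like this is also a handy interview whiteboard trick: it proves the evaluation-order claim instead of just asserting it.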
when is Kotlin's powerful replacement for Java's switch statement. It's an expression (returns a value), supports any type (not just ints/strings), handles ranges, types, and conditions.
// Basic when — replaces switch when (x) { 1 -> println("one") 2, 3 -> println("two or three") // multiple values in 4..10 -> println("four to ten") // range check else -> println("other") } // when as an expression (returns value) val description = when (score) { in 90..100 -> "Excellent" in 70..89 -> "Good" in 50..69 -> "Average" else -> "Below average" } // when with type checking (smart cast) fun describe(obj: Any): String = when (obj) { is String -> "String of length ${obj.length}" // smart cast to String is Int -> "Int: $obj" is List<*> -> "List of ${obj.size} elements" else -> "Unknown" } // when without argument — replaces if-else chain val result = when { x < 0 -> "negative" x == 0 -> "zero" x > 0 -> "positive" else -> "unreachable" } // Exhaustive when with sealed class (no else needed!) when (val state = viewModel.state.value) { is UiState.Loading -> showLoader() is UiState.Success -> showData(state.data) // state smart-cast is UiState.Error -> showError(state.message) }
- when is an expression — can return a value and be used in assignments
- Supports: value matching, range checks (in), type checks (is), conditions without argument
- Smart cast: when (obj) is String → obj is automatically String in that branch
- With sealed classes: no else needed — compiler verifies exhaustiveness
- Key differences from Java switch: expression (not statement), no fallthrough, any type
Using when as an expression (assigning its result to val) is idiomatic Kotlin — it's concise and immutable. Mention that unlike Java switch, when has no fallthrough behavior — each branch is independent.
Destructuring declarations let you unpack an object into multiple variables simultaneously. They work via componentN() convention functions auto-generated by data classes.
// data class auto-generates component1(), component2(), etc. data class User(val name: String, val age: Int, val city: String) val user = User("Rahul", 25, "Delhi") // Destructuring — desugars to component calls val (name, age, city) = user // Equivalent to: // val name = user.component1() → "Rahul" // val age = user.component2() → 25 // val city = user.component3() → "Delhi" // Skip a component with _ val (nameOnly, _, cityOnly) = user // skip age // Destructuring in for loops val users = listOf(User("Alice", 30, "Mumbai"), User("Bob", 25, "Pune")) for ((name, age) in users) { println("$name is $age years old") } // Destructuring Map entries val map = mapOf("one" to 1, "two" to 2) for ((key, value) in map) { println("$key = $value") } // Destructuring lambda parameters val pairs = listOf(1 to "one", 2 to "two") pairs.forEach { (num, word) -> println("$num = $word") } // Custom componentN functions class Point(val x: Int, val y: Int) { operator fun component1() = x operator fun component2() = y } val (x, y) = Point(3, 4) // works! // Pair and Triple support destructuring val (first, second) = Pair("Hello", 42)
- Destructuring desugars to component1(), component2(), etc. calls
- Data classes auto-generate componentN() for all primary constructor properties in order
- Use _ to skip components you don't need
- Works in: variable declarations, for loops, lambda parameters
- Any class can support destructuring by declaring operator fun componentN()
Property ORDER matters for destructuring — components are numbered by position in the primary constructor, not by name. Reordering data class properties can silently break destructuring code.
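The positional trap compiles without a single warning when the swapped types differ — a minimal sketch (User trimmed to two properties for brevity):

```kotlin
data class User(val name: String, val age: Int)

fun main() {
    val user = User("Rahul", 25)

    // Destructuring is POSITIONAL — the variable NAMES are ignored entirely
    val (age, name) = user        // looks right, is silently wrong!
    println(age)    // Rahul — component1() is 'name', whatever you call the variable
    println(name)   // 25

    // When order could bite, access properties by name instead
    println("${user.name} is ${user.age}")   // Rahul is 25
}
```

Here the mismatch is at least visible in the types (age ends up a String). If both properties had the same type, the swap would be completely silent.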
Kotlin's object keyword serves three distinct purposes — each solving a different problem in a cleaner way than Java's static methods or anonymous inner classes.
// 1. Object declaration — Singleton object AppConfig { val baseUrl = "https://api.myapp.com" var debugMode = false fun getHeaders() = mapOf("Accept" to "application/json") } AppConfig.baseUrl // ✅ thread-safe singleton, lazy initialized AppConfig.debugMode = true // 2. Companion object — static members in a class class User(val name: String) { companion object { // can have a name: companion object Factory const val MAX_NAME_LENGTH = 50 fun create(name: String): User? { return if (name.isNotBlank()) User(name) else null } } } User.create("Rahul") // called on class, not instance User.MAX_NAME_LENGTH // 50 // @JvmStatic — makes companion method a true Java static companion object { @JvmStatic fun newInstance() = MyFragment() } // Java: MyFragment.newInstance() ← needs @JvmStatic // 3. Object expression — anonymous object (Java's anonymous inner class) val clickListener = object : View.OnClickListener { override fun onClick(v: View) { handleClick() } } // Anonymous object implementing multiple interfaces val obj = object : Runnable, Closeable { override fun run() { } override fun close() { } } // Object with no supertype val counter = object { var count = 0 } counter.count++
- object declaration: singleton — thread-safe, lazy initialized, no constructor
- companion object: class-level members — Java's static equivalent; one per class
- object expression: anonymous object — like Java's anonymous inner class; for one-off implementations
- @JvmStatic: makes companion method a true static in bytecode — needed for Java interop
- Companion object can implement interfaces — used in factory pattern and type class pattern
Object declarations are lazy and thread-safe by default — Kotlin handles the double-checked locking for you. In Java, implementing a thread-safe singleton requires careful synchronized code. Mention this advantage.
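The lazy initialization is observable with a side effect in the init block — a small sketch (Analytics is an illustrative singleton):

```kotlin
object Analytics {
    init {
        // Runs ONCE, on first access — JVM class-loading guarantees make this
        // thread-safe with no synchronized block or double-checked locking
        println("Analytics initialized")
    }
    var events = 0
    fun track(name: String) { events++ }   // 'name' kept for a realistic signature
}

fun main() {
    println("app started")        // printed BEFORE the singleton initializes
    Analytics.track("launch")     // first touch → "Analytics initialized" prints here
    Analytics.track("click")
    println(Analytics.events)     // 2 — the same instance everywhere
}
```

Running this prints "app started" first, proving the object is not constructed at program start but on first use.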
Kotlin has no checked exceptions — all exceptions are unchecked. This eliminates the verbose try-catch boilerplate Java forces, but requires disciplined error handling design.
// No checked exceptions — no throws declaration needed fun readFile(path: String): String { return File(path).readText() // IOException — no @throws needed } // try-catch-finally — same as Java try { val result = riskyOperation() } catch (e: IOException) { log("IO error: ${e.message}") } catch (e: Exception) { log("General error") } finally { cleanup() } // try as an expression (returns value) val number = try { parseInt(input) } catch (e: NumberFormatException) { 0 // default on parse error } // throw as an expression val name = user?.name ?: throw IllegalStateException("User has no name") // use() — auto-closeable, replaces try-with-resources File("data.txt").bufferedReader().use { reader -> reader.readLines().forEach { println(it) } } // reader.close() called automatically, even on exception // runCatching — functional exception handling val result = runCatching { riskyOperation() } .onFailure { e -> log(e) } // inspect the Result first... .getOrDefault("fallback") // ...then unwrap with a default // @Throws — for Java interoperability @Throws(IOException::class) fun readData(): String { return File("data.txt").readText() }
- No checked exceptions — no forced try-catch; more concise but requires discipline
- try is an expression — can return a value from both try and catch blocks
- throw is an expression — can be used in Elvis operator and assignments
- use() extension — Kotlin's replacement for Java's try-with-resources
- runCatching — functional alternative returning Result<T> — good for clean error handling
Prefer Result/runCatching over try-catch in business logic layers — it's more composable and forces callers to explicitly handle the error case. Reserve try-catch for infrastructure code at boundaries.
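A sketch of that boundary pattern: runCatching wraps the risky edge, and callers handle both branches explicitly via fold (parsePort is an illustrative function, not from any library):

```kotlin
// Risky edge: parsing + validation, wrapped in Result instead of throwing
fun parsePort(raw: String): Result<Int> = runCatching {
    val port = raw.trim().toInt()                       // NumberFormatException on junk
    require(port in 1..65535) { "port out of range: $port" }  // IllegalArgumentException
    port
}

fun main() {
    // fold forces the caller to handle BOTH outcomes — no forgotten catch
    val message = parsePort("8080").fold(
        onSuccess = { "listening on $it" },
        onFailure = { "config error: ${it.message}" }
    )
    println(message)                      // listening on 8080

    val bad = parsePort("99999")
    println(bad.isFailure)                // true — range check failed
    println(bad.getOrDefault(8080))       // 8080 — fallback, no try-catch at call site
}
```

Compare the call sites with a throwing version: nothing here can escape unhandled, because the failure lives in the type, not in a runtime exception.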
Contracts let you provide the compiler with information about how a function behaves — enabling smart casts and definite assignment analysis in cases the compiler couldn't figure out alone.
// Problem: compiler doesn't know that after isValid(), user is non-null fun isValid(user: User?): Boolean = user != null && user.name.isNotEmpty() val user: User? = getUser() if (isValid(user)) { user.doSomething() // ❌ Error: user is still User? } // Contract — tell the compiler: "if I return true, user is not null" @OptIn(ExperimentalContracts::class) fun isValid(user: User?): Boolean { contract { returns(true) implies (user != null) } return user != null && user.name.isNotEmpty() } if (isValid(user)) { user.doSomething() // ✅ compiler knows user is User now } // callsInPlace — tells compiler lambda runs exactly once @OptIn(ExperimentalContracts::class) inline fun runOnce(block: () -> Unit) { contract { callsInPlace(block, InvocationKind.EXACTLY_ONCE) } block() } // Without contract: val must be initialized before use val result: String runOnce { result = "hello" } // ✅ compiler knows this runs exactly once println(result) // ✅ compiler knows result is initialized // Stdlib uses contracts internally: // also, apply, let, run, with — all have callsInPlace contracts val x: Int run { x = 42 } // ✅ run has callsInPlace(EXACTLY_ONCE) contract println(x) // ✅ compiler knows x is definitely initialized
- Contracts provide the compiler with behavioral guarantees about function execution
- returns(true) implies: enables smart cast based on return value
- callsInPlace(EXACTLY_ONCE): enables definite assignment in lambdas
- Stdlib scope functions (let, run, apply, also, with) all use callsInPlace internally
- Still an experimental API (requires @OptIn(ExperimentalContracts::class)), but stable in practice — it has barely changed since Kotlin 1.3
Contracts are a lesser-known advanced feature — mentioning them immediately distinguishes you from 99% of candidates. The callsInPlace contract is why you can initialize a val inside a run { } block.
typealias provides an alternative name for an existing type. It improves code readability for complex types without creating new types or affecting performance.
// Simplify complex types typealias UserId = String typealias UserMap = Map<UserId, User> typealias Callback<T> = (Result<T>) -> Unit // Function types typealias OnClick = (View) -> Unit typealias Predicate<T> = (T) -> Boolean fun setClickListener(listener: OnClick) { } // cleaner than (View) -> Unit // Android use cases typealias ViewClickListener = View.OnClickListener typealias NetworkCallback = (Result<User>) -> Unit // Avoid class name conflicts import com.myapp.Date as AppDate typealias JavaDate = java.util.Date // IMPORTANT: typealias is NOT a new type typealias UserId = String val userId: UserId = "123" val name: String = userId // ✅ interchangeable — same type // For true new types (prevents mixing), use inline/value classes: @JvmInline value class UserId(val value: String) // TRUE new type, not interchangeable with String
- typealias is purely a compile-time renaming — no runtime overhead, no new type created
- Useful for: long generic types, function types, avoiding name conflicts
- NOT a new type — UserId and String are interchangeable; type safety is not improved
- For true type safety (no accidental mixing), use value/inline classes instead
- Can have generic type parameters: typealias Predicate<T> = (T) -> Boolean
typealias vs value class is a nuanced distinction. typealias = alias (no safety), value class = new type (compile-time safety, no runtime overhead). Mentioning value/inline classes as the alternative shows deeper knowledge.
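The distinction in one runnable sketch (RawId/SafeId and the delete functions are illustrative names):

```kotlin
typealias RawId = String                 // alias only — RawId IS String to the compiler

@JvmInline
value class SafeId(val value: String)    // distinct type — cannot be mixed up

fun deleteByRaw(id: RawId) { /* ... */ }
fun deleteBySafe(id: SafeId) { /* ... */ }

fun main() {
    val userName = "Rahul"
    deleteByRaw(userName)          // ✅ compiles — ANY String fits a RawId (no safety!)
    deleteBySafe(SafeId("u1"))     // ✅ must wrap explicitly
    // deleteBySafe(userName)      // ❌ would not compile — type mismatch caught
    println(SafeId("u1").value)    // u1 — unwrap when you need the underlying String
}
```

So the interview one-liner holds: typealias improves readability, value class improves safety — and only the latter stops `deleteBySafe(userName)` from compiling.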
Value classes wrap a single value with a distinct type, providing type safety without runtime overhead. The compiler inlines the underlying value at most call sites, so no object is created.
// @JvmInline value class — wraps a single value @JvmInline value class UserId(val value: String) @JvmInline value class Email(val value: String) { init { require(value.contains("@")) { "Invalid email" } } val domain: String get() = value.substringAfter("@") } // Type safety — prevents mixing String with UserId/Email fun sendEmail(userId: UserId, email: Email) { } val id = UserId("user_123") val mail = Email("rahul@example.com") sendEmail(id, mail) // ✅ // sendEmail(mail, id) ❌ compile error — types don't match! // sendEmail("user_123", "rahul@example.com") ❌ must wrap in value class // No runtime overhead — compiled to underlying type // val id: UserId = UserId("123") → compiled as val id: String = "123" // Wrapper object only created when boxing is required (generics, nullable) // Stable since Kotlin 1.5 — use in production // Restrictions: // ❌ Cannot have init block modifying the value (can validate) // ❌ Cannot have backing fields other than the primary property // ❌ Cannot inherit from classes (can implement interfaces) // Real Android use case — avoid primitive type confusion @JvmInline value class Dp(val value: Float) @JvmInline value class Sp(val value: Float) fun setTextSize(size: Sp) { } fun setMargin(margin: Dp) { } // Can't accidentally pass Sp where Dp is expected!
- Value classes add type safety without creating objects at runtime (inlined by compiler)
- Prevents "stringly typed" APIs — UserId and Email are distinct types even though both wrap String
- Box only when needed: generics, nullable value class — then wrapper object is created
- Can have computed properties, methods, and implement interfaces
- Jetpack Compose uses value classes extensively for Dp, Sp, Color, etc.
Value classes are how Jetpack Compose defines Color, Dp, Sp — they're zero-overhead wrappers with type safety. This is a great example that shows you understand the library you use every day at a deeper level.
Kotlin has four visibility modifiers. The default is public (unlike Java's package-private). The internal modifier is module-scoped — a key concept for library development and multi-module Android apps.
// Kotlin visibility modifiers (top-level declarations)
public    // visible everywhere (default if not specified)
internal  // visible within the same MODULE only
private   // visible in the same file only (for top-level)

// For class members
public     // visible everywhere
protected  // visible in class and subclasses (not top-level)
internal   // visible in same module
private    // visible in the same class only

// INTERNAL — key difference from Java
// Java: package-private = visible within same PACKAGE
// Kotlin: internal = visible within same MODULE (Gradle module)

// In a multi-module Android app:
// :feature:home module
internal class HomeRepositoryImpl : HomeRepository   // can't leak to :app
internal fun parseHomeData(json: String): HomeData { }

// :core:network module
class ApiClient {
    internal val httpClient = OkHttpClient()   // hidden from :feature modules
    fun getUser(id: String): User { }          // public API
}

// private class member
class UserViewModel {
    private val _state = MutableStateFlow(UiState.Loading)
    val state: StateFlow<UiState> = _state.asStateFlow()   // expose read-only
}

// Smart: internal constructor with public factory
class Database internal constructor() {
    companion object {
        fun create(): Database = Database()   // factory controls creation
    }
}
- Default modifier is public — unlike Java's package-private default
- internal: module-scoped — key for library development and multi-module apps
- Java has no module-level visibility; Java's closest is package-private (same package)
- Use internal to hide implementation details between Gradle modules
- The private val _state + public val state pattern exposes an immutable (read-only) view of mutable state
internal is Kotlin's most underused modifier. In multi-module apps, marking repository implementations as internal prevents other modules from depending on concrete implementations — enforcing Clean Architecture boundaries.
Kotlin DSLs (Domain Specific Languages) create readable, type-safe APIs that look like declarative config code. They use lambdas with receivers, operator overloading, and extension functions.
// DSL for building HTML
class HtmlBuilder {
    private val children = StringBuilder()

    fun div(block: HtmlBuilder.() -> Unit) {   // lambda with receiver
        children.append("<div>")
        HtmlBuilder().apply(block).also { children.append(it.build()) }
        children.append("</div>")
    }

    fun text(value: String) { children.append(value) }

    fun build() = children.toString()
}

fun html(block: HtmlBuilder.() -> Unit): String =
    HtmlBuilder().apply(block).build()

// Usage — reads like declarative config
val page = html {
    div {
        text("Hello World")
    }
}

// Android DSL examples you use every day:

// Gradle Kotlin DSL
dependencies {
    implementation("androidx.core:core-ktx:1.12.0")
    testImplementation("junit:junit:4.13")
}

// NavGraph DSL
NavHost(navController, startDestination = "home") {
    composable("home") { HomeScreen() }
    composable("detail/{id}") { DetailScreen() }
}

// Jetpack Compose IS a DSL!
@Composable
fun MyScreen() {
    Column {   // lambda with receiver: ColumnScope
        Text("Hello")
        Button(onClick = {}) { Text("Click") }
    }
}

// Key features enabling Kotlin DSLs:
// 1. Lambdas with receivers: Type.() -> Unit
// 2. Extension functions: add methods to existing types
// 3. Operator overloading: custom operators
// 4. infix functions: natural language syntax
// 5. Named/default arguments: cleaner call sites
- Lambda with receiver Type.() -> Unit: inside the lambda, this is Type — enables the DSL builder pattern
- Extension functions: add DSL methods to any type without subclassing
- Real DSLs you already use: Gradle build files, Jetpack Compose, NavGraph, Ktor routes
- Jetpack Compose is fundamentally a DSL — understanding this unlocks deeper understanding of Compose
- @DslMarker annotation: prevents accidentally calling outer DSL scope inside inner — avoids confusing DSLs
The key insight: "Lambda with receiver is the foundation of Kotlin DSLs." Then say: "Jetpack Compose, Gradle Kotlin DSL, and Ktor routing are all built on this single language feature." That connection alone is impressive.
Kotlin's type system is more expressive than Java's — it has explicit types for the top and bottom of the type hierarchy, and every type is an object (no primitives at the language level).
// Any — top type, supertype of ALL Kotlin types
// Like Java's Object but with better semantics
fun process(value: Any) { }    // accepts any non-null type
fun process(value: Any?) { }   // accepts anything including null

// Any vs Object:
// Any doesn't have: wait(), notify(), notifyAll(), getClass()
// Any has: equals(), hashCode(), toString() (same as Object)

// Unit — return type for functions that return no meaningful value
// Like Java's void BUT Unit is a real type with a single value: Unit
fun logMessage(msg: String): Unit { println(msg) }   // : Unit is optional

// Unit can be used as a generic argument (void cannot)
val callback: () -> Unit = { println("done") }
fun runCallback(cb: () -> Unit) { cb() }

// Nothing — bottom type, subtype of ALL types
// Functions that NEVER return normally
fun fail(message: String): Nothing {
    throw IllegalStateException(message)
}
fun loop(): Nothing { while (true) { } }

// Nothing enables smart analysis:
val name: String = user?.name ?: throw Exception("null")
// throw returns Nothing (subtype of String) → assignment type-checks!

// Nothing? — the only value is null
val nothing: Nothing? = null

// Kotlin primitive types — look like objects, compiled to JVM primitives
val x: Int = 5       // compiled to JVM primitive: int x = 5
val y: Int? = null   // compiled to boxed: Integer y = null
val list: List<Int>  // compiled to List<Integer> (boxing required for generics)
- Any: top type — every type is a subtype; like Java Object but cleaner
- Unit: the "no meaningful value" type — a real singleton object; enables use in generics unlike void
- Nothing: bottom type — return type of functions that never complete; subtype of everything
- Nothing enables type-safe throw in expressions (Elvis operator, when branches)
- Kotlin Int compiles to JVM int (primitive); Int? compiles to Integer (boxed) — no explicit boxing needed
Nothing is the most misunderstood type in Kotlin. The key insight: "Nothing is a subtype of every type — that's why throw expression is valid wherever any type is expected, including String assignments."
Kotlin was chosen as the preferred Android language for well-founded reasons — it solves Java's biggest pain points while adding powerful modern language features.
// 1. NULL SAFETY — eliminates NPE at compile time
// Java: NullPointerException is the most common runtime crash
val name: String? = getUserName()   // explicit nullable
val length = name?.length ?: 0      // safe, no NPE

// 2. CONCISENESS — data class vs Java POJO
// Java: ~50 lines with constructor, getters, equals, hashCode, toString
data class User(val id: String, val name: String)   // 1 line!

// 3. COROUTINES — sequential async code
// Java: nested callbacks (callback hell)
// Kotlin: sequential, readable async code
suspend fun loadData(): Data {
    val user = fetchUser()   // async, looks sync
    val posts = fetchPosts(user)
    return Data(user, posts)
}

// 4. EXTENSION FUNCTIONS — no more Utils classes
fun Context.showToast(msg: String) =
    Toast.makeText(this, msg, Toast.LENGTH_SHORT).show()
// Java: ToastUtils.showToast(context, message);

// 5. SMART CASTS — no explicit casting
if (animal is Dog) {
    animal.bark()   // ✅ auto-cast to Dog, no (Dog) cast needed
}

// 6. NO CHECKED EXCEPTIONS — cleaner code
// Java: throws IOException, InterruptedException everywhere
fun readFile() = File("data.txt").readText()   // no throws needed

// 7. DEFAULT PARAMETERS — no overloading pyramid
fun connect(host: String, port: Int = 80, timeout: Int = 5000) { }
connect("example.com")                   // uses defaults
connect("example.com", timeout = 3000)   // named args

// 8. 100% JAVA INTEROPERABILITY
// Can use all Java libraries, call Java from Kotlin and vice versa
- Null safety: compiler-enforced — eliminates the #1 cause of Android crashes
- Conciseness: data classes, extension functions, scope functions — less boilerplate
- Coroutines: built into language — sequential async code, no callback hell
- Smart casts: no explicit casts needed after type checks
- Full Java interop: use all existing Java libraries, no migration cliff
Google announced Kotlin-first in 2019 — all new Jetpack APIs are Kotlin-first. Mentioning that Jetpack Compose, Kotlin Coroutines, and KMM are fundamentally Kotlin features (no Java equivalent) is the strongest argument.
Kotlin Serialization is a compiler plugin-based serialization library from JetBrains. It's the recommended choice for new Android projects in 2025 — especially with Retrofit and KMM.
// Setup
// plugins { id("org.jetbrains.kotlin.plugin.serialization") }
// implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.6.3")

// @Serializable — annotation-based, compile-time code generation
@Serializable
data class User(
    val id: Int,
    val name: String,
    @SerialName("email_address")   // custom JSON key name
    val email: String,
    val role: String = "user"      // optional with default
)

// Encode to JSON
val user = User(1, "Rahul", "rahul@example.com")
val json = Json.encodeToString(user)
// {"id":1,"name":"Rahul","email_address":"rahul@example.com"}
// note: defaulted properties like role are omitted unless encodeDefaults = true

// Decode from JSON
val decoded: User = Json.decodeFromString(json)

// Configure JSON behavior
val lenientJson = Json {
    ignoreUnknownKeys = true    // don't crash on extra fields
    isLenient = true            // allow unquoted strings
    prettyPrint = true          // formatted output
    coerceInputValues = true    // use defaults for nulls
}

// Use with Retrofit
val retrofit = Retrofit.Builder()
    .addConverterFactory(Json.asConverterFactory("application/json".toMediaType()))
    .build()

// Sealed class serialization
@Serializable
sealed class Shape {
    @Serializable data class Circle(val radius: Double) : Shape()
    @Serializable data class Rectangle(val w: Double, val h: Double) : Shape()
}
- Compile-time code generation — no reflection at runtime → faster and R8-friendly
- Null safety aware — won't parse null into non-nullable field without explicit handling
- Works with KMM — shared across Android, iOS, and JS (Gson/Moshi are JVM-only)
- Supports: sealed classes, generics, nested serialization, custom serializers
- Gson: reflection-based (slow, R8 issues); Moshi: code-gen optional; Kotlin Serialization: always compile-time
The KMM argument is the strongest: Gson and Moshi only work on JVM. Kotlin Serialization works on Android, iOS, JS, and server — making it the only choice for shared KMM code. This shows strategic thinking beyond just Android.
Kotlin has two equality operators that serve very different purposes — a common interview trap for developers coming from Java.
// == Structural equality — calls equals()
val a = String("hello".toCharArray())
val b = String("hello".toCharArray())
println(a == b)    // true — same content
println(a === b)   // false — different objects

// === Referential equality — same object in memory
val c = a
println(a === c)   // true — same reference

// data class uses structural equality for ==
data class User(val name: String)
val u1 = User("Rahul")
val u2 = User("Rahul")
println(u1 == u2)    // true — data class equals()
println(u1 === u2)   // false — different objects

// null-safe: a == null compiles to a?.equals(null) ?: (null === null)
val x: String? = null
println(x == null)   // true — safe, no NPE
- == calls equals() — structural/content equality
- === checks reference — same object in memory
- data classes override equals() to compare all primary constructor properties
- == is null-safe — never throws NPE unlike Java's .equals()
In Java, == compares references and .equals() compares content. Kotlin makes the common case the default: == compares content and === compares references. The null-safe behavior of == is a bonus that prevents NPEs.
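Both operators can be exercised in a short runnable check; the `Person` data class here is just for illustration:

```kotlin
data class Person(val name: String)

fun main() {
    val a = String("hi".toCharArray())   // two distinct String objects...
    val b = String("hi".toCharArray())   // ...with the same content
    check(a == b)                        // structural equality — calls equals()
    check(a !== b)                       // referential equality — different objects

    val p1 = Person("Rahul")
    val p2 = Person("Rahul")
    check(p1 == p2 && p1 !== p2)         // data class equals() compares properties

    val x: String? = null
    check(x == null)                     // null-safe — never throws an NPE
    println("ok")
}
```

Constructing the strings through `String(CharArray)` guarantees two separate objects, so the `==` vs `===` difference is observable rather than hidden by string-literal interning.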
Kotlin provides powerful string features that eliminate messy concatenation and make working with structured text much cleaner than Java.
// String templates — embed expressions with $
val name = "Rahul"
val age = 25
println("Name: $name, Age: $age")
println("Next year: ${age + 1}")    // expressions in {}
println("Length: ${name.length}")

// Multiline strings — trimIndent() removes common indent
val json = """
    {
        "name": "$name",
        "age": $age
    }
""".trimIndent()

// trimMargin() — custom margin prefix
val query = """
    |SELECT *
    |FROM users
    |WHERE age > $age
""".trimMargin()

// Raw strings — no escape sequences needed
val regex = """\d{3}-\d{4}"""             // no \\ needed
val path = """C:\Users\Rahul\Documents"""

// String functions
"hello world".replaceFirstChar { it.uppercase() }   // "Hello world" — capitalize() is deprecated
"  hello  ".trim()     // "hello"
"hello".repeat(3)      // "hellohellohello"
"hello".reversed()     // "olleh"
"a,b,c".split(",")     // [a, b, c]
- String templates: $ for variables, ${} for expressions — no concatenation needed
- Triple-quoted strings: multiline, no escape sequences needed
- trimIndent(): removes common leading whitespace from multiline strings
- Raw strings: great for regex patterns — no double-backslash needed
Raw strings with trimIndent() are heavily used in Android for JSON test fixtures, SQL queries, and XML templates. Showing you use them in tests demonstrates clean code habits.
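The template, trimIndent(), and raw-string behaviors can be confirmed with a few assertions:

```kotlin
fun main() {
    val name = "Rahul"
    check("Hello, $name!" == "Hello, Rahul!")   // template with a variable
    check("Next: ${2 + 1}" == "Next: 3")        // template with an expression

    val block = """
        line1
        line2
    """.trimIndent()
    check(block == "line1\nline2")              // common indent stripped, blank edges removed

    val phone = Regex("""\d{3}-\d{4}""")        // raw string — no \\ escaping
    check(phone.matches("555-1234"))
    check(!phone.matches("55-1234"))
    println("ok")
}
```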
Both hold ordered elements but differ in mutability model, performance, and API richness. Knowing when to use each is a common interview question.
// Array — fixed size, mutable content, maps to a JVM array
val array = arrayOf(1, 2, 3)
array[0] = 10        // ✅ can mutate elements
// array size is fixed — cannot add/remove
println(array.size)  // 3

// Primitive arrays — no boxing, better performance
val ints = intArrayOf(1, 2, 3)    // int[] in JVM
val longs = longArrayOf(1L, 2L)   // long[] in JVM

// List — read-only interface, rich API
val list = listOf(1, 2, 3)
// list[0] = 10   // ❌ read-only

// MutableList — dynamic size, full mutation
val mList = mutableListOf(1, 2, 3)
mList.add(4)      // ✅ grow
mList.remove(1)   // ✅ shrink (removes the element 1, by value)
mList[0] = 10     // ✅ mutate

// Key differences:
// Array: fixed size, mutable elements, maps to JVM array[]
// List: read-only interface, rich functional API (map/filter/etc)
// MutableList: dynamic size, mutable, still has full collection API

// Convert between them
val fromArray = array.toList()
val fromList = list.toTypedArray()
- Array: fixed size, mutable elements, compiles to JVM array — use for performance-critical code
- List: read-only interface — use for most business logic
- MutableList: dynamic sizing — use when you need to add/remove elements
- IntArray/LongArray etc: no boxing overhead — prefer over Array<Int> for primitives
Array<Int> compiles to Integer[] (boxed). IntArray compiles to int[] (primitive). For large numerical collections, intArrayOf() gives significantly better performance due to no boxing overhead.
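A small runnable sketch of the size and mutability differences. It also shows a related gotcha the summary doesn't mention: `==` on arrays is referential, so content comparison needs `contentEquals`:

```kotlin
fun main() {
    val array = arrayOf(1, 2, 3)
    array[0] = 10                  // elements are mutable...
    check(array.size == 3)         // ...but the size is fixed

    val list = listOf(1, 2, 3)     // read-only — no set/add/remove
    val mutable = mutableListOf(1, 2, 3)
    mutable.add(4)
    check(mutable == listOf(1, 2, 3, 4))   // lists compare by content

    // Arrays do NOT compare by content with ==
    val ints = intArrayOf(1, 2, 3)
    check(ints.contentEquals(intArrayOf(1, 2, 3)))

    check(array.toList() == listOf(10, 2, 3))   // conversion back to List
    println("ok")
}
```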
Kotlin is 100% interoperable with Java but some Kotlin features need annotations to work cleanly from Java code. These annotations are commonly asked in Android interviews.
// @JvmStatic — expose a companion function as a true Java static
class MyFragment : Fragment() {
    companion object {
        @JvmStatic
        fun newInstance(id: String) = MyFragment().apply {
            arguments = bundleOf("id" to id)
        }
    }
}
// Java: MyFragment.newInstance("123") ← needs @JvmStatic
// Without it: MyFragment.Companion.newInstance("123")

// @JvmField — expose property as Java field (no getter/setter)
class Config {
    @JvmField var timeout = 5000
}
// Java: config.timeout = 3000 ← direct field access
// Without it: config.setTimeout(3000) ← through getter/setter

// @JvmOverloads — generate Java overloads for default params
@JvmOverloads
fun connect(host: String, port: Int = 80, timeout: Int = 5000) { }
// Java gets: connect(host), connect(host, port), connect(host, port, timeout)

// @Throws — declare checked exceptions for Java callers
@Throws(IOException::class)
fun readFile(): String = File("data.txt").readText()

// Calling Java from Kotlin — seamless
val list = ArrayList<String>()   // Java class
list.add("hello")                // Java method
- @JvmStatic: makes companion object method a true static — needed for Java callers
- @JvmField: exposes property as field — skips getter/setter generation
- @JvmOverloads: generates overloaded methods for each default parameter combination
- @Throws: declares checked exceptions — Java callers need to handle them
- Kotlin can call Java seamlessly — no annotations needed in that direction
@JvmOverloads on custom Views is essential — View constructors have 3 overloads. Without it, your custom view crashes when inflated from XML. This is a real-world gotcha that shows practical experience.
Named and default arguments dramatically reduce function overloads and make call sites self-documenting. They're one of the most practical Kotlin features for Android development.
// Default arguments — reduce the overload pyramid
fun createUser(
    name: String,
    age: Int = 0,
    role: String = "user",
    active: Boolean = true
): User = User(name, age, role, active)

// Call with any combination
createUser("Rahul")                      // all defaults
createUser("Rahul", age = 25)            // skip role, active
createUser("Rahul", role = "admin")      // skip age
createUser("Rahul", 25, "admin", false)  // all explicit

// Named arguments — self-documenting call sites
showDialog(
    title = "Confirm",
    message = "Are you sure?",
    positiveText = "Yes",
    negativeText = "Cancel",
    cancelable = false
)

// Named args can be reordered
createUser(role = "admin", name = "Rahul", age = 25)

// Java equivalent would need 4 overloads:
// createUser(String name)
// createUser(String name, int age)
// createUser(String name, int age, String role)
// createUser(String name, int age, String role, boolean active)

// Builder pattern replacement in Kotlin
data class Config(
    val baseUrl: String,
    val timeout: Int = 5000,
    val retries: Int = 3,
    val debug: Boolean = false
)
- Default arguments eliminate overload pyramids — one function replaces many
- Named arguments make call sites self-documenting — no need to remember parameter order
- Named arguments can be reordered freely
- Replaces the Builder pattern in most cases — data class with defaults is cleaner
- Use @JvmOverloads for Java interop when default arguments are used
Named arguments replace the Builder pattern in Kotlin. Instead of UserBuilder().name("Rahul").age(25).build(), just write User(name="Rahul", age=25). Less code, same readability, zero boilerplate.
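The Builder-replacement claim is directly checkable — a data class with defaults plus named arguments (and `copy()`) covers the same ground. The `User` shape here is illustrative:

```kotlin
data class User(
    val name: String,
    val age: Int = 0,
    val role: String = "user",
)

fun main() {
    check(User("Rahul") == User(name = "Rahul", age = 0, role = "user"))   // all defaults
    check(User("Rahul", role = "admin").role == "admin")                   // skip age
    check(User(role = "admin", name = "R", age = 1).name == "R")           // reordered

    // copy() with named arguments is the idiomatic "modify one field" builder step
    val promoted = User("Rahul").copy(role = "admin")
    check(promoted == User("Rahul", 0, "admin"))
    println("ok")
}
```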
Kotlin's compiler infers types at compile time, eliminating redundant type declarations while maintaining full static type safety. Understanding its limits is important for senior developers.
// Local variable inference
val name = "Rahul"           // inferred: String
val age = 25                 // inferred: Int
val list = listOf(1, 2, 3)   // inferred: List<Int>
val map = mapOf("a" to 1)    // inferred: Map<String, Int>

// Function return type inference
fun double(x: Int) = x * 2             // inferred return: Int
fun greet(name: String) = "Hi $name"   // inferred return: String

// Generic type inference
fun <T> identity(x: T) = x
val s = identity("hello")   // T inferred as String
val i = identity(42)        // T inferred as Int

// Smart cast — type narrowed after check
fun process(x: Any) {
    if (x is String) {
        println(x.length)   // x smart-cast to String
    }
    val len = (x as? String)?.length ?: 0
}

// Where inference doesn't work — must be explicit
val empty = emptyList<String>()   // must specify the element type
val result: Number = 42           // explicit wider type

// Declare return types explicitly for public API functions
fun getUser(): User = fetchFromDb()   // explicit for clarity
- Type inference happens at compile time — no runtime cost, full type safety
- Works for local variables, function return types, generic type parameters
- Smart cast: compiler narrows type after is/!is check — no explicit cast needed
- Inference fails for empty collections and when wider type is needed — be explicit
- Public API functions: always declare return type explicitly for readability
Always explicitly declare return types for public functions even though Kotlin can infer them. It makes API contracts clear and prevents accidental return type changes from breaking callers.
Kotlin's collection API is vastly richer than Java's. Mastering these functions eliminates most imperative for-loop code and is heavily tested in interviews.
val users = listOf(
    User("Alice", 30, "admin"),
    User("Bob", 25, "user"),
    User("Carol", 35, "admin")
)

// map — transform each element
val names = users.map { it.name }   // [Alice, Bob, Carol]

// filter — keep matching elements
val admins = users.filter { it.role == "admin" }

// reduce — combine all into one (no initial value)
val totalAge = users.map { it.age }.reduce { sum, age -> sum + age }   // 90

// fold — like reduce but with an initial value
val summary = users.fold("") { acc, user -> "$acc ${user.name}" }

// groupBy — group into a Map by key
val byRole = users.groupBy { it.role }
// {admin=[Alice, Carol], user=[Bob]}

// partition — split into Pair(matching, notMatching)
val (admins2, others) = users.partition { it.role == "admin" }

// associate — build a Map
val nameToAge = users.associate { it.name to it.age }
// {Alice=30, Bob=25, Carol=35}

// flatMap — map then flatten
val allChars = users.flatMap { it.name.toList() }

// any / all / none / count
users.any { it.age > 30 }            // true
users.all { it.age > 20 }            // true
users.none { it.age > 50 }           // true
users.count { it.role == "admin" }   // 2
- map/filter: the most used — transform and select elements
- reduce vs fold: reduce has no initial value and throws UnsupportedOperationException on an empty list; fold is safer
- groupBy: returns Map<K, List<V>> — groups elements by key
- partition: returns Pair(matches, nonMatches) — split in one pass
- associate: build a Map from collection — like a map that returns Pair
Prefer fold over reduce when the collection might be empty — reduce throws UnsupportedOperationException on an empty collection. fold with an initial value is always safe.
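The empty-collection behavior of reduce versus fold is easy to demonstrate, along with partition's one-pass split:

```kotlin
fun main() {
    val empty = emptyList<Int>()

    // reduce has no initial value — it throws on an empty collection
    val reduced = runCatching { empty.reduce { acc, n -> acc + n } }
    check(reduced.isFailure)

    // fold starts from an initial value — always safe
    check(empty.fold(0) { acc, n -> acc + n } == 0)
    check(listOf(1, 2, 3).fold(10) { acc, n -> acc + n } == 16)

    // partition: one pass, two lists
    val (even, odd) = (1..6).partition { it % 2 == 0 }
    check(even == listOf(2, 4, 6) && odd == listOf(1, 3, 5))
    println("ok")
}
```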
Result<T> is Kotlin's functional error handling type — it wraps either a success value or a failure exception, making error handling explicit and composable without try-catch everywhere.
// Result — success or failure wrapped in a type
fun divide(a: Int, b: Int): Result<Int> = runCatching {
    require(b != 0) { "Division by zero" }
    a / b
}

// Handle the result
divide(10, 2)
    .onSuccess { println("Result: $it") }           // 5
    .onFailure { println("Error: ${it.message}") }

// Get value or default
val value = divide(10, 0).getOrDefault(0)
val value2 = divide(10, 0).getOrElse { -1 }
val value3 = divide(10, 2).getOrThrow()   // throws if failure

// Chain operations with map
val result = divide(10, 2)
    .map { it * 2 }          // 10
    .map { it.toString() }   // "10"

// ViewModel pattern with Result
viewModelScope.launch {
    _state.value = UiState.Loading
    runCatching { api.fetchUser() }
        .onSuccess { _state.value = UiState.Success(it) }
        .onFailure { _state.value = UiState.Error(it.message ?: "Unknown error") }
}

// Compared to try-catch — cleaner, chainable, explicit
- Result<T>: encapsulates success or failure — eliminates try-catch at every call site
- runCatching: wraps any code in a Result — catches all exceptions
- onSuccess/onFailure: handle both cases functionally
- map: transform success value — failure passes through unchanged
- Best for: repository layer, network calls, parsing — keeps business logic clean
Use Result in the repository layer and sealed UiState in the ViewModel. The repository returns Result<User>, ViewModel maps it to UiState.Success or UiState.Error. This clean separation is what senior interviews look for.
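The divide example runs as-is outside Android, which makes it a good interview whiteboard check — including the fact that map only touches the success path:

```kotlin
fun divide(a: Int, b: Int): Result<Int> = runCatching {
    require(b != 0) { "Division by zero" }   // require throws IllegalArgumentException
    a / b
}

fun main() {
    check(divide(10, 2).getOrThrow() == 5)
    check(divide(10, 0).getOrDefault(0) == 0)
    check(divide(10, 0).getOrElse { -1 } == -1)

    // map transforms only the success path
    check(divide(10, 2).map { it * 2 }.getOrNull() == 10)

    // a failure passes through map untouched
    val stillFailed = divide(1, 0).map { it * 2 }
    check(stillFailed.exceptionOrNull() is IllegalArgumentException)
    println("ok")
}
```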
Annotations add metadata to code that can be read at compile time or runtime. They're heavily used in Android (Room, Retrofit, Hilt, Compose) — understanding them helps you write your own libraries.
// Built-in Kotlin annotations
@Deprecated("Use newFunction() instead", ReplaceWith("newFunction()"))
fun oldFunction() { }

@Suppress("UNCHECKED_CAST")
fun cast(x: Any) = x as String

// @JvmStatic, @JvmOverloads — Java interop annotations

// Custom annotation
@Target(AnnotationTarget.FUNCTION, AnnotationTarget.CLASS)
@Retention(AnnotationRetention.RUNTIME)
annotation class RequiresAuth(
    val role: String = "user"
)

@RequiresAuth(role = "admin")
fun deleteUser(id: String) { }

// Read annotation at runtime via reflection
val annotation = ::deleteUser.findAnnotation<RequiresAuth>()
println(annotation?.role)   // "admin"

// Android annotations you use daily
@Composable      // Compose — marks composable functions
@HiltViewModel   // Hilt — inject ViewModel
@Entity          // Room — marks a database table
@GET("/users")   // Retrofit — HTTP GET
@Parcelize       // generates Parcelable
- @Target: where annotation can be applied (function, class, property, etc.)
- @Retention: SOURCE (compile only), BINARY (bytecode), RUNTIME (reflection)
- Annotations with RUNTIME retention can be read via Kotlin reflection
- Most Android framework annotations use compile-time processing (kapt/ksp)
- KSP (Kotlin Symbol Processing) is the modern replacement for kapt — faster
Mention KSP vs kapt: KSP is Google's modern annotation processor for Kotlin — used by Room, Hilt, Compose. It's 2x faster than kapt. Knowing this shows you track the ecosystem.
Reflection allows inspecting and manipulating code structure at runtime. It's powerful but slow — use sparingly and prefer compile-time alternatives like KSP when possible.
// KClass — Kotlin's runtime class representation
val kClass = String::class        // KClass<String>
val jClass = String::class.java   // Class<String> for Java APIs

// Inspect class members
data class User(val name: String, val age: Int)

User::class.memberProperties.forEach { prop ->
    println("${prop.name}: ${prop.returnType}")
}
// name: String, age: Int

// Access a property value by name
val user = User("Rahul", 25)
val nameProp = User::class.memberProperties.first { it.name == "name" }
println(nameProp.get(user))   // "Rahul"

// Callable references
val nameRef: KProperty1<User, String> = User::name
val funcRef: (String) -> Int = String::length

// Instantiate via the primary constructor
val instance = User::class.primaryConstructor!!.call("Bob", 30)

// ⚠️ Reflection is SLOW — avoid in hot paths
// Use KSP/kapt for compile-time code generation instead
// Room, Hilt, Moshi use KSP — zero runtime reflection cost
- KClass: runtime class representation — use ::class to get it
- memberProperties: inspect all properties at runtime
- Callable references (::): lightweight, no reflection overhead — prefer these
- Reflection is slow: avoid in RecyclerView, custom Views, or any repeated code
- KSP generates code at compile time — all the power, none of the runtime cost
If asked "does Room use reflection?" — the answer is no in modern setups. Room uses KSP to generate DAO implementations at compile time. Knowing this difference (compile-time codegen vs runtime reflection) shows architectural depth.
Infix functions allow calling a function without a dot or parentheses, making code read more like natural language. They're used throughout Kotlin's standard library and test frameworks.
// infix function — single parameter, called without dot/parentheses
infix fun Int.times(str: String) = str.repeat(this)
println(3 times "hello")   // "hellohellohello"

// Standard library infix functions
val pair = "key" to "value"         // to is infix!
val map = mapOf("a" to 1, "b" to 2)
val range = 1 until 10              // until is infix
val stepped = 1..10 step 2          // step is infix
val inRange = 5 in 1..10            // in is an operator, not an infix function

// Testing with infix (Kotest, MockK)
// Kotest assertions
5 shouldBe 5
"hello" shouldContain "ell"
users shouldHaveSize 3

// Custom DSL-style infix
infix fun String.shouldEqual(expected: String) {
    assertEquals(expected, this)
}
"Rahul".uppercase() shouldEqual "RAHUL"

// Rules for infix functions:
// Must be a member or extension function
// Must have exactly ONE parameter
// Parameter cannot have a default value or vararg
- Infix: single-parameter member/extension function called without dot or parentheses
- Standard library infix: to, until, step, downTo, and, or, xor, shl, shr
- Used heavily in test frameworks: shouldBe, shouldContain, shouldEqual
- Makes DSLs read like natural language
- Constraints: exactly one parameter, no vararg, no default value
"key" to "value" — the to infix function is what you use to create Pairs for maps. It's infix, not a keyword! Knowing this shows you understand the standard library at a deeper level.
vararg allows passing a variable number of arguments of the same type. The spread operator (*) is used to pass an array where vararg is expected.
// vararg — variable number of arguments
fun sum(vararg numbers: Int): Int = numbers.sum()

sum(1, 2, 3)         // 6
sum(1, 2, 3, 4, 5)   // 15
sum()                // 0 — empty is valid

// vararg is an array inside the function
fun printAll(vararg items: String) {
    items.forEach { println(it) }   // items is Array<String>
}

// Spread operator * — pass an array as vararg
val nums = intArrayOf(1, 2, 3)
sum(*nums)   // spread the array into the vararg

val strs = arrayOf("a", "b")
printAll(*strs)             // spread
printAll("x", *strs, "y")   // mix spread with individual args

// Common use: listOf, mapOf, println
val list = listOf("a", "b", "c")   // vararg under the hood

// Only ONE vararg per function, best placed last
fun log(tag: String, vararg messages: String) {
    messages.forEach { Log.d(tag, it) }
}
- vararg: accepts zero or more arguments of the same type
- Inside the function, vararg parameter is treated as an Array
- Spread operator (*): unpacks an array into individual vararg arguments
- Can mix spread with individual arguments: printAll("x", *strs, "y")
- Only one vararg per function; best placed as the last parameter
The spread operator (*) is often forgotten. If you have an Array and want to pass it to a vararg function, you MUST use *array. Without the spread, you'd be passing the array as a single element.
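A runnable recap of spread behavior, including the mixed form:

```kotlin
fun sum(vararg numbers: Int): Int = numbers.sum()

fun main() {
    check(sum(1, 2, 3) == 6)
    check(sum() == 0)                  // zero arguments is valid

    val nums = intArrayOf(1, 2, 3)
    check(sum(*nums) == 6)             // spread: unpack the array into the vararg
    check(sum(10, *nums, 20) == 36)    // mix spread with individual args
    // sum(nums)   // ❌ would not compile — an IntArray is not an Int
    println("ok")
}
```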
Nothing is Kotlin's bottom type — a subtype of every type. It represents computations that never complete normally (throws or loops forever), enabling the type system to remain sound.
// Nothing — function never returns normally
fun fail(msg: String): Nothing = throw IllegalStateException(msg)
fun infiniteLoop(): Nothing { while (true) { } }

// Nothing is a subtype of every type — so this compiles:
val name: String = user?.name ?: fail("Name required")
// fail() returns Nothing, which is a subtype of String

val result: Int = if (condition) 42 else fail("bad")
// if branch is Int, else is Nothing — result is Int

// throw is also an expression of type Nothing
val x: String = throw Exception()   // compiles! throw is Nothing

// Nothing? — the nullable Nothing type
// Only possible value is null
val nothing: Nothing? = null

// Used in sealed classes for impossible branches
sealed class Result<out T> {
    object Loading : Result<Nothing>()   // no data type needed
    data class Success<T>(val data: T) : Result<T>()
    data class Error(val e: Exception) : Result<Nothing>()
}
- Nothing: bottom type — subtype of every type; represents non-terminating computation
- Functions returning Nothing always throw or loop forever — compiler knows code after them is unreachable
- throw expression has type Nothing — that's why it works on the right side of Elvis
- Nothing? has exactly one value: null — useful for nullable Nothing in sealed classes
- Result.Loading and Result.Error use Nothing because they carry no data of type T
Connect Nothing to sealed classes: Loading and Error states use Result<Nothing> because they don't carry a T value. This shows you understand type theory and how the Kotlin type system enables elegant sealed hierarchies.
Kotlin provides multiple patterns for controlling when objects are created — choosing correctly impacts startup time, memory usage, and thread safety.
// EAGER — created immediately at declaration
val config = AppConfig()           // created NOW
val list = mutableListOf<String>() // created NOW

// LAZY — created on first access
val heavyObject by lazy {
    println("Creating...")
    HeavyDatabase()
}
// HeavyDatabase() not called yet
heavyObject.query() // NOW it's created (once only)
heavyObject.query() // uses the cached instance

// lazy is thread-safe by default (SYNCHRONIZED mode)
val safeObj by lazy(LazyThreadSafetyMode.SYNCHRONIZED) { heavyInit() }

// NONE mode — single thread only, no lock overhead
val fastObj by lazy(LazyThreadSafetyMode.NONE) { init() }

// lateinit — deferred initialization for non-null vars
lateinit var binding: ActivityMainBinding
// binding used BEFORE initialization → UninitializedPropertyAccessException
if (::binding.isInitialized) { binding.doSomething() }

// Android patterns:
// by viewModels() — lazy ViewModel access
// by lazy { Room.databaseBuilder(...).build() } — singleton DB
// lateinit var — DI-injected fields, view binding
- Eager: simple, predictable — use when object is always needed and cheap to create
- lazy: thread-safe by default, created once on first access — use for expensive objects
- NONE mode: no synchronization overhead — use when you know access is single-threaded
- lateinit: non-null var with deferred init — use for DI fields and view binding
- isInitialized: check before accessing lateinit to avoid crash
The Room database singleton is the classic lazy example: val db by lazy { Room.databaseBuilder(...).build() }. It's expensive to create, needed app-wide, thread-safe by default. This is a real pattern from every Android app.
Kotlin classes are final by default — the opposite of Java. This is a deliberate design choice promoting composition over inheritance and preventing fragile base class problems.
// Kotlin classes are FINAL by default
class Animal // cannot be subclassed!

// open — allows subclassing and overriding
open class Vehicle(val brand: String) {
    open fun start() { println("Starting $brand") }
    fun stop() { println("Stopping") } // NOT open — cannot override
}

class Car(brand: String) : Vehicle(brand) {
    override fun start() { println("Vroom!") } // ✅
    // override fun stop() { }                 // ❌ stop() is not open
}

// abstract — must be subclassed, cannot be instantiated
abstract class Shape {
    abstract fun area(): Double       // must implement
    open fun describe() = "Shape"     // can override
    fun printArea() = println(area()) // NOT open — cannot override
}

class Circle(val r: Double) : Shape() {
    override fun area() = Math.PI * r * r
}

// interface — multiple implementations, default methods
interface Drawable {
    fun draw()                       // abstract
    fun hide() { println("hidden") } // default implementation
}

// A class can implement multiple interfaces but extend only one class
- All Kotlin classes are final by default — prevents accidental subclassing
- open: allows subclassing/overriding — be explicit about extensibility
- abstract: cannot instantiate, forces subclass to implement abstract members
- override keyword: required explicitly — no accidental overrides like Java
- Prefer interfaces over abstract classes — multiple interfaces, single inheritance
Final by default is a deliberate Kotlin design choice based on Effective Java's "design and document for inheritance or else prohibit it." Mentioning this shows you understand the WHY behind Kotlin's design decisions.
map transforms each element to one output. flatMap transforms each element to a collection, then flattens all collections into a single list — a very commonly tested distinction.
// map — one input, one output
val words = listOf("hello", "world")
val lengths = words.map { it.length } // [5, 5] — one Int per String

// map of collections — nested result
val chars = words.map { it.toList() }
// [[h,e,l,l,o], [w,o,r,l,d]] — List<List<Char>>

// flatMap — one input, many outputs, then flatten
val flat = words.flatMap { it.toList() }
// [h,e,l,l,o,w,o,r,l,d] — List<Char>, flattened

// Real-world: user orders
data class User(val name: String, val orders: List<String>)
val users = listOf(
    User("Alice", listOf("shoes", "bag")),
    User("Bob", listOf("phone"))
)

// map — nested lists
val allOrdersNested = users.map { it.orders } // [[shoes, bag], [phone]]

// flatMap — flat list of all orders
val allOrders = users.flatMap { it.orders }   // [shoes, bag, phone]

// flatten() — flatten an already-nested collection
val nested = listOf(listOf(1, 2), listOf(3, 4))
val flat2 = nested.flatten()                  // [1,2,3,4]
- map: 1-to-1 transformation — one element in, one element out
- flatMap: 1-to-many transformation — one element in, collection out, then all flattened
- flatMap = map + flatten in a single operation
- Use flatMap when each element maps to a list and you want a single flat result
- Equivalent in Flow: flatMapLatest/flatMapConcat/flatMapMerge for async streams
The classic interview question: "get all orders from all users as a single list." The wrong answer is map (gives nested lists). The right answer is flatMap. Knowing the difference shows you think in transformations, not loops.
SAM conversion allows using a lambda wherever a Java functional interface (an interface with one abstract method) is expected, eliminating anonymous class boilerplate when working with Java APIs.
// Java functional interface (SAM)
// interface Runnable { void run(); }

// Without SAM — verbose anonymous class
val r = object : Runnable {
    override fun run() { println("Running") }
}

// With SAM conversion — clean lambda
val r2 = Runnable { println("Running") }
Thread(r2).start()

// Most common SAM conversions in Android
val handler = Handler(Looper.getMainLooper())
handler.post { updateUi() }               // Runnable SAM
view.setOnClickListener { handleClick() } // View.OnClickListener SAM
executor.submit { doWork() }              // Callable/Runnable SAM

// Kotlin interfaces — NOT SAM by default
interface KotlinCallback {
    fun onResult(result: String)
}
// KotlinCallback { result -> ... } ❌ doesn't work automatically

// fun interface — Kotlin SAM interface
fun interface KotlinCallback {
    fun onResult(result: String)
}
val cb = KotlinCallback { result -> println(result) } // ✅ now works
- SAM: Java interface with single abstract method — can be replaced with lambda
- Kotlin automatically supports SAM for Java interfaces — no annotation needed
- Kotlin interfaces: not SAM by default — need fun interface keyword (Kotlin 1.4+)
- fun interface: explicit SAM interface in Kotlin — enables lambda syntax
- Common SAM: Runnable, Callable, View.OnClickListener, Comparator
fun interface was added in Kotlin 1.4 to enable SAM for Kotlin interfaces. If you're writing a callback interface that should accept lambdas, always use fun interface — it's the modern idiomatic way.
Understanding how suspend functions work under the hood — specifically the continuation-passing style transformation — is a key differentiator for senior Android developers.
// suspend function — can pause execution without blocking the thread
suspend fun fetchUser(): User {
    return withContext(Dispatchers.IO) { api.getUser() }
}

// Can only be called from a coroutine or another suspend function
viewModelScope.launch {
    val user = fetchUser() // suspend point — thread released
    updateUi(user)
}

// How the compiler transforms suspend functions:
// Kotlin rewrites them in CPS (Continuation-Passing Style)
// suspend fun fetchUser(): User
// becomes internally:
// fun fetchUser(continuation: Continuation<User>): Any?

// Continuation — the "rest of the computation" after suspension
// Like a callback, but written as sequential code

// State machine — the compiler generates a state per suspension point
// suspend fun example() {
//     val a = fetch1()     // State 0: call fetch1, suspend
//     val b = fetch2(a)    // State 1: call fetch2, suspend
//     return combine(a, b) // State 2: return result
// }

// suspendCoroutine — create custom suspend functions
suspend fun awaitCallback(): String = suspendCoroutine { cont ->
    asyncApi.fetch(
        onSuccess = { cont.resume(it) },
        onError = { cont.resumeWithException(it) }
    )
}
- suspend: marks function as suspendable — compiler transforms to state machine
- Under the hood: each suspend function takes a Continuation parameter
- State machine: each suspension point becomes a state — no threads blocked
- suspendCoroutine: bridge callback-based code to suspend functions
- suspend functions are just regular functions at JVM level — zero JVM magic
When asked "how do coroutines work?" go beyond "lightweight threads." Say: "The compiler transforms suspend functions into state machines with Continuation callbacks. No JVM magic — it's pure compile-time transformation." This is what separates senior answers from junior ones.
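The "a Continuation is just a callback" point can be demonstrated with the standard library alone (no kotlinx.coroutines dependency). This is a sketch: `suspendCoroutine` and `startCoroutine` are real `kotlin.coroutines` stdlib APIs, while `answer` is a made-up example function.

```kotlin
import kotlin.coroutines.*

// Bridge a (here: synchronous) callback into a suspend function
suspend fun answer(): Int = suspendCoroutine { cont ->
    cont.resume(42) // a real API would call this from its async callback
}

fun main() {
    var result = 0
    // Drive the suspend function by hand with an explicit Continuation —
    // this is essentially what builders like launch do for you
    ::answer.startCoroutine(object : Continuation<Int> {
        override val context: CoroutineContext = EmptyCoroutineContext
        override fun resumeWith(r: Result<Int>) { result = r.getOrThrow() }
    })
    println(result) // resumed synchronously above, so prints 42
}
```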
Ranges and progressions are first-class in Kotlin, enabling clean iteration, validation, and conditional checks that would require verbose code in Java.
// IntRange — .. operator
val range = 1..10      // 1 to 10 inclusive
val until = 1 until 10 // 1 to 9 (excludes 10)
val down = 10 downTo 1 // 10,9,8,...,1

// step — custom step size
1..10 step 2       // 1,3,5,7,9
10 downTo 0 step 3 // 10,7,4,1

// Iteration
for (i in 1..5) { }
for (i in 0 until list.size) { }             // index iteration
for ((index, value) in list.withIndex()) { } // preferred

// Range checks — in operator
val score = 85
if (score in 80..100) println("A grade")
if (score !in 0..50) println("Pass")

// String/Char ranges
val letters = 'a'..'z'
if ('e' in letters) println("vowel check")

// when with ranges
val grade = when (score) {
    in 90..100 -> "A"
    in 80..89 -> "B"
    in 70..79 -> "C"
    else -> "F"
}
- .. creates inclusive IntRange; until creates exclusive range
- downTo: creates descending progression
- step: changes increment — works with both ascending and descending
- in operator: checks if value is in range — compiles to efficient comparison, not iteration
- Ranges work for Int, Long, Char, Double, and any Comparable type
score in 0..100 compiles to score >= 0 && score <= 100 — not a loop. It's a simple range check. Mentioning this shows you understand that Kotlin's range checks are zero-cost abstractions.
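To back up that claim, a range check and its hand-written equivalent behave identically (a small sketch; function names are illustrative):

```kotlin
fun gradeA(score: Int) = score in 80..100

// What the compiler effectively emits — two comparisons, no loop:
fun gradeAManual(score: Int) = score >= 80 && score <= 100

fun main() {
    // Both agree on values below, inside, and above the range
    for (s in listOf(79, 80, 90, 100, 101)) {
        check(gradeA(s) == gradeAManual(s))
    }
    println("range check == manual comparison for all inputs")
}
```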
Context receivers (experimental since Kotlin 1.6.20, behind a compiler flag) allow functions to declare multiple receivers — solving the "multiple context" problem elegantly. Kotlin is evolving this idea into context parameters, so the syntax is still in flux, but the concept is a frequent 2024-25 discussion topic.
// Problem: a function needs multiple contexts
// Ugly way — pass everything as parameters
fun showMessage(context: Context, scope: CoroutineScope, msg: String) {
    scope.launch {
        Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
    }
}

// Context receivers — declare multiple implicit contexts
context(Context, CoroutineScope)
fun showMessage(msg: String) {
    launch { // CoroutineScope available
        Toast.makeText(this@Context, msg, Toast.LENGTH_SHORT).show()
    }
}

// Call site — contexts provided automatically
with(applicationContext) {    // Context
    viewModelScope.launch {   // CoroutineScope
        showMessage("Hello!") // both contexts satisfied
    }
}

// Enable in build.gradle.kts
// kotlinOptions { freeCompilerArgs += "-Xcontext-receivers" }

// Use case — transaction DSL
context(Database.Transaction)
fun saveUser(user: User) {
    insert(user)         // Transaction methods available
    updateIndex(user.id)
}
// Can ONLY be called inside a transaction — compile-time safety!
- Context receivers: multiple implicit receivers without nesting
- Solves the "I need Context, Scope, and Transaction all at once" problem
- Compile-time safety: function can only be called when all contexts are in scope
- Still experimental — enable with the -Xcontext-receivers compiler flag; being superseded by context parameters
- Alternative to passing many parameters or using ambient/thread-local state
Context receivers are Kotlin's answer to the "I need multiple scopes" problem. Knowing about them — especially that they provide compile-time enforcement of contexts (like Database.Transaction), and that they are evolving into context parameters — shows you follow the latest Kotlin evolution closely.
Kotlin's three main collection types each have distinct characteristics around ordering, uniqueness, and key-value access. Choosing the right one impacts correctness and performance.
// List — ordered, allows duplicates
val list = listOf("a", "b", "a") // [a, b, a] — keeps order, keeps dups
list[0]                          // "a" — index access
list.indexOf("a")                // 0 — finds first occurrence

// Set — NO duplicates
val set = setOf("a", "b", "a")   // {a, b} — duplicate removed
"a" in set // O(1) lookup — much faster than list.contains()
// set[0] ❌ no index access

// LinkedHashSet — insertion order preserved
val linked = linkedSetOf("c", "a", "b") // [c, a, b] — ordered, unique

// Map — key-value pairs, unique keys
val map = mapOf("one" to 1, "two" to 2)
map["one"]                   // 1
map.getOrDefault("three", 0) // 0

// Mutable versions
val mList = mutableListOf("a"); mList.add("b")
val mSet = mutableSetOf("a"); mSet.add("b")
val mMap = mutableMapOf("a" to 1); mMap["b"] = 2
mMap.getOrPut("four") { 4 } // insert if absent (MutableMap only)

// Decision guide:
// Need order + duplicates?   → List
// Need unique + fast lookup? → Set
// Need key → value mapping?  → Map
// Need sorted order?         → sortedSetOf(), sortedMapOf()
- List: ordered, duplicates allowed — best for sequences and ordered data
- Set: unordered (or insertion-ordered with LinkedHashSet), no duplicates — O(1) contains
- Map: key-value pairs, unique keys — O(1) lookup by key
- Prefer Set over List when checking membership frequently — much faster
- LinkedHashSet/LinkedHashMap: preserve insertion order while giving Set/Map benefits
list.contains() is O(n) — checks each element. set.contains() is O(1) — hash lookup. If you frequently check membership in a large collection, converting to a Set first dramatically improves performance.
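The tip above in practice: convert once to a Set, then run all membership checks against it (a sketch with made-up data):

```kotlin
fun main() {
    val activeIds: List<Int> = (1..100_000 step 2).toList() // odd ids only
    val events = listOf(2, 3, 50_001, 99_999)

    // O(n) per check — scans the whole list every time
    val slow = events.count { it in activeIds }

    // Convert once (O(n)), then O(1) hash lookups per check
    val idSet = activeIds.toSet()
    val fast = events.count { it in idSet }

    println(slow == fast) // same answer, far fewer comparisons → true
}
```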
Kotlin's object keyword has three distinct use cases — each solves a different problem. Interviewers test whether you know the difference and can choose the right one.
// 1. Object DECLARATION — singleton pattern
object DatabasePool {
    private val connections = mutableListOf<Connection>()
    fun getConnection(): Connection = connections.first()
}
// Thread-safe, lazy-initialized, one instance
DatabasePool.getConnection()

// 2. Object EXPRESSION — anonymous object (local, one-off)
val listener = object : View.OnClickListener {
    override fun onClick(v: View) { handleClick() }
}
// New instance each time — not a singleton

// Can capture variables from the outer scope (closure)
val count = 0
val obj = object : Runnable {
    override fun run() { println(count) } // captures count
}

// 3. Companion OBJECT — class-level members
class User(val name: String) {
    companion object Factory { // optional name
        const val MAX_AGE = 150
        fun create(name: String) = User(name)
        fun guest() = User("Guest")
    }
}
User.create("Rahul") // called on the class
User.MAX_AGE         // constant
User.Factory.guest() // via the companion name

// Key differences:
// object declaration → singleton, global
// object expression  → local, new instance each time, captures scope
// companion object   → tied to a class, one per class
- Object declaration: singleton — initialized once, thread-safe, accessed by name
- Object expression: anonymous object — new instance, can implement interfaces, captures scope
- Companion object: class-scoped — static-equivalent members, factories, constants
- Object expressions create a new instance every time — not cached like object declarations
- Companion object implements interfaces — useful for type class pattern
The Factory pattern with companion object is idiomatic Kotlin: MyFragment.newInstance() instead of new MyFragment(). Always use this for Fragment creation with arguments — it's the standard Android pattern since forever.
Idiomatic Kotlin means using language features as intended — not writing Java-style code in Kotlin. Interviewers often show code and ask "how would you improve this?"
// ❌ Java-style in Kotlin
if (user != null) { println(user.name) } else { println("unknown") }
// ✅ Idiomatic
println(user?.name ?: "unknown")

// ❌ Verbose initialization
val intent = Intent(this, HomeActivity::class.java)
intent.putExtra("id", userId)
intent.flags = Intent.FLAG_ACTIVITY_CLEAR_TOP
startActivity(intent)
// ✅ Idiomatic with apply
startActivity(Intent(this, HomeActivity::class.java).apply {
    putExtra("id", userId)
    flags = Intent.FLAG_ACTIVITY_CLEAR_TOP
})

// ❌ Manual null check
var name: String? = getName()
if (name != null && name.isNotEmpty()) { use(name) }
// ✅ let for a null-safe block
getName()?.takeIf { it.isNotEmpty() }?.let { use(it) }

// ❌ Loops for transformations
val result = mutableListOf<String>()
for (user in users) {
    if (user.active) result.add(user.name)
}
// ✅ Functional
val result = users.filter { it.active }.map { it.name }

// ❌ is + explicit cast
if (x is String) use(x as String)
// ✅ Smart cast
if (x is String) use(x) // no cast needed!
- Prefer ?: Elvis over if-null checks
- Use apply/also/let for initialization and null-safe blocks
- Replace loops with filter/map/reduce
- Trust smart casts — no explicit cast after is check
- Use data class copy() instead of manually creating modified copies
In live coding interviews, writing idiomatic Kotlin (filter/map instead of loops, Elvis instead of if-null, apply for initialization) immediately signals experience. Reviewers literally look for this — it's a differentiator between 2-year and 5-year developers.
Kotlin 2.0 shipped in 2024 with major improvements to compilation speed, new language features, and the K2 compiler. Staying current with these is expected in senior 2025-26 interviews.
// K2 Compiler — Kotlin 2.0's biggest change
// Up to 2x faster compilation
// Better IDE performance
// Improved type inference
// More accurate error messages

// Smart cast improvements (K2)
class Cat { fun purr() = println("Purr") }

fun petAnimal(animal: Any) {
    val isCat = animal is Cat
    if (isCat) {
        animal.purr() // ✅ K2 smart-casts via the boolean variable!
        // K1 didn't — the is-check had to sit directly in the condition
    }
}

// Labeled returns in lambdas — continue-like behaviour
val list = listOf(1, 2, 3, 4, 5)
list.forEach {
    if (it == 3) return@forEach // skip to the next element
    println(it)
}
// True non-local break/continue inside lambdas of inline functions
// arrived later as a preview feature (Kotlin 2.1+)
- K2 Compiler: up to 2x faster compilation, better type inference, smarter smart casts (e.g. type checks stored in boolean variables)
- Kotlin Multiplatform stable since Kotlin 1.9.20 (late 2023) — production-ready for Android + iOS
- Power Assert: better test failure messages — shows values in assertion expressions
- KSP2: next-gen annotation processing built on K2 compiler
- Smart cast improvements: K2 smart-casts in more cases — boolean variables holding type checks, captures in lambdas, and more
Kotlin 2.0 released May 2024. The K2 compiler is the most impactful change — it makes builds faster and the IDE snappier. Mentioning that you've migrated a project to K2 and noticed faster incremental builds immediately shows you're current.
KMP (Kotlin Multiplatform) lets you share business logic across Android, iOS, web, and server while keeping platform-specific UI native. It became stable in late 2023 (Kotlin 1.9.20) and is rapidly being adopted in 2025.
// KMP project structure
// shared/
//   commonMain/  ← shared Kotlin code
//   androidMain/ ← Android-specific
//   iosMain/     ← iOS-specific
// androidApp/    ← Android UI (Compose)
// iosApp/        ← iOS UI (SwiftUI)

// CAN SHARE:
// ✅ Data models, business logic, use cases
// ✅ Repository pattern
// ✅ Networking (Ktor)
// ✅ Database (SQLDelight)
// ✅ JSON parsing (kotlinx.serialization)
// ✅ Coroutines, Flow

// CANNOT SHARE (platform-specific):
// ❌ UI (Compose Android vs SwiftUI iOS)
// ❌ Camera, GPS, Bluetooth
// ❌ Push notifications

// expect/actual — platform implementations
// commonMain
expect fun currentTimeMillis(): Long
expect class PlatformContext

// androidMain
actual fun currentTimeMillis() = System.currentTimeMillis()
actual class PlatformContext(val context: android.content.Context)

// iosMain
actual fun currentTimeMillis() =
    (NSDate().timeIntervalSince1970 * 1000).toLong()
actual class PlatformContext // no context needed on iOS

// Shared repository — used by both Android and iOS
class UserRepository(
    private val api: UserApi,    // Ktor — multiplatform
    private val db: UserDatabase // SQLDelight — multiplatform
) {
    suspend fun getUser(id: String) = api.fetchUser(id)
}
- Share: business logic, repositories, models, networking, database — up to 70% of code
- Keep native: UI (Compose on Android, SwiftUI on iOS) — best UX per platform
- expect/actual: declare interface in common, implement per platform
- KMP stable since Kotlin 1.9.20 — companies like Netflix, Cash App, Touchlab use it
- Compose Multiplatform (by JetBrains): share UI too — still maturing for iOS
KMP's value proposition: "Write business logic once, deploy to Android and iOS with native UI." Teams that share most of their non-UI code report substantially faster cross-platform feature delivery. Framing the business value like this impresses senior interviewers.
Most frequently asked Compose questions in 2025-26 — recomposition, state, side effects, layouts, performance & more.
Jetpack Compose is Android's modern declarative UI toolkit. Instead of mutating views, you describe what the UI should look like for a given state — Compose figures out the updates.
// View system — imperative, mutate existing views
val textView = findViewById<TextView>(R.id.name)
textView.text = "Rahul"
textView.visibility = View.VISIBLE
// Developer manages WHEN and HOW to update the UI

// Compose — declarative, describe the UI for the current state
@Composable
fun UserCard(name: String, visible: Boolean) {
    if (visible) {
        Text(text = name) // Compose handles updates automatically
    }
}
// When name or visible changes, Compose re-runs this function

// Key differences:
// View system: XML + Kotlin/Java, View tree, manual updates
// Compose: 100% Kotlin, no XML, reactive, automatic updates

// Compose advantages:
// Less code — no adapters, ViewHolders, XML layouts
// Preview in Android Studio
// Built-in animations, Material3
// Interops with the View system — can use both together
- Declarative: describe UI for a state, Compose updates the screen automatically
- No XML: UI written entirely in Kotlin — type-safe, refactorable
- Recomposition: when state changes, Compose re-executes only affected composables
- Interop: can embed Compose in View system (ComposeView) and vice versa (AndroidView)
- Google's official recommendation for new Android UI development since 2021
The key mental shift: in View system you say "update this text view to X." In Compose you say "show a text view with X" and Compose decides what to update. This declarative model is what makes Compose powerful.
Recomposition is Compose re-executing a composable function when its inputs change. Understanding and minimising unnecessary recompositions is critical for performance.
// Recomposition is triggered when state/params change
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Count: $count")
    }
    // count changes → Button AND Text recompose
}

// ❌ Bad — whole parent recomposes when count changes
@Composable
fun ParentScreen() {
    var count by remember { mutableStateOf(0) }
    ExpensiveHeader() // recomposes unnecessarily!
    Text("Count: $count")
    ExpensiveFooter() // recomposes unnecessarily!
}

// ✅ Good — move state down, scope recomposition
@Composable
fun ParentScreen() {
    ExpensiveHeader() // never recomposes
    CounterSection()  // only this recomposes
    ExpensiveFooter() // never recomposes
}

// Stable types — Compose skips recomposition if inputs are unchanged
// Primitives, String, data classes with val properties = stable
@Stable
data class UserUiState(val name: String, val age: Int)

// @Immutable — tells Compose the type never changes, safe to skip
@Immutable
data class Config(val baseUrl: String)

// key — stable identity for items in lists
LazyColumn {
    items(users, key = { it.id }) { UserRow(it) }
}
- Recomposition: Compose re-runs composable when observed state changes
- Scope minimisation: move state as low as possible — only affected composables recompose
- Stable types: Compose skips recomposition if all inputs are equal and stable
- @Stable / @Immutable: hint to Compose that a type is stable — enables skipping
- key(): stable identity in lists — prevents unnecessary recomposition on list changes
The most common recomposition mistake: reading state high in the tree. Rule: "move state down to the composable that needs it." If only the counter needs count, don't put count in the screen-level composable.
Both persist state across recompositions, but rememberSaveable also survives configuration changes (rotation) and process death by saving to a Bundle.
// remember — survives recomposition only
var count by remember { mutableStateOf(0) }
// count = 0 after screen rotation ❌

// rememberSaveable — survives recomposition + rotation + process death
var count by rememberSaveable { mutableStateOf(0) }
// count preserved after rotation ✅

// rememberSaveable with @Parcelize (for non-primitive types)
@Parcelize
data class SearchState(val query: String, val filters: List<String>) : Parcelable

var state by rememberSaveable { mutableStateOf(SearchState("", emptyList())) }

// Custom Saver for complex types
val colorSaver = Saver<Color, Long>(
    save = { it.value.toLong() },
    restore = { Color(it.toULong()) }
)
var color by rememberSaveable(stateSaver = colorSaver) { mutableStateOf(Color.Red) }

// When to use which:
// remember: UI-only state (animation state, dropdown open/closed)
// rememberSaveable: user input, scroll position, selected tab
// ViewModel: business data, network results — best for most cases
- remember: survives recomposition — lost on rotation or process death
- rememberSaveable: survives recomposition, rotation, AND process death via Bundle
- Works automatically for primitives, String, and @Parcelize types
- Custom Saver: for non-parcelable types — define save/restore logic
- Prefer ViewModel for business state — rememberSaveable for transient UI state only
Good rule: "If the user would be annoyed to lose it on rotation, use rememberSaveable or ViewModel." Scroll position, typed text, selected tab → rememberSaveable. API results, user data → ViewModel.
Side effects are operations that escape the scope of a composable — network calls, analytics, subscriptions. Compose provides specific APIs for each use case to keep side effects safe and lifecycle-aware.
// LaunchedEffect — coroutine scoped to the composable lifecycle
// Runs when the key changes, cancelled when the composable leaves
@Composable
fun UserScreen(userId: String) {
    LaunchedEffect(userId) {       // key = userId
        viewModel.loadUser(userId) // re-runs if userId changes
    }                              // cancelled if the screen leaves
}

// SideEffect — runs after every successful recomposition
// Use for: syncing Compose state to non-Compose code
@Composable
fun AnalyticsScreen(name: String) {
    SideEffect {
        analytics.setScreen(name) // runs after each recomposition
    }
}

// DisposableEffect — setup + teardown (like onStart/onStop)
// Use for: event listeners, sensors, observers
@Composable
fun LifecycleObserver(onStop: () -> Unit) {
    val owner = LocalLifecycleOwner.current
    DisposableEffect(owner) {
        val observer = LifecycleEventObserver { _, event ->
            if (event == Lifecycle.Event.ON_STOP) onStop()
        }
        owner.lifecycle.addObserver(observer)
        onDispose { owner.lifecycle.removeObserver(observer) } // cleanup!
    }
}

// produceState — convert non-Compose state to Compose State
@Composable
fun NetworkImage(url: String): State<Bitmap?> =
    produceState<Bitmap?>(null, url) {
        value = withContext(Dispatchers.IO) { loadBitmap(url) }
    }
- LaunchedEffect: coroutine tied to composable — relaunches when key changes, cancelled on leave
- SideEffect: runs after every recomposition — sync Compose to external systems
- DisposableEffect: setup/teardown pair — always provide onDispose for cleanup
- produceState: bridge external state (Flow, LiveData, callback) into Compose State
- rememberCoroutineScope: get scope for user-triggered coroutines (button clicks)
Missing onDispose in DisposableEffect is a memory leak — the observer never gets removed. This is the most common Compose bug. Always ask: "What needs to be cleaned up when this composable leaves?"
State hoisting is the pattern of moving state up to the caller, making composables stateless and reusable. It's Compose's answer to the separation of concerns problem.
// ❌ Stateful — state owned inside, not reusable/testable
@Composable
fun BadTextField() {
    var text by remember { mutableStateOf("") }
    TextField(value = text, onValueChange = { text = it })
    // Caller can't read or control the text value!
}

// ✅ Stateless (hoisted) — caller controls the state
@Composable
fun GoodTextField(
    value: String,                  // state flows DOWN
    onValueChange: (String) -> Unit // events flow UP
) {
    TextField(value = value, onValueChange = onValueChange)
}

// Caller owns the state
@Composable
fun LoginScreen() {
    var email by remember { mutableStateOf("") }
    var password by remember { mutableStateOf("") }
    GoodTextField(value = email, onValueChange = { email = it })
    GoodTextField(value = password, onValueChange = { password = it })
    Button(onClick = { login(email, password) }) { Text("Login") }
}

// Unidirectional Data Flow (UDF):
// State flows DOWN (from parent to child)
// Events flow UP (from child to parent via lambdas)
// This is the foundation of Compose architecture
- State hoisting: move state to caller — composable becomes stateless and reusable
- State flows down (parameters), events flow up (lambda callbacks)
- Stateless composables: easier to test, preview, and reuse
- Unidirectional Data Flow (UDF): the architectural principle behind state hoisting
- Hoist to the lowest common ancestor that needs the state
State hoisting is the single most important Compose concept. Every composable should ask: "Who needs this state?" Hoist it to that level. The golden rule: state down, events up.
Compose renders UI in three phases each frame. Understanding this model helps optimise performance — some operations can skip earlier phases entirely.
// Phase 1: COMPOSITION
// Compose runs @Composable functions
// Builds the UI tree (slot table)
// Detects what changed vs the last composition

// Phase 2: LAYOUT
// Measures and places each node
// Single-pass measurement (vs the View system's multi-pass)

// Phase 3: DRAWING
// Renders to Canvas

// Skipping phases for performance:
// Modifier.offset with a lambda — skips Composition & Layout!
@Composable
fun AnimatedBox(scrollState: ScrollState) {
    // ❌ Reads scroll during Composition — triggers a full recompose
    Box(Modifier.offset(y = scrollState.value.dp))

    // ✅ Reads scroll during Layout only — skips Composition
    Box(Modifier.offset { IntOffset(0, scrollState.value) })
}

// graphicsLayer — changes applied at the Drawing phase only
@Composable
fun FadeBox(alpha: Float) {
    // ❌ alpha param → recomposition on every change
    Box(Modifier.alpha(alpha))

    // ✅ graphicsLayer lambda → Drawing phase only, no recomposition
    Box(Modifier.graphicsLayer { this.alpha = alpha })
}

// derivedStateOf — compute only when inputs change
val showButton by remember { derivedStateOf { scrollState.value > 100 } }
// showButton changes only when the threshold is crossed, not on every scroll
- Composition → Layout → Drawing: three phases per frame
- Skipping phases: lambda-based Modifiers defer reads to later phases — huge perf win
- graphicsLayer: apply alpha, scale, rotation at Draw phase — no recomposition
- derivedStateOf: memoize derived state — only recomposes when derived value actually changes
- Single-pass layout: Compose measures each node once — no expensive multi-pass like RelativeLayout
The lambda Modifier trick is the most impactful Compose optimisation: Modifier.offset { } vs Modifier.offset(). The lambda version defers the read to Layout phase, skipping recomposition entirely on scroll. This alone can eliminate jank in scroll-heavy UIs.
Modifiers decorate composables with layout behaviour, drawing, and interaction. Order matters — each modifier wraps the next, like decorators applied inside-out.
// Modifier order matters — applied outside-in

// ❌ padding first — click area excludes the padding
Box(
    Modifier
        .padding(16.dp)  // padding applied OUTSIDE the click area
        .clickable { }   // smaller clickable area
)

// ✅ clickable first — click area includes the padding
Box(
    Modifier
        .clickable { }   // click area covers the padding too
        .padding(16.dp)
)

// Common modifier operations
Modifier
    .fillMaxWidth()                  // fill parent width
    .fillMaxSize()                   // fill parent width and height
    .size(48.dp)                     // fixed size
    .wrapContentSize()               // wrap content
    .padding(16.dp)                  // inner spacing
    .background(Color.Blue)          // background color
    .clip(RoundedCornerShape(8.dp))  // clip shape
    .border(1.dp, Color.Gray)        // border
    .clickable { }                   // handle clicks
    .semantics { }                   // accessibility
    .testTag("myButton")             // UI testing

// Custom modifier — reusable combination
fun Modifier.cardStyle() = this
    .fillMaxWidth()
    .clip(RoundedCornerShape(12.dp))
    .background(MaterialTheme.colorScheme.surface)
    .padding(16.dp)
- Modifiers are ordered — earlier modifiers wrap later ones from outside-in
- padding before clickable: click area is smaller (excludes padding)
- clickable before padding: click area includes padding — usually correct
- Custom modifiers: compose multiple modifiers into reusable extension functions
- Modifier.then(): programmatically chain modifiers based on conditions
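The Modifier.then() bullet can be sketched as a conditional extension. This is a minimal sketch; the selectedBorder name and the selected flag are illustrative, not from the source.

```kotlin
// Sketch: apply a border only when selected; Modifier (the empty modifier)
// acts as the identity element when the condition is false
fun Modifier.selectedBorder(selected: Boolean): Modifier =
    this.then(
        if (selected) Modifier.border(2.dp, Color.Green, RoundedCornerShape(8.dp))
        else Modifier // no-op when not selected
    )

// Usage — isSelected/onSelect are hypothetical call-site names
Box(
    Modifier
        .size(48.dp)
        .selectedBorder(selected = isSelected)
        .clickable { onSelect() }
)
```

Wrapping the condition behind an extension keeps call sites readable and avoids `if` expressions scattered through modifier chains.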
Modifier order is a classic interview question. The golden rule: think of modifiers as wrapping layers. padding().clickable() means the click zone is INSIDE the padding. clickable().padding() means the click zone INCLUDES the padding. Almost always you want clickable().padding().
Compose provides a set of layout primitives that replace LinearLayout, FrameLayout, and RecyclerView. Knowing when to use each is fundamental.
// Column — vertical arrangement (like vertical LinearLayout)
Column(
    modifier = Modifier.fillMaxWidth(),
    verticalArrangement = Arrangement.spacedBy(8.dp),
    horizontalAlignment = Alignment.CenterHorizontally
) {
    Text("Title")
    Text("Subtitle")
    Button(onClick = {}) { Text("OK") }
}

// Row — horizontal arrangement (like horizontal LinearLayout)
Row(
    horizontalArrangement = Arrangement.SpaceBetween,
    verticalAlignment = Alignment.CenterVertically
) {
    Icon(Icons.Default.Home, contentDescription = null)
    Text("Home")
    Spacer(Modifier.weight(1f)) // push items apart
    Badge { Text("3") }
}

// Box — stack overlapping children (like FrameLayout)
Box(Modifier.size(200.dp)) {
    Image(painter, contentDescription = null, modifier = Modifier.fillMaxSize())
    Text("Overlay", modifier = Modifier.align(Alignment.BottomCenter))
}

// LazyColumn — efficient scrollable vertical list (like RecyclerView)
LazyColumn(contentPadding = PaddingValues(16.dp)) {
    item { Header() }                                          // single item
    items(users, key = { it.id }) { UserRow(it) }              // list
    itemsIndexed(entries) { index, entry -> EntryRow(index, entry) }
    item { Footer() }
}

// LazyRow — horizontal scrollable list
// LazyVerticalGrid — grid layout
// LazyVerticalStaggeredGrid — Pinterest-style staggered grid
- Column/Row: non-scrollable layouts — use for fixed content on screen
- Box: stack children — use for overlapping UI (badges, overlays, FAB positioning)
- LazyColumn/Row: only composes visible items — use for lists of any size
- Arrangement: controls spacing between children (SpaceBetween, spacedBy, Center)
- Modifier.weight(): distribute remaining space proportionally — like layout_weight in XML
Never use Column with forEach for large lists — it composes ALL items upfront. Always use LazyColumn which only composes visible items. This is the single most common Compose performance mistake in production apps.
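The Modifier.weight() bullet above can be illustrated with a short Row sketch. The proportions and composable names are illustrative:

```kotlin
// Sketch: weight distributes the REMAINING space proportionally,
// like layout_weight in XML; unweighted children measure first
Row(Modifier.fillMaxWidth()) {
    Icon(Icons.Default.Person, contentDescription = null) // wraps content
    Text("Label", Modifier.weight(1f))  // gets 1/3 of the remaining width
    Text("Value", Modifier.weight(2f))  // gets 2/3 of the remaining width
}
```

Unweighted children are measured first; the leftover width is then split 1:2 between the weighted Texts.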
Collecting ViewModel state safely in Compose requires lifecycle-aware collection to stop updates when the app is backgrounded. The recommended approach has evolved significantly.
// ViewModel with StateFlow
class UserViewModel : ViewModel() {
    private val _state = MutableStateFlow<UiState>(UiState.Loading)
    val state: StateFlow<UiState> = _state.asStateFlow()
}

// ❌ collectAsState() — doesn't stop when the app backgrounds
val state by viewModel.state.collectAsState()

// ✅ collectAsStateWithLifecycle() — lifecycle-aware (RECOMMENDED)
// implementation("androidx.lifecycle:lifecycle-runtime-compose")
val state by viewModel.state.collectAsStateWithLifecycle()

// Full example
@Composable
fun UserScreen(viewModel: UserViewModel = hiltViewModel()) {
    val state by viewModel.state.collectAsStateWithLifecycle()
    when (val s = state) {
        is UiState.Loading -> CircularProgressIndicator()
        is UiState.Success -> UserContent(s.user)
        is UiState.Error -> ErrorMessage(s.message)
    }
}

// Sealed UI state — best practice
sealed class UiState {
    object Loading : UiState()
    data class Success(val user: User) : UiState()
    data class Error(val message: String) : UiState()
}
- collectAsState(): simple but doesn't stop collection when app is backgrounded
- collectAsStateWithLifecycle(): collects only while the lifecycle is at least STARTED — stops on onStop, resumes on onStart
- hiltViewModel(): inject ViewModel in Compose — Hilt-aware, scoped correctly
- Sealed UiState: clean pattern for Loading/Success/Error — exhaustive when expression
- Single state object: one StateFlow for all screen state — avoids multiple collections
Always use collectAsStateWithLifecycle() in production — collectAsState() wastes resources collecting in background. This was Google's official guidance update in 2022 and is still the right answer in 2025.
Navigation in Compose uses Navigation Compose — a declarative NavHost with routes. Navigation 2.8+ introduced type-safe routes using @Serializable, eliminating string-based route errors.
// Navigation Compose 2.8+ — type-safe routes
// implementation("androidx.navigation:navigation-compose:2.8+")

// Define routes as @Serializable objects/classes
@Serializable object HomeRoute
@Serializable object ProfileRoute
@Serializable data class DetailRoute(val userId: String)

// NavHost — declares all destinations
@Composable
fun AppNavHost() {
    val navController = rememberNavController()
    NavHost(navController, startDestination = HomeRoute) {
        composable<HomeRoute> {
            HomeScreen(onUserClick = { id ->
                navController.navigate(DetailRoute(id))
            })
        }
        composable<DetailRoute> { backStackEntry ->
            val route: DetailRoute = backStackEntry.toRoute()
            DetailScreen(userId = route.userId)
        }
        composable<ProfileRoute> { ProfileScreen() }
    }
}

// Back stack management
navController.navigate(HomeRoute) {
    popUpTo(HomeRoute) { inclusive = true } // clear back stack
    launchSingleTop = true                  // avoid duplicates
}
navController.navigateUp() // go back
- Type-safe navigation (2.8+): @Serializable routes — compile-time safety, no typos
- toRoute(): extract route data from BackStackEntry — replaces manual argument parsing
- popUpTo: control back stack when navigating — prevent duplicates
- launchSingleTop: avoid duplicate destinations on top of stack
- Nested navigation graphs: organise routes by feature for scalability
Type-safe navigation with @Serializable is the 2025 answer. String-based routes like "detail/{userId}" are error-prone and have been superseded by this approach. Knowing the newer API immediately shows you're current.
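Nested graphs from the list above can be sketched with the same type-safe builders. CheckoutGraph, CartRoute, and PaymentRoute are hypothetical route names for illustration:

```kotlin
// Sketch: a nested graph groups a multi-step flow under one parent route
@Serializable object CheckoutGraph
@Serializable object CartRoute
@Serializable object PaymentRoute

NavHost(navController, startDestination = HomeRoute) {
    composable<HomeRoute> { HomeScreen() }

    // navigation<T> declares a nested graph with its own start destination
    navigation<CheckoutGraph>(startDestination = CartRoute) {
        composable<CartRoute> { CartScreen() }
        composable<PaymentRoute> { PaymentScreen() }
    }
}
// navController.navigate(CheckoutGraph) enters the flow at CartRoute
```

Grouping per-feature routes this way also gives you a parent NavBackStackEntry to scope shared ViewModels to.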
Stateful composables own state internally. Stateless composables receive state as parameters. Good architecture maximises stateless composables — they're easier to test, preview, and reuse.
// Stateful — owns its own state
@Composable
fun StatefulCounter() {
    var count by remember { mutableStateOf(0) }
    StatelessCounter(count = count, onIncrement = { count++ })
}

// Stateless — receives state from the caller
@Composable
fun StatelessCounter(
    count: Int,
    onIncrement: () -> Unit
) {
    Column {
        Text("Count: $count")
        Button(onClick = onIncrement) { Text("+") }
    }
}

// Benefits of stateless:

// ✅ Easily previewable with any count value
@Preview
@Composable
fun CounterPreview() {
    StatelessCounter(count = 42, onIncrement = {}) // inject any state
}

// ✅ Easily testable
@Test
fun counterDisplaysCorrectly() {
    composeTestRule.setContent { StatelessCounter(count = 5, onIncrement = {}) }
    composeTestRule.onNodeWithText("Count: 5").assertIsDisplayed()
}

// Real pattern: ViewModel provides state → Screen composable → stateless content
@Composable
fun HomeScreen(viewModel: HomeViewModel = hiltViewModel()) {
    val state by viewModel.state.collectAsStateWithLifecycle()
    HomeContent(state = state, onAction = viewModel::handleAction)
}
// HomeContent is stateless — previewed and tested without a ViewModel
- Stateful: owns state — necessary at the top of the hierarchy
- Stateless: receives state as params — reusable, testable, previewable
- Pattern: one stateful screen composable calls many stateless content composables
- ViewModel → Screen (stateful wrapper) → Content (stateless) is the recommended architecture
- Stateless composables can have multiple Previews with different states
The split pattern — HomeScreen(viewModel) wraps HomeContent(state, onAction) — is the official Architecture pattern for Compose. HomeContent is stateless so it's fully testable without mocking a ViewModel.
Custom layouts let you control exactly how children are measured and placed. SubcomposeLayout allows measuring children based on other children's sizes — needed for dynamic layouts.
// Custom Layout — measure and place children manually
@Composable
fun MyVerticalLayout(
    modifier: Modifier = Modifier,
    content: @Composable () -> Unit
) {
    Layout(content = content, modifier = modifier) { measurables, constraints ->
        // Step 1: Measure all children
        val placeables = measurables.map { it.measure(constraints) }

        // Step 2: Calculate the layout size
        val totalHeight = placeables.sumOf { it.height }
        val maxWidth = placeables.maxOfOrNull { it.width } ?: 0 // safe for empty content

        // Step 3: Place children
        layout(maxWidth, totalHeight) {
            var y = 0
            placeables.forEach { placeable ->
                placeable.placeRelative(x = 0, y = y)
                y += placeable.height
            }
        }
    }
}

// SubcomposeLayout — compose content based on measured sizes
// Example: show content only if it fits, else show a placeholder
@Composable
fun AdaptiveContent(
    content: @Composable () -> Unit,
    fallback: @Composable () -> Unit
) {
    SubcomposeLayout { constraints ->
        val main = subcompose("main", content).first().measure(constraints)
        if (main.height <= constraints.maxHeight) {
            layout(main.width, main.height) { main.placeRelative(0, 0) }
        } else {
            val fb = subcompose("fallback", fallback).first().measure(constraints)
            layout(fb.width, fb.height) { fb.placeRelative(0, 0) }
        }
    }
}
- Layout: measure children → calculate size → place children — full control
- measurables.map { it.measure(constraints) } — always measure before placing
- layout(w, h) { placeRelative() } — set size and position children
- SubcomposeLayout: compose content during layout — for size-dependent composition
- Used by LazyColumn, Scaffold, ConstraintLayout internally
SubcomposeLayout is what makes Scaffold work — it measures the FAB first, then composes the content with appropriate padding. Knowing this internal detail shows deep Compose understanding.
Compose has a rich animation API ranging from simple value animations to complex choreographed sequences. Choose the right API for the complexity of the animation.
// animateFloatAsState — simple value animation
val alpha by animateFloatAsState(
    targetValue = if (visible) 1f else 0f,
    animationSpec = tween(durationMillis = 300),
    label = "alpha"
)
Box(Modifier.graphicsLayer { this.alpha = alpha })

// AnimatedVisibility — show/hide with animation
AnimatedVisibility(
    visible = isVisible,
    enter = fadeIn() + slideInVertically(),
    exit = fadeOut() + slideOutVertically()
) {
    Card { Text("I appear and disappear") }
}

// Crossfade — animate between composables
Crossfade(targetState = currentScreen) { screen ->
    when (screen) {
        Screen.Home -> HomeContent()
        Screen.Profile -> ProfileContent()
    }
}

// animateContentSize — animate size changes
var expanded by remember { mutableStateOf(false) }
Card(Modifier.animateContentSize()) {
    Text(if (expanded) longText else shortText)
    Button(onClick = { expanded = !expanded }) { Text("Toggle") }
}

// Transition — multiple values animated together
val transition = updateTransition(selected, label = "selected")
val borderColor by transition.animateColor(label = "border") {
    if (it) Color.Green else Color.Gray
}
val elevation by transition.animateDp(label = "elevation") {
    if (it) 8.dp else 2.dp
}
- animateXAsState: animate a single value — the simplest API, great for most cases
- AnimatedVisibility: show/hide with enter/exit transitions built-in
- animateContentSize: automatically animate layout size changes
- updateTransition: animate multiple values together in sync
- rememberInfiniteTransition: looping animations (loading spinners, pulsing effects)
Always add label parameter to animations — it shows up in the Animation Inspector in Android Studio, making debugging much easier. This is a small detail that shows production experience.
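The rememberInfiniteTransition bullet can be sketched as a pulsing loop. Durations and alpha bounds are illustrative:

```kotlin
// Sketch: a looping pulse effect — runs forever while in composition
val infiniteTransition = rememberInfiniteTransition(label = "pulse")
val pulseAlpha by infiniteTransition.animateFloat(
    initialValue = 0.4f,
    targetValue = 1f,
    animationSpec = infiniteRepeatable(
        animation = tween(durationMillis = 800),
        repeatMode = RepeatMode.Reverse // bounce back instead of restarting
    ),
    label = "pulseAlpha"
)
// graphicsLayer lambda → applied at the Drawing phase, no recomposition per frame
Box(Modifier.graphicsLayer { alpha = pulseAlpha })
```

Combining the infinite transition with the graphicsLayer lambda keeps the 60fps alpha updates entirely out of Composition.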
Compose has its own UI testing framework that works without a real device or emulator. Tests interact with the UI through semantics — accessibility labels that describe what each element does.
// Setup
// testImplementation("androidx.compose.ui:ui-test-junit4")
// debugImplementation("androidx.compose.ui:ui-test-manifest")

class CounterTest {
    @get:Rule
    val composeTestRule = createComposeRule()

    @Test
    fun counterIncrementsOnClick() {
        composeTestRule.setContent { CounterScreen() }

        // Find by text
        composeTestRule.onNodeWithText("Count: 0").assertIsDisplayed()

        // Perform action
        composeTestRule.onNodeWithText("+").performClick()

        // Assert result
        composeTestRule.onNodeWithText("Count: 1").assertIsDisplayed()
    }
}

// testTag — reliable node identification
Button(onClick = {}, modifier = Modifier.testTag("submit_button")) { Text("Submit") }
composeTestRule.onNodeWithTag("submit_button").performClick()

// semantics — custom accessibility labels
Icon(
    Icons.Default.Favorite,
    contentDescription = null, // set via semantics below instead
    modifier = Modifier.semantics { contentDescription = "Like button" }
)
composeTestRule.onNodeWithContentDescription("Like button").assertIsDisplayed()

// Common assertions
// .assertIsDisplayed()
// .assertIsEnabled()
// .assertIsSelected()
// .assertTextEquals("hello")
// .assertContentDescriptionContains("...")
- createComposeRule(): sets up Compose test environment — no emulator needed for unit tests
- setContent: render any composable in the test
- Finders: onNodeWithText, onNodeWithTag, onNodeWithContentDescription
- Actions: performClick, performTextInput, performScrollTo
- Assertions: assertIsDisplayed, assertIsEnabled, assertTextEquals
Prefer testTag over onNodeWithText for buttons/icons — text might change with localisation, but testTag is stable. Use semantics contentDescription for icon-only elements to make them both accessible and testable.
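performTextInput and performScrollTo from the actions list can be sketched like this; LoginScreen, the tags, and the "Welcome" text are hypothetical:

```kotlin
// Sketch: text input and scrolling in a Compose UI test
@Test
fun loginFormAcceptsInput() {
    composeTestRule.setContent { LoginScreen() }

    // Type into a TextField found by its stable testTag
    composeTestRule.onNodeWithTag("email_field")
        .performTextInput("dev@example.com")

    // Scroll the node into view inside its scrollable parent, then click
    composeTestRule.onNodeWithTag("submit_button")
        .performScrollTo()
        .performClick()

    composeTestRule.onNodeWithText("Welcome").assertIsDisplayed()
}
```

performScrollTo only works when the node sits inside a scrollable container; for lazy lists you may need onNodeWithTag("list").performScrollToNode(...) instead.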
CompositionLocal provides implicit data passing down the composition tree without explicit parameter passing — like dependency injection within the UI tree.
// Built-in CompositionLocals you use daily
LocalContext.current        // Android Context
LocalLifecycleOwner.current // LifecycleOwner
LocalDensity.current        // Density for dp↔px conversion
LocalConfiguration.current  // screen size, orientation
MaterialTheme.colorScheme   // Material3 colors
MaterialTheme.typography    // Text styles

// Create a custom CompositionLocal
data class AppConfig(val isDarkMode: Boolean, val locale: String)

val LocalAppConfig = compositionLocalOf { AppConfig(false, "en") }

// Provide a value — wraps the subtree
@Composable
fun App() {
    CompositionLocalProvider(LocalAppConfig provides AppConfig(true, "hi")) {
        HomeScreen() // and all children have access to LocalAppConfig
    }
}

// Consume anywhere in the subtree — no parameter passing!
@Composable
fun DeepNestedChild() {
    val config = LocalAppConfig.current
    Text(if (config.isDarkMode) "Dark" else "Light")
}

// compositionLocalOf vs staticCompositionLocalOf
// compositionLocalOf: recomposes ONLY consumers when the value changes
// staticCompositionLocalOf: recomposes the ENTIRE subtree when the value changes
// Use staticCompositionLocalOf for values that rarely change (theme, config)
val LocalNavController = staticCompositionLocalOf<NavController> {
    error("No NavController provided")
}
- CompositionLocal: implicit data passed down the UI tree — no parameter drilling
- compositionLocalOf: only recomposes consumers on change — use for frequently changing values
- staticCompositionLocalOf: recomposes whole subtree — use for rarely changing values
- Common use cases: theme, navigation controller, analytics, feature flags
- Avoid overuse — explicit parameters are more readable; use CompositionLocal for cross-cutting concerns
CompositionLocal is how MaterialTheme works — colors, typography, shapes are provided at the top and consumed anywhere without passing through every composable. Don't use it for business data — that belongs in ViewModel.
derivedStateOf creates a State whose value is derived from other state objects. It prevents excessive recomposition when the source state changes often but the derived value changes rarely.
// Problem: showButton recomposes on EVERY scroll position change
@Composable
fun BadScrollScreen() {
    val listState = rememberLazyListState()

    // ❌ reads firstVisibleItemIndex on EVERY frame during scroll
    val showButton = listState.firstVisibleItemIndex > 0
    if (showButton) {
        FloatingActionButton(onClick = { }) { }
    }
}

// Solution: derivedStateOf — only recomposes when the DERIVED value changes
@Composable
fun GoodScrollScreen() {
    val listState = rememberLazyListState()

    // ✅ showButton only changes when crossing the 0→1 threshold
    val showButton by remember {
        derivedStateOf { listState.firstVisibleItemIndex > 0 }
    }
    AnimatedVisibility(visible = showButton) {
        FloatingActionButton(onClick = { }) {
            Icon(Icons.Default.KeyboardArrowUp, null)
        }
    }
}

// Another example — enable submit only when the form is valid
val isFormValid by remember {
    derivedStateOf {
        name.isNotBlank() && email.contains("@") && password.length >= 8
    }
}
Button(enabled = isFormValid, onClick = { submit() }) { Text("Submit") }
// Button only recomposes when isFormValid changes — not on every keystroke
- derivedStateOf: memoises a computation — only recalculates when inputs change
- Without it: recomposition happens on every source state change
- With it: recomposition only when derived value actually changes (e.g. false→true)
- Always wrap in remember — otherwise derivedStateOf is recreated on every recomposition
- Perfect for: scroll threshold, form validation, filter counts, sorted lists
The scroll FAB example is the textbook derivedStateOf use case. Scroll position changes 60 times/second. Without derivedStateOf the FAB recomposes 60 times/second even when it's not changing. With it — zero unnecessary recompositions.
Compose and the View system can coexist — you can embed Compose inside Views and Views inside Compose. This is essential for incremental migration and third-party library integration.
// 1. Compose inside the View system (ComposeView)

// In a Fragment/Activity:
val composeView = ComposeView(context).apply {
    setViewCompositionStrategy(
        ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed
    )
    setContent {
        MaterialTheme { MyComposableScreen() }
    }
}

// In an XML layout:
// <androidx.compose.ui.platform.ComposeView
//     android:id="@+id/compose_view" />
binding.composeView.setContent { MyComposable() }

// 2. Views inside Compose (AndroidView)
@Composable
fun LegacyMapView(onMapReady: (GoogleMap) -> Unit) {
    AndroidView(
        factory = { ctx ->
            MapView(ctx).apply {
                onCreate(null)
                onResume()
            }
        },
        update = { mapView -> mapView.getMapAsync(onMapReady) }
    )
}

// 3. AndroidViewBinding — use ViewBinding in Compose
@Composable
fun LegacyChart() {
    AndroidViewBinding(ChartLayoutBinding::inflate) {
        chart.setData(chartData)
    }
}

// When to use:
// ✅ Incremental migration (View → Compose, screen by screen)
// ✅ Third-party libraries without a Compose equivalent (Maps, Charts)
// ✅ Complex custom Views that are hard to rewrite in Compose
- ComposeView: embed Compose in existing Fragment/Activity — start migration per screen
- AndroidView: embed View in Compose — for third-party libraries or complex custom Views
- AndroidViewBinding: use ViewBinding inside Compose — safer than direct View access
- Always set setViewCompositionStrategy on ComposeView in Fragments
- Incremental migration strategy: new screens in Compose, migrate old ones over time
setViewCompositionStrategy(DisposeOnViewTreeLifecycleDestroyed) is critical for ComposeView in Fragments. Without it, the composition survives the view being destroyed, causing memory leaks. This is the most common ComposeView migration bug.
Material Design 3 (Material You) is Google's latest design system — it includes dynamic colour, updated components, and improved typography. All new Compose apps should use Material3.
// Material3 theme setup
@Composable
fun AppTheme(
    darkTheme: Boolean = isSystemInDarkTheme(),
    dynamicColor: Boolean = true, // Material You (Android 12+)
    content: @Composable () -> Unit
) {
    val colorScheme = when {
        dynamicColor && Build.VERSION.SDK_INT >= 31 -> {
            if (darkTheme) dynamicDarkColorScheme(LocalContext.current)
            else dynamicLightColorScheme(LocalContext.current)
        }
        darkTheme -> DarkColorScheme
        else -> LightColorScheme
    }
    MaterialTheme(
        colorScheme = colorScheme,
        typography = AppTypography,
        content = content
    )
}

// Access theme values anywhere
Text(
    text = "Hello",
    color = MaterialTheme.colorScheme.primary,
    style = MaterialTheme.typography.headlineMedium
)
Surface(color = MaterialTheme.colorScheme.surface) { }

// Custom color scheme
private val LightColorScheme = lightColorScheme(
    primary = Color(0xFF6650A4),
    secondary = Color(0xFF625B71),
    background = Color(0xFFFFFBFE)
)

// Material3 components
// Button, OutlinedButton, TextButton, FilledTonalButton
// Card, ElevatedCard, OutlinedCard
// TextField, OutlinedTextField
// Scaffold with TopAppBar, BottomAppBar, FAB, Snackbar
// NavigationBar (bottom nav), NavigationRail (tablet side nav)
- Material3 (M3) replaces Material2 — new components, dynamic color, updated tokens
- Dynamic color (Android 12+): colors derived from wallpaper — Material You
- colorScheme: primary, secondary, tertiary, surface, background, error + variants
- Typography: displayLarge through labelSmall — semantic type scale
- Always wrap app in MaterialTheme — provides tokens to all child composables
Dynamic color is Android 12+ only — always provide a fallback color scheme for older devices. The when branch guarded by dynamicColor && Build.VERSION.SDK_INT >= 31 is the standard approach used in every new Android project template.
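The AppTypography referenced in the theme setup can be sketched as a partial Material3 type scale. The font choices and sizes are illustrative:

```kotlin
// Sketch: override only the styles you need — the rest fall back
// to the Material3 defaults
val AppTypography = Typography(
    headlineMedium = TextStyle(
        fontFamily = FontFamily.SansSerif,
        fontWeight = FontWeight.SemiBold,
        fontSize = 28.sp
    ),
    bodyLarge = TextStyle(
        fontSize = 16.sp,
        lineHeight = 24.sp
    )
)
```

Because Typography is a plain constructor with named defaults, a partial override like this keeps the full semantic scale (displayLarge through labelSmall) intact.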
One-time events should never be part of regular UiState — they can't be deduplicated like state. Using the wrong approach causes events to fire multiple times or get lost on rotation.
// ❌ Wrong — StateFlow deduplicates identical values
// If the user clicks "Navigate" twice rapidly, the second navigation is lost!
private val _navigate = MutableStateFlow<String?>(null)

// ✅ Option 1: Channel — one consumer, guaranteed delivery
class HomeViewModel : ViewModel() {
    private val _events = Channel<UiEvent>(Channel.BUFFERED)
    val events = _events.receiveAsFlow()

    fun onLoginSuccess() {
        viewModelScope.launch { _events.send(UiEvent.NavigateToHome) }
    }
    fun onError(msg: String) {
        viewModelScope.launch { _events.send(UiEvent.ShowSnackbar(msg)) }
    }
}

sealed class UiEvent {
    object NavigateToHome : UiEvent()
    data class ShowSnackbar(val message: String) : UiEvent()
}

// Collect in Compose
val snackbarHostState = remember { SnackbarHostState() }
LaunchedEffect(Unit) {
    viewModel.events.collect { event ->
        when (event) {
            is UiEvent.NavigateToHome -> navController.navigate(HomeRoute)
            is UiEvent.ShowSnackbar -> snackbarHostState.showSnackbar(event.message)
        }
    }
}

// ✅ Option 2: SharedFlow(replay = 0) — no replay, multiple collectors
private val _events = MutableSharedFlow<UiEvent>()
viewModelScope.launch { _events.emit(UiEvent.NavigateToHome) }
- StateFlow deduplicates — never use it for one-time events (navigation, toasts)
- Channel.receiveAsFlow(): guaranteed delivery to ONE collector — best for most cases
- SharedFlow(replay=0): zero replay, can have multiple collectors — events not cached
- Collect events in LaunchedEffect(Unit) — runs once when composable enters composition
- SnackbarHostState.showSnackbar(): suspends until snackbar is dismissed — handles queuing
This is one of the most frequently asked advanced Compose questions in 2025. The StateFlow deduplication problem is subtle — navigation to the same screen twice only fires once. Channel or SharedFlow(replay=0) solves this correctly.
Compose's smart recomposition relies on stability — stable types can be skipped if parameters haven't changed. The Compose compiler reports which composables are skippable and why.
// Enable Compose compiler metrics in build.gradle.kts
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile>().configureEach {
    compilerOptions.freeCompilerArgs.addAll(
        "-P",
        "plugin:androidx.compose.compiler.plugins.kotlin:metricsDestination=${project.buildDir}/compose_metrics",
        "-P",
        "plugin:androidx.compose.compiler.plugins.kotlin:reportsDestination=${project.buildDir}/compose_reports"
    )
}

// Stable types (Compose can skip recomposition):
// ✅ Primitives (Int, String, Boolean, Float)
// ✅ @Immutable or @Stable annotated classes
// ✅ data class with only stable properties

// Unstable types (Compose cannot skip):
// ❌ List, Map, Set (read-only interfaces — the backing instance may still be mutable)
// ❌ Classes with var properties
// ❌ Classes from external libraries that aren't annotated

// Fix: use immutable collections
@Stable
data class UserListState(
    val users: ImmutableList<User> // kotlinx.collections.immutable
)

// Or wrap in @Immutable
@Immutable
data class UserListState(val users: List<User>)

// Layout Inspector — visualise recompositions live
// Android Studio → View → Tool Windows → Layout Inspector
// Enable "Show Recomposition Counts" — highlights hot composables
- Stable types: Compose can skip recomposition if all parameters are equal and stable
- List/Map are unstable — pass ImmutableList or wrap in @Immutable data class
- Compose compiler metrics: reports skippable vs non-skippable composables
- Layout Inspector: live recomposition counts — find hot spots visually
- @Stable: a contract that equals() is reliable and property changes notify composition — enables skipping
The List stability issue catches everyone. UserListScreen(users: List<User>) is never skippable because List is unstable. Fix: use ImmutableList from kotlinx-collections-immutable or wrap in @Immutable data class. This one fix can eliminate most recompositions in list-heavy screens.
Scaffold implements the basic Material Design layout structure — it handles the coordination of TopAppBar, BottomAppBar, FAB, SnackbarHost, and content padding automatically.
@Composable
fun HomeScreen(viewModel: HomeViewModel) {
    val snackbarHostState = remember { SnackbarHostState() }

    Scaffold(
        topBar = {
            TopAppBar(
                title = { Text("Home") },
                navigationIcon = {
                    IconButton(onClick = { navController.navigateUp() }) {
                        Icon(Icons.Default.ArrowBack, "Back")
                    }
                },
                actions = {
                    IconButton(onClick = { openSettings() }) {
                        Icon(Icons.Default.Settings, "Settings")
                    }
                }
            )
        },
        floatingActionButton = {
            FloatingActionButton(onClick = { onAddClick() }) {
                Icon(Icons.Default.Add, "Add")
            }
        },
        snackbarHost = { SnackbarHost(snackbarHostState) },
        bottomBar = { BottomNavigationBar(navController) }
    ) { innerPadding ->
        // innerPadding keeps content from going under the AppBar or BottomBar
        LazyColumn(contentPadding = innerPadding) {
            items(items) { ItemRow(it) }
        }
    }
}
- Scaffold coordinates Material layout slots — topBar, bottomBar, FAB, snackbarHost
- innerPadding: critical — apply to content to avoid overlap with system bars and AppBar
- SnackbarHostState: showSnackbar() suspends — handles queuing automatically
- TopAppBar: CenterAlignedTopAppBar for centered title (Instagram/TikTok style)
- NavigationBar: replaces BottomNavigation — use with NavigationBarItem
Forgetting to apply innerPadding is the most common Scaffold mistake — content goes under the TopAppBar or BottomBar. Always pass innerPadding to the scrollable content either as contentPadding or Modifier.padding(innerPadding).
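The BottomNavigationBar slot used in the Scaffold example can be sketched with NavigationBar and NavigationBarItem. The BottomNavItem class and the item list are hypothetical:

```kotlin
// Sketch: Material3 bottom navigation wired to Navigation Compose
data class BottomNavItem(val route: String, val icon: ImageVector, val label: String)

@Composable
fun BottomNavigationBar(navController: NavController, items: List<BottomNavItem>) {
    // Observe the back stack so the selected tab updates on navigation
    val currentRoute = navController
        .currentBackStackEntryAsState().value?.destination?.route

    NavigationBar {
        items.forEach { item ->
            NavigationBarItem(
                selected = currentRoute == item.route,
                onClick = {
                    navController.navigate(item.route) { launchSingleTop = true }
                },
                icon = { Icon(item.icon, contentDescription = item.label) },
                label = { Text(item.label) }
            )
        }
    }
}
```

launchSingleTop prevents stacking duplicate copies of a tab when the user taps it repeatedly.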
ViewModels in Compose are scoped to navigation destinations, not Activities. The lifecycle and scoping rules are different from the View system — important to understand to avoid memory leaks.
// viewModel() — from lifecycle-viewmodel-compose
@Composable
fun HomeScreen() {
    val viewModel: HomeViewModel = viewModel()
    // Scoped to the NavBackStackEntry (destination lifecycle)
}

// hiltViewModel() — Hilt-injected ViewModel (recommended)
@Composable
fun HomeScreen() {
    val viewModel: HomeViewModel = hiltViewModel()
    // Hilt provides dependencies, scoped to the destination
}

// Shared ViewModel across screens (activity-scoped)
@Composable
fun ProfileScreen() {
    val activity = LocalContext.current as ComponentActivity
    val sharedVm: SharedViewModel = hiltViewModel(activity)
    // Same instance across all screens
}

// ViewModel scoped to a NavGraph (parent route)
@Composable
fun CheckoutScreen(navController: NavController) {
    val parentEntry = remember(navController.currentBackStackEntry) {
        navController.getBackStackEntry("checkout_graph")
    }
    val checkoutVm: CheckoutViewModel = hiltViewModel(parentEntry)
    // Lives as long as the user is inside the checkout flow
}

// ViewModel lifecycle in Compose vs the View system:
// View: scoped to the Activity or Fragment back stack entry
// Compose: scoped to the NavBackStackEntry (destination)
// Destroyed when navigating back past that destination
- hiltViewModel(): Hilt-injected ViewModel scoped to NavBackStackEntry — use this always
- Compose ViewModels are scoped to navigation destinations — cleared on navigate back
- Activity-scoped: pass LocalActivity to hiltViewModel() — shared across all screens
- NavGraph-scoped: share ViewModel within a nested nav graph — perfect for multi-step flows
- Never pass ViewModel down as parameter — pass state and lambdas instead
NavGraph-scoped ViewModel is powerful for flows like checkout or onboarding — the ViewModel lives across multiple steps but is cleared when the user exits the flow. This is much cleaner than passing data through each screen.
LazyColumn is Compose's RecyclerView equivalent but requires specific optimisations to avoid common performance pitfalls — especially around keys and item types.
// ✅ Always provide keys — stable identity
LazyColumn {
    items(users, key = { it.id }) { user ->
        // key prevents re-creation on reorder
        UserRow(user)
    }
}

// ❌ No key — items recreated on list changes (animations broken)
LazyColumn {
    items(users) { user -> UserRow(user) }
}

// contentType — hint Compose to reuse similar items
LazyColumn {
    items(
        items = feedItems,
        key = { it.id },
        contentType = { it.type } // VIDEO, IMAGE, TEXT — same-type composables reused
    ) { item ->
        when (item.type) {
            FeedType.VIDEO -> VideoItem(item)
            FeedType.IMAGE -> ImageItem(item)
            FeedType.TEXT -> TextItem(item)
        }
    }
}

// Sticky headers (requires @OptIn(ExperimentalFoundationApi::class))
val grouped = users.groupBy { it.department }
LazyColumn {
    grouped.forEach { (dept, deptUsers) ->
        stickyHeader { DepartmentHeader(dept) }
        items(deptUsers, key = { it.id }) { UserRow(it) }
    }
}

// rememberLazyListState — scroll control and observation
val listState = rememberLazyListState()
LazyColumn(state = listState) { }

// Scroll to a position
LaunchedEffect(Unit) { listState.scrollToItem(10) }

// Animate scroll
LaunchedEffect(Unit) { listState.animateScrollToItem(0) }
- key: stable identity — enables animations on reorder, prevents item recreation
- contentType: hint for item reuse — LazyColumn reuses composables of same type
- stickyHeader: headers that stick to top while scrolling through their section
- rememberLazyListState: observe scroll position, programmatically scroll
- Avoid putting heavy logic inside items — compute in ViewModel, pass results down
contentType is the most overlooked LazyColumn optimisation. Without it, Compose might try to reuse a VideoItem composable for a TextItem — causing a full recompose. With contentType, only same-type items are reused — like RecyclerView's viewType.
Compose is split into layers — the runtime (slot table, recomposition engine) is separate from the UI toolkit. This separation enables Compose to be used outside of Android UI — for example in Compose for Desktop and Compose Multiplatform.
// Compose layers (bottom to top):
// 1. compose-runtime — core: slot table, snapshot state, recomposition
// 2. compose-ui — draw, layout, input, accessibility
// 3. compose-foundation — basic building blocks: Box, Text, Image, LazyColumn
// 4. compose-material3 — Material Design components

// compose-runtime: no Android dependency
// Powers: Compose UI, Compose for Desktop, Compose for iOS (Compose Multiplatform)

// Snapshot state system — how Compose tracks changes
val state = mutableStateOf(0)
// Reading state inside a composable subscribes to it
// Writing state marks the composable as needing recomposition

// Snapshot — consistent view of state at a point in time
// Allows safe state reads from multiple threads
// Compose's state system is thread-safe because of snapshots

// Slot table — Compose's internal data structure
// Stores the composition (composable calls + their positions)
// Enables Compose to diff and update efficiently
// Like a virtual DOM, but for Android UI

// Compose Multiplatform — shared UI across platforms
// implementation("org.jetbrains.compose.ui:ui")
// Write Compose UI once → run on Android, iOS, Desktop, Web
@Composable
fun SharedUi() {
    Column {
        Text("Runs on Android AND iOS!") // Compose Multiplatform
    }
}
- Compose runtime: platform-agnostic — slot table, snapshot state, recomposition engine
- Snapshot state: thread-safe state reading — enables consistent UI updates from any thread
- Slot table: stores the composition tree — Compose diffs this to find what changed
- Compose Multiplatform: JetBrains extends Compose runtime to iOS, Desktop, Web
- Layered architecture: you can use compose-runtime without compose-ui for non-UI trees
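The snapshot system described above is also a public API, not just an internal. A minimal sketch using `Snapshot.withMutableSnapshot` from compose-runtime to batch two state writes atomically (the `rename` function and field names are illustrative):

```kotlin
// Sketch: batching two state writes so observers see them together.
// Snapshot.withMutableSnapshot applies both changes atomically —
// no reader ever observes firstName updated but lastName stale.
val firstName = mutableStateOf("Ada")
val lastName = mutableStateOf("Lovelace")

fun rename(newFirst: String, newLast: String) {
    Snapshot.withMutableSnapshot {
        firstName.value = newFirst
        lastName.value = newLast
    }
}
```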
Compose's "virtual DOM" analogy: the slot table tracks what composables called what, in what order. When state changes, Compose replays from the affected point. Knowing this makes recomposition behaviour intuitive rather than magical.
@Preview renders composables in Android Studio without running the app. Used effectively, previews become a development superpower — enabling rapid UI iteration with instant visual feedback.
// Basic preview
@Preview(showBackground = true)
@Composable
fun UserCardPreview() {
    AppTheme { UserCard(user = User("Rahul", "Android Developer")) }
}

// Multiple previews — test different states
@Preview(name = "Loading", showBackground = true)
@Composable
fun LoadingPreview() { HomeContent(state = UiState.Loading) }

@Preview(name = "Success", showBackground = true)
@Composable
fun SuccessPreview() { HomeContent(state = UiState.Success(fakeUsers)) }

@Preview(name = "Error", showBackground = true)
@Composable
fun ErrorPreview() { HomeContent(state = UiState.Error("Network error")) }

// Preview with different configurations
@Preview(name = "Dark", uiMode = Configuration.UI_MODE_NIGHT_YES)
@Preview(name = "Large text", fontScale = 1.5f)
@Preview(name = "Tablet", device = Devices.TABLET)
@Composable
fun MultiConfigPreview() { AppTheme { HomeScreen() } }

// Custom annotation for reuse
@Preview(name = "Light", showBackground = true)
@Preview(name = "Dark", showBackground = true, uiMode = Configuration.UI_MODE_NIGHT_YES)
annotation class ThemePreviews

@ThemePreviews
@Composable
fun ButtonPreview() { AppTheme { PrimaryButton("Click me") } }
- @Preview only works on composables with no parameters (or defaults / @PreviewParameter) — another reason to use state hoisting
- Multiple @Preview on one function: see all states side by side in Android Studio
- Preview parameters: uiMode, fontScale, device, locale — test edge cases without running app
- Custom preview annotation: combine multiple @Preview into one reusable annotation
- PreviewParameterProvider: generate multiple previews from a data set
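The PreviewParameterProvider bullet deserves its own sketch. Assuming the `User` and `UserCard` types from the previews above, a provider renders one preview per value in its sequence:

```kotlin
// Sketch: one preview function rendered once per value in the provider.
// User, UserCard, and AppTheme are the names assumed by the previews above.
class UserPreviewProvider : PreviewParameterProvider<User> {
    override val values = sequenceOf(
        User("Rahul", "Android Developer"),
        User("Priya", "Backend Engineer")
    )
}

@Preview(showBackground = true)
@Composable
fun UserCardDataPreview(
    @PreviewParameter(UserPreviewProvider::class) user: User
) {
    AppTheme { UserCard(user) }
}
```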
Effective preview usage is a sign of a mature Compose developer. Say: "I have Loading, Success, Error, and Dark mode previews for every screen. I catch 80% of UI bugs without running the app." This shows you use Compose as intended, not just write it.
Profile first, fix second. The answer involves Layout Inspector, stability annotations, missing keys, and heavy work inside composables.
// Step 1: Layout Inspector → Show Recomposition Counts
// Red = excessive recompositions

// FIX 1: Always provide keys
// ❌ items(posts) { PostCard(it) }
// ✅ items(posts, key = { it.id }) { PostCard(it) }

// FIX 2: Unstable types prevent skipping
// ❌ List<Post> is unstable — PostCard never skipped
// ✅ Wrap in @Immutable
@Immutable
data class FeedState(val posts: List<Post>)

// FIX 3: Heavy work inside composable
// ❌ Date formatting on every recompose
Text(SimpleDateFormat("dd/MM").format(post.date))
// ✅ Pre-format in remember
val date = remember(post.date) { formatDate(post.date) }

// FIX 4: contentType — reuse composables of same type
items(feed, key = { it.id }, contentType = { it.type }) { FeedItem(it) }

// FIX 5: Image sizing — load at display size only
AsyncImage(
    model = ImageRequest.Builder(ctx).data(url).size(200).build(),
    modifier = Modifier.size(48.dp),
    contentDescription = null
)
- Profile first: Layout Inspector recomposition counts shows exactly what's hot
- key: prevents item recreation on scroll — most impactful single fix
- @Immutable: makes composable skippable when inputs unchanged
- contentType: LazyColumn reuses same-type composable slots efficiently
- remember(input): memoize expensive computations like date formatting
Always start with "profile first with Layout Inspector." Interviewers want to see engineering discipline. Then list fixes in order of impact: keys → stability → heavy work → image sizes.
This tests the full ViewModel + Flow + Compose UI pipeline — the most common architecture pattern in production Android apps.
@OptIn(FlowPreview::class, ExperimentalCoroutinesApi::class) // debounce, flatMapLatest
@HiltViewModel
class SearchViewModel @Inject constructor(private val repo: SearchRepo) : ViewModel() {
    private val _query = MutableStateFlow("")

    val state = _query
        .debounce(300)
        .distinctUntilChanged()
        .flatMapLatest { q ->
            if (q.isBlank()) flowOf(SearchState.Idle)
            else flow {
                emit(SearchState.Loading)
                runCatching { repo.search(q) }
                    .onSuccess { emit(SearchState.Success(it)) }
                    .onFailure { emit(SearchState.Error(it.message ?: "Unknown error")) }
            }
        }
        .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), SearchState.Idle)

    fun onQuery(q: String) { _query.value = q }
}

@Composable
fun SearchScreen(vm: SearchViewModel = hiltViewModel()) {
    var query by rememberSaveable { mutableStateOf("") }
    val state by vm.state.collectAsStateWithLifecycle()

    Column {
        TextField(value = query, onValueChange = { query = it; vm.onQuery(it) })
        when (val s = state) {
            is SearchState.Idle -> Hint()
            is SearchState.Loading -> CircularProgressIndicator()
            is SearchState.Success -> ResultList(s.results)
            is SearchState.Error -> ErrorText(s.msg)
        }
    }
}
- debounce(300): wait 300ms after last keystroke — prevents excessive API calls
- flatMapLatest: cancels previous search when new query arrives
- Sealed SearchState: Idle/Loading/Success/Error — exhaustive when in UI
- rememberSaveable for query: preserves typed text on rotation
- stateIn(WhileSubscribed(5000)): stops upstream 5s after no collectors
Walk through the chain: query → debounce → flatMapLatest → sealed state → stateIn → collectAsStateWithLifecycle → when(state). Each step solves a specific problem. Explaining WHY each is there separates senior from junior answers.
Tests ModalBottomSheet, SnackbarHost, one-time events via Channel, and correct Scaffold usage.
@Composable
fun ProductScreen(vm: ProductViewModel = hiltViewModel()) {
    val snackbar = remember { SnackbarHostState() }
    var selected by remember { mutableStateOf<Product?>(null) }

    LaunchedEffect(Unit) {
        vm.events.collect { event ->
            when (event) {
                is ProductEvent.AddedToCart -> snackbar.showSnackbar("${event.name} added!")
            }
        }
    }

    Scaffold(snackbarHost = { SnackbarHost(snackbar) }) { p ->
        ProductList(Modifier.padding(p), onTap = { selected = it })
    }

    selected?.let { product ->
        ModalBottomSheet(onDismissRequest = { selected = null }) {
            ProductDetail(product, onAdd = { vm.addToCart(product); selected = null })
        }
    }
}

// ViewModel uses Channel — StateFlow would deduplicate same product
private val _events = Channel<ProductEvent>()
val events = _events.receiveAsFlow()
- ModalBottomSheet shown by non-null state — null to dismiss
- Channel for snackbar: StateFlow deduplicates — adding same item twice would fire once
- LaunchedEffect(Unit) for events: collects for composable lifetime
- SnackbarHostState: handles snackbar queuing automatically
- Dismiss sheet on action: set selected = null
The Channel vs StateFlow distinction for events is the key insight. StateFlow deduplicates same value — if user adds the same product twice rapidly, second snackbar would be lost. Channel guarantees every event is delivered.
Incremental migration is the only safe approach — never big-bang rewrite. Compose and Views coexist seamlessly through interop APIs.
// Strategy: Incremental, screen by screen

// Phase 1: New screens in Compose
// All new features built in Compose, old screens stay XML

// Phase 2: ComposeView in existing Fragments
class ProfileFragment : Fragment() {
    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ) = ComposeView(requireContext()).apply {
        setViewCompositionStrategy(ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed)
        setContent { AppTheme { ProfileScreen() } }
    }
}

// Phase 3: AndroidView for unmigrated components
@Composable
fun LegacyMap() {
    AndroidView(factory = { ctx -> MapView(ctx) })
}

// Migration order (safest to riskiest):
// 1. Leaf components (buttons, cards, list items)
// 2. Reusable components (headers, footers)
// 3. Full screens
// 4. Navigation (last — most disruptive)

// ViewModel stays unchanged throughout
// Both XML and Compose observe the same StateFlow
- Never big-bang rewrite — incremental is the only safe approach
- ComposeView: embed Compose in Fragment without changing Activity/Fragment structure
- AndroidView: keep complex legacy Views (Maps, Charts) inside Compose screens
- ViewModels unchanged: both XML and Compose observe the same StateFlow
- Migrate navigation last — most disruptive change
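ComposeView can also be declared directly in an existing XML layout rather than created programmatically. A sketch, assuming a view-binding property named `composeHost` and a `ProfileHeader` composable (both names illustrative):

```kotlin
// Sketch: ComposeView declared in the Fragment's existing XML layout —
// <androidx.compose.ui.platform.ComposeView
//     android:id="@+id/compose_host" ... />
// binding.composeHost is an assumed view-binding property.
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    binding.composeHost.apply {
        setViewCompositionStrategy(
            ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed
        )
        setContent { AppTheme { ProfileHeader() } }
    }
}
```

This lets you migrate one region of a screen at a time while the rest of the XML layout stays untouched.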
setViewCompositionStrategy(DisposeOnViewTreeLifecycleDestroyed) is critical for Fragments — without it, the composition leaks when the Fragment view is destroyed. This is the most common ComposeView migration bug.
Rotation crashes in Compose are almost always state management issues — remember vs rememberSaveable, ViewModel scoping, or unstable LaunchedEffect keys.
// Debug: Enable "Don't keep activities" in Dev Options
// Forces Activity recreation aggressively

// CAUSE 1: remember loses state on rotation
// ❌ var index by remember { mutableStateOf(0) } // 0 after rotation
// Crash: list[0] accessed when list is empty on restart
// ✅ var index by rememberSaveable { mutableStateOf(0) }

// CAUSE 2: ViewModel created manually
// ❌ New instance every rotation
val vm = MyViewModel()
// ✅ Survives rotation
val vm: MyViewModel = hiltViewModel()

// CAUSE 3: LaunchedEffect re-runs after rotation — guard against empty state
// ❌
LaunchedEffect(users.size) {
    loadDetailFor(users[0]) // crash if users empty after rotation
}
// ✅
LaunchedEffect(Unit) {
    users.firstOrNull()?.let { loadDetailFor(it) }
}

// CAUSE 4: NavController not remembered
// ❌ val nav = NavHostController(context) // lost on rotation
// ✅ val nav = rememberNavController()
- Enable "Don't keep activities" — aggressively tests rotation handling
- rememberSaveable vs remember: rotation destroys remember state
- ViewModel: always use hiltViewModel() or viewModel() — never instantiate manually
- LaunchedEffect: re-runs after rotation (the composition is recreated) — never assume data from the previous composition is already loaded
- NavController: always rememberNavController() — preserves back stack
"Enable Don't keep activities" immediately signals you know how to properly test rotation. It forces Activity recreation far more aggressively than just rotating, exposing all state management bugs quickly.
Both provide coroutine scopes in Compose — LaunchedEffect for automatic state-driven effects, rememberCoroutineScope for user-triggered actions like button clicks.
// LaunchedEffect — runs automatically when key changes
@Composable
fun Screen(userId: String) {
    LaunchedEffect(userId) {
        viewModel.loadUser(userId) // runs on composition + key change
    }
}

// rememberCoroutineScope — user-triggered actions
@Composable
fun ScrollToTopButton(listState: LazyListState) {
    val scope = rememberCoroutineScope()
    FloatingActionButton(onClick = {
        scope.launch { listState.animateScrollToItem(0) }
    }) {
        Icon(Icons.Default.KeyboardArrowUp, null)
    }
}

// Decision rule:
// State/lifecycle drives it → LaunchedEffect
// User action drives it → rememberCoroutineScope

// ❌ Anti-pattern: LaunchedEffect for click
var clicked by remember { mutableStateOf(false) }
LaunchedEffect(clicked) { if (clicked) doWork() }

// ✅ Correct: scope for click
val scope = rememberCoroutineScope()
Button(onClick = { scope.launch { doWork() } }) { Text("Go") }
- LaunchedEffect: automatic — runs when composable enters or key changes
- rememberCoroutineScope: manual — launch from click/event handlers
- Both cancelled when composable leaves composition
- Anti-pattern: using LaunchedEffect with a flag for click events
- Most async work should be in ViewModel — scope is for UI-layer coroutines only
Simple rule: "LaunchedEffect = something that happens TO the UI. rememberCoroutineScope = something the USER makes happen." Scroll-to-top on button click → scope.launch. Load data when screen appears → LaunchedEffect.
The slot API uses @Composable lambdas as parameters — letting callers inject any UI into predefined slots. It's how Material3 achieves maximum flexibility with minimal parameters.
// Slot API — @Composable lambda parameters
@Composable
fun CustomCard(
    header: (@Composable () -> Unit)? = null,
    footer: (@Composable () -> Unit)? = null,
    content: @Composable () -> Unit
) {
    Card {
        Column {
            header?.invoke()
            Box(Modifier.padding(16.dp)) { content() }
            footer?.invoke()
        }
    }
}

// Usage — caller decides each slot's content
CustomCard(
    header = { Image(hero, contentDescription = null) },
    content = { Text("Main content") },
    footer = {
        Row {
            Button(onClick = {}) { Text("OK") }
            TextButton(onClick = {}) { Text("Cancel") }
        }
    }
)

// Material3 uses slots everywhere
TopAppBar(
    title = { Text("Title") },          // title slot
    navigationIcon = { BackButton() },  // icon slot
    actions = { SearchIcon() }          // actions slot
)

Button(onClick = {}) {
    Icon(Icons.Default.Add, null)
    Text("Add") // content slot — any composable works
}
- Slot API: @Composable lambdas as parameters — caller controls what renders
- Optional slots: nullable lambdas — skip sections when null
- Maximum flexibility without endless parameters for every content variant
- Material3 built on slots: Button, TopAppBar, Scaffold all use this pattern
- Scope-restricted slots: RowScope/ColumnScope limit available APIs inside slot
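The scope-restricted slot bullet can be sketched concretely. Giving the slot a `RowScope` receiver lets callers use Row-only modifiers such as `weight` inside it (`ActionBar` is an illustrative name, not a Material3 API):

```kotlin
// Sketch: a slot with a RowScope receiver.
// Callers can use RowScope-only APIs (e.g. Modifier.weight) inside the slot,
// and the compiler rejects the slot lambda anywhere a Row isn't hosting it.
@Composable
fun ActionBar(actions: @Composable RowScope.() -> Unit) {
    Row(Modifier.fillMaxWidth()) { actions() }
}

// Usage — weight() is only legal because the slot is RowScope-scoped
ActionBar {
    Button(onClick = {}, modifier = Modifier.weight(1f)) { Text("Save") }
    TextButton(onClick = {}, modifier = Modifier.weight(1f)) { Text("Cancel") }
}
```

This is exactly how RowScope reaches the content slot of Material3's Button.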
Button's content lambda IS a slot — you can put Icon+Text, just Image, or anything else. This flexibility is impossible with traditional View parameters. Understanding the slot API is what makes you design reusable Compose components rather than one-off widgets.
Bottom navigation with multiple tabs is a fundamental app pattern. The key is saving and restoring tab state so scroll position is preserved when switching tabs.
@Serializable object HomeTab
@Serializable object SearchTab
@Serializable object ProfileTab

@Composable
fun MainScreen() {
    val nav = rememberNavController()
    val dest by nav.currentBackStackEntryAsState()

    Scaffold(
        bottomBar = {
            NavigationBar {
                NavigationBarItem(
                    selected = dest?.destination?.hasRoute(HomeTab::class) == true,
                    onClick = {
                        nav.navigate(HomeTab) {
                            popUpTo(nav.graph.startDestinationId) { saveState = true }
                            launchSingleTop = true
                            restoreState = true // restore scroll position!
                        }
                    },
                    icon = { Icon(Icons.Default.Home, null) },
                    label = { Text("Home") }
                )
                // Repeat for Search, Profile...
            }
        }
    ) { padding ->
        NavHost(nav, HomeTab, Modifier.padding(padding)) {
            composable<HomeTab> { HomeScreen() }
            composable<SearchTab> { SearchScreen() }
            composable<ProfileTab> { ProfileScreen() }
        }
    }
}
- NavigationBar + NavigationBarItem: Material3 bottom nav components
- restoreState = true: preserves scroll position when switching tabs
- saveState = true: saves tab state before leaving — pairs with restoreState
- launchSingleTop: prevents duplicate tabs on back stack
- popUpTo startDestination: back from any tab exits app cleanly
restoreState = true is what makes tabs feel native — scroll position preserved just like Instagram or YouTube. Without saveState/restoreState, every tab switch resets to the top. This is the most common bottom-nav implementation bug.
@Composable is a compiler plugin annotation that fundamentally transforms the function — adding hidden parameters and making it part of the Compose slot table and recomposition system.
// What you write:
@Composable
fun Greeting(name: String) {
    Text("Hello $name")
}

// What the compiler generates (conceptually):
fun Greeting(name: String, composer: Composer, changed: Int) {
    composer.startRestartGroup(...)
    if (changed != 0 || !composer.skipping) {
        Text(name, composer, ...)
    }
    composer.endRestartGroup()
}

// Rules for composables:
// ✅ Can only be called from @Composable context
// ✅ Can use remember, LaunchedEffect, state
// ✅ Can be skipped if inputs unchanged (smart recomposition)
// ✅ Can return values (not just Unit)
// ❌ Must be idempotent — same input → same output
// ❌ Must not have side effects outside Compose APIs

// ❌ Not idempotent — different output every recompose
@Composable
fun Bad() { Text(Random.nextInt().toString()) }

// ✅ Stabilised with remember
@Composable
fun Good() {
    val value = remember { Random.nextInt() }
    Text(value.toString())
}
- @Composable: compile-time transformation — hidden Composer parameter added
- Composable functions track position in slot table via hidden call stack info
- "Infectious": can only call @Composable from other @Composable
- Must be idempotent: same inputs → same UI output every time
- Can return values: @Composable fun rememberXxx() = remember { Xxx() }
The hidden Composer parameter explains why @Composable is "infectious." The Composer must be threaded through every call — that's why you can't call LaunchedEffect from a regular function. Knowing this shows deep Compose understanding.
Compose provides a gesture API ranging from simple clicks to complex multi-touch. Each level trades simplicity for control.
// clickable — simplest, handles ripple + accessibility (single click only)
Box(Modifier.clickable(onClick = { handleClick() }))

// combinedClickable — adds long-click and double-click
Box(Modifier.combinedClickable(
    onClick = { select() },
    onDoubleClick = { zoomIn() },
    onLongClick = { showContextMenu() }
))

// draggable — single-axis drag
var offsetX by remember { mutableFloatStateOf(0f) }
Box(Modifier
    .offset { IntOffset(offsetX.roundToInt(), 0) }
    .draggable(rememberDraggableState { offsetX += it }, Orientation.Horizontal)
)

// pointerInput — full raw gesture control
Box(Modifier.pointerInput(Unit) {
    detectTapGestures(
        onTap = { println("Tap at $it") },
        onDoubleTap = { zoomIn() },
        onLongPress = { showMenu() }
    )
})

// detectTransformGestures — pinch-zoom + pan + rotate
Box(Modifier.pointerInput(Unit) {
    detectTransformGestures { _, pan, zoom, rotation ->
        scale *= zoom; offset += pan; angle += rotation
    }
})
- clickable: simplest — handles ripple, accessibility, and single clicks automatically (long press needs combinedClickable)
- combinedClickable: adds double-click detection
- draggable: single-axis drag with delta updates
- pointerInput + detectTapGestures: raw tap/double-tap/long press
- detectTransformGestures: pinch-zoom + rotate + pan simultaneously
Use the highest-level API that fits. clickable for buttons (automatic ripple and accessibility), draggable for sliders/drawers, pointerInput for custom gestures like pinch-zoom. Going lower than needed adds complexity with no benefit.
Collapsing toolbars connect scroll state to layout transformations. Material3 provides a built-in solution via LargeTopAppBar — always prefer this over custom implementations.
// Material3 built-in — easiest approach (RECOMMENDED)
@Composable
fun CollapsingScreen() {
    val scrollBehavior = TopAppBarDefaults.exitUntilCollapsedScrollBehavior()
    Scaffold(
        topBar = {
            LargeTopAppBar(
                title = { Text("Profile") },
                scrollBehavior = scrollBehavior
            )
        },
        modifier = Modifier.nestedScroll(scrollBehavior.nestedScrollConnection)
    ) { padding ->
        LazyColumn(Modifier.padding(padding)) {
            items(100) { ListItem({ Text("Item $it") }) }
        }
    }
}

// Other built-in behaviours — when Material3's default doesn't fit
val scrollBehavior2 = TopAppBarDefaults.pinnedScrollBehavior()       // stays visible
val scrollBehavior3 = TopAppBarDefaults.enterAlwaysScrollBehavior()  // re-appears on scroll up

// NestedScrollConnection for fully custom behaviour
val connection = remember {
    object : NestedScrollConnection {
        override fun onPreScroll(available: Offset, source: NestedScrollSource): Offset {
            // intercept scroll delta, update toolbar height
            return Offset.Zero
        }
    }
}
- LargeTopAppBar: built-in collapsing — handles everything automatically
- exitUntilCollapsedScrollBehavior: collapses on scroll down, stays collapsed
- enterAlwaysScrollBehavior: collapses on down, re-appears on scroll up
- nestedScroll: connects Scaffold to LazyColumn scroll events
- NestedScrollConnection: custom behavior — intercept scroll deltas manually
For interviews: "I'd use LargeTopAppBar with exitUntilCollapsedScrollBehavior — it's Material3's built-in solution." Only reach for NestedScrollConnection for non-standard behaviour like a parallax hero image or custom shrink animations.
Shared element transitions animate UI elements smoothly between screens. Native Compose support arrived in Compose 1.7 (2024) via SharedTransitionLayout.
// Stable since Compose 1.7 / BOM 2024.09.00
@Composable
fun App() {
    val nav = rememberNavController()
    SharedTransitionLayout { // this: SharedTransitionScope — wraps NavHost
        NavHost(nav, startDestination = "list") {
            composable("list") { // content lambda IS an AnimatedVisibilityScope
                val animatedScope = this
                LazyColumn {
                    items(products, key = { it.id }) { p ->
                        with(this@SharedTransitionLayout) {
                            Image(
                                painter = painterResource(p.image),
                                contentDescription = null,
                                modifier = Modifier
                                    .sharedElement(
                                        rememberSharedContentState("img-${p.id}"),
                                        animatedVisibilityScope = animatedScope
                                    )
                                    .clickable { nav.navigate("detail/${p.id}") }
                            )
                        }
                    }
                }
            }
            composable("detail/{id}") { back ->
                val animatedScope = this
                val id = back.arguments?.getString("id")
                with(this@SharedTransitionLayout) {
                    Image(
                        painter = painterResource(products.first { it.id == id }.image),
                        contentDescription = null,
                        modifier = Modifier.sharedElement(
                            rememberSharedContentState("img-$id"), // same key!
                            animatedVisibilityScope = animatedScope
                        )
                    )
                }
            }
        }
    }
}
- SharedTransitionLayout: wraps NavHost, coordinates shared element animation
- sharedElement: apply to same element in both screens with matching key string
- rememberSharedContentState(key): pairs elements across destinations
- AnimatedVisibilityScope: required by sharedElement — the composable {} content lambda already is one, so pass this
- Stable in Compose 1.7 (Sep 2024) — production ready
Shared element transitions were one of the most requested Compose features for years. Knowing they're stable in Compose 1.7 (2024) and the SharedTransitionLayout API shows you follow Compose releases — a strong signal at senior level.
snapshotFlow converts Compose State into a Flow — enabling Flow operators like debounce, filter, distinctUntilChanged on state changes.
// snapshotFlow — Compose State → Flow
@Composable
fun ScrollAnalytics() {
    val listState = rememberLazyListState()

    LaunchedEffect(listState) {
        // Track scroll position changes
        snapshotFlow { listState.firstVisibleItemIndex }
            .distinctUntilChanged()
            .filter { it > 0 }
            .collect { index -> analytics.track(index) }
    }

    // Debounce scroll for persistence
    LaunchedEffect(listState) {
        snapshotFlow { listState.firstVisibleItemIndex }
            .distinctUntilChanged()
            .debounce(500) // wait until scrolling stops
            .collect { index -> viewModel.saveScroll(index) }
    }

    LazyColumn(state = listState) { /* ... */ }
}

// Read multiple state values
snapshotFlow {
    listState.firstVisibleItemIndex to listState.firstVisibleItemScrollOffset
}.collect { (index, offset) -> save(index, offset) }

// Must be collected inside a coroutine — use in LaunchedEffect
// Re-emits whenever any State read inside the lambda changes
- snapshotFlow: converts Compose State reads into a Flow
- Enables: debounce, filter, distinctUntilChanged on State changes
- Must be collected inside a coroutine (e.g. LaunchedEffect) — snapshotFlow returns a cold Flow, not a suspend function
- Re-emits whenever ANY State read inside the lambda changes
- Perfect for: analytics tracking, debounced persistence, filtered state reactions
snapshotFlow is the bridge from Compose's snapshot system to Kotlin Flow. Use it when you need Flow operators on Compose State — debouncing scroll saves, throttling analytics, or filtering state changes before reacting to them.
Compose startup involves both app-level init and first composition. Both need optimisation. Baseline Profiles give the biggest single impact.
// 1. Baseline Profiles — biggest single impact
// ./gradlew :app:generateBaselineProfile
// AOT compiles hot Compose code → 40% faster cold start

// 2. Defer heavy ViewModel work
// ❌
class HomeViewModel : ViewModel() {
    private val data = loadAllData() // blocks constructor!
}
// ✅
class HomeViewModel : ViewModel() {
    init { viewModelScope.launch(Dispatchers.IO) { loadData() } }
}

// 3. Skeleton screens — show immediately
when (state) {
    is UiState.Loading -> SkeletonScreen() // instant visual response
    is UiState.Success -> ContentScreen(state.data)
}

// 4. Defer non-critical composables
var showHeavy by remember { mutableStateOf(false) }
LaunchedEffect(Unit) { delay(100); showHeavy = true }
if (showHeavy) HeavyAnalyticsDashboard()

// 5. R8 full mode — shrinks Compose runtime
// buildTypes { release { isMinifyEnabled = true } }

// 6. App Startup library — parallelise init
// Move heavy init from Application.onCreate to lazy Initializers
- Baseline Profiles: biggest impact — AOT compiles hot paths, 40% faster cold start
- Skeleton screens: show placeholder immediately — perceived performance improvement
- Defer heavy init: never block Application or ViewModel constructor
- LaunchedEffect + delay: defer non-critical composables by one frame
- R8 full mode: shrinks Compose runtime code along with app code
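A sketch of the Gradle wiring behind the Baseline Profile step, assuming a separate `:baselineprofile` Macrobenchmark module (the module name and version numbers are illustrative; the plugin and artifacts are the AndroidX ones):

```kotlin
// app/build.gradle.kts — sketch, versions are illustrative
plugins {
    id("androidx.baselineprofile")
}

dependencies {
    // Installs the generated profile on devices without cloud profiles
    implementation("androidx.profileinstaller:profileinstaller:1.3.1")
    // The :baselineprofile module hosts the profile-generator test
    baselineProfile(project(":baselineprofile"))
}
```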
Layer your answer: "Baseline Profiles for cold start, skeleton screens for perceived performance, App Startup for init order, R8 for binary size." Multiple techniques with clear reasoning shows senior engineering thinking.
Paging 3 integrates with Compose through LazyPagingItems — it handles page loading, error states, and retry automatically when combined with LazyColumn.
// ViewModel
@HiltViewModel
class FeedViewModel @Inject constructor(repo: FeedRepo) : ViewModel() {
    val posts = Pager(PagingConfig(pageSize = 20)) { repo.getPostsPagingSource() }
        .flow
        .cachedIn(viewModelScope) // CRITICAL — preserve pages on recompose
}

// Compose UI
@Composable
fun FeedScreen(vm: FeedViewModel = hiltViewModel()) {
    val posts = vm.posts.collectAsLazyPagingItems()

    LazyColumn {
        items(count = posts.itemCount, key = posts.itemKey { it.id }) { index ->
            val post = posts[index]
            if (post != null) PostCard(post) else PostPlaceholder()
        }
        when (posts.loadState.append) {
            is LoadState.Loading -> item { CircularProgressIndicator() }
            is LoadState.Error -> item {
                Button(onClick = { posts.retry() }) { Text("Retry") }
            }
            else -> {}
        }
    }

    when (posts.loadState.refresh) {
        is LoadState.Loading -> FullScreenLoader()
        is LoadState.Error -> FullScreenError { posts.retry() }
        else -> {}
    }
}
- collectAsLazyPagingItems(): converts Flow<PagingData> for Compose LazyColumn — use itemCount + itemKey with items()
- cachedIn(viewModelScope): caches loaded pages — without this, navigation back reloads from page 1
- loadState.append: bottom loading indicator — pagination in progress
- loadState.refresh: initial load state — full screen loading/error
- posts.retry(): retry last failed page load
cachedIn(viewModelScope) is the most important line in Paging 3. Without it, every navigation back to the screen reloads from page 1, losing scroll position and all loaded pages. With it, pages are cached in the ViewModel and restored instantly.
Custom OTP/PIN inputs use a hidden BasicTextField for keyboard handling and custom visual boxes for display — the standard pattern for all PIN/OTP screens.
@Composable
fun PinField(onComplete: (String) -> Unit) {
    var pin by remember { mutableStateOf("") }
    val focus = remember { FocusRequester() }

    LaunchedEffect(Unit) { focus.requestFocus() }

    Box(Modifier.fillMaxWidth()) {
        // Hidden input — captures keyboard
        BasicTextField(
            value = pin,
            onValueChange = { new ->
                if (new.length <= 6 && new.all { it.isDigit() }) {
                    pin = new
                    if (new.length == 6) onComplete(new)
                }
            },
            keyboardOptions = KeyboardOptions(keyboardType = KeyboardType.NumberPassword),
            modifier = Modifier.focusRequester(focus).size(1.dp) // hidden
        )
        // Visual boxes
        Row(
            horizontalArrangement = Arrangement.spacedBy(8.dp),
            modifier = Modifier.clickable { focus.requestFocus() }
        ) {
            (0..5).forEach { i ->
                val isFocused = pin.length == i
                Box(
                    Modifier.size(48.dp)
                        .border(
                            2.dp,
                            if (isFocused) Color.Blue else Color.Gray,
                            RoundedCornerShape(8.dp)
                        ),
                    contentAlignment = Alignment.Center
                ) {
                    if (pin.length > i) Text("●")
                }
            }
        }
    }
}
- Hidden BasicTextField (1.dp): captures keyboard input while being visually invisible
- Visual boxes: rendered separately — full control over appearance per digit
- FocusRequester: auto-focus on composition, re-focus when boxes tapped
- onValueChange filter: only digits, max 6 characters
- isFocused highlight: shows which box receives next keystroke
The hidden BasicTextField is the standard Compose OTP pattern. The hidden field handles all keyboard complexity while you control the visual representation completely. This is cleaner than managing 6 separate TextFields with manual focus passing.
Composables have their own lifecycle independent of Activity. Understanding both and how they interact prevents resource leaks and incorrect behavior.
// Composable lifecycle:
// 1. Enter Composition → composable first appears
// 2. Recompose → inputs change, re-runs
// 3. Leave Composition → removed from tree

// NOT tied to Activity lifecycle!
// Navigate away → composable leaves composition (Activity still STARTED)
// Navigate back → composable enters composition again

// LaunchedEffect tracks composable lifecycle
LaunchedEffect(Unit) {
    startWork() // on Enter Composition
    // cancelled on Leave Composition
}

// DisposableEffect: Enter + Leave cleanup
DisposableEffect(Unit) {
    register()
    onDispose { unregister() } // always called on Leave
}

// Observe Activity lifecycle FROM Compose
val owner = LocalLifecycleOwner.current
DisposableEffect(owner) {
    val observer = LifecycleEventObserver { _, event ->
        when (event) {
            Lifecycle.Event.ON_RESUME -> refresh()
            Lifecycle.Event.ON_PAUSE -> save()
            else -> {}
        }
    }
    owner.lifecycle.addObserver(observer)
    onDispose { owner.lifecycle.removeObserver(observer) }
}
- Composable lifecycle: Enter → Recompose* → Leave — independent of Activity
- Navigate away = leave composition; navigate back = enter composition
- LaunchedEffect cancelled on leave — not when Activity pauses/stops
- LocalLifecycleOwner: access Activity lifecycle from within composable
- repeatOnLifecycle: respects Activity lifecycle — used for flow collection
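The repeatOnLifecycle bullet in action: a sketch that ties flow collection to the Activity lifecycle rather than to the composition (`viewModel.events` and `handle` are illustrative names).

```kotlin
// Sketch: collect only while the lifecycle is at least STARTED.
// Collection stops in onStop and restarts in onStart — unlike a plain
// collect in LaunchedEffect, which keeps running while the app is
// merely backgrounded (as long as the composable stays in composition).
val owner = LocalLifecycleOwner.current
LaunchedEffect(owner) {
    owner.lifecycle.repeatOnLifecycle(Lifecycle.State.STARTED) {
        viewModel.events.collect { event -> handle(event) }
    }
}
```

For plain state, collectAsStateWithLifecycle() wraps this same pattern for you.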
Key insight: composable lifecycle ≠ Activity lifecycle. When you navigate away, the composable leaves composition (LaunchedEffect cancelled) but Activity is still STARTED. When you navigate back, composable re-enters and LaunchedEffect runs again from scratch.
Excessive recomposition needs a systematic debugging approach. Profile first, then apply targeted fixes.
// Step 1: Layout Inspector → Show Recomposition Counts
// Red = hot composable

// Step 2: Add SideEffect counter for exact count
class Counter(var value: Int = 0) // plain holder — writing it must not trigger recomposition

@Composable
fun SuspiciousComp(data: MyData) {
    val count = remember { Counter() }
    SideEffect { println("Recompose #${++count.value}") }
}

// CAUSE A: New lambda per recompose
// ❌ New lambda every time
items(posts) { post -> PostCard(post, onClick = { vm.like(post.id) }) }
// ✅ Stable lambda
val onLike = remember(vm) { { id: String -> vm.like(id) } }

// CAUSE B: Unstable parameter
// ❌ List<T> unstable — composable never skipped
// ✅ @Immutable data class wrapper
@Immutable
data class FeedState(val posts: List<Post>)

// CAUSE C: High-frequency state read
// ❌ Reads scroll state on every frame
val showFab = listState.firstVisibleItemIndex > 0
// ✅ derivedStateOf
val showFab by remember {
    derivedStateOf { listState.firstVisibleItemIndex > 0 }
}

// CAUSE D: Check compiler metrics
// compilerOptions.freeCompilerArgs +=
//     "-P", "plugin:...:reportsDestination=build/compose_reports"
- Start with Layout Inspector — recomposition counts show exactly what's hot
- SideEffect counter: precise count per composable during debugging
- Inline lambdas in items(): new objects every recompose → composable never skipped
- Unstable types (List): wrap in @Immutable for skippability
- High state reads: use derivedStateOf to scope recomposition to actual changes
Walk through your debugging process: "Layout Inspector → find red composable → check stability with compiler report → fix lambda/type/state read issues." A systematic process impresses more than just listing fixes.
Multi-step flows need shared state across screens, back navigation, and validation per step. A NavGraph-scoped ViewModel is the cleanest solution.
// NavGraph-scoped ViewModel — lives across all steps
@HiltViewModel
class OnboardingViewModel @Inject constructor() : ViewModel() {
    var name by mutableStateOf("")
    var email by mutableStateOf("")

    val step1Valid get() = name.isNotBlank()
    val step2Valid get() = email.contains("@")
    val progress get() = when {
        name.isNotBlank() && email.isNotBlank() -> 1f
        name.isNotBlank() -> 0.5f
        else -> 0f
    }
}

// NavGraph with shared ViewModel scoped to "onboarding" graph
@Composable
fun OnboardingFlow(nav: NavController) {
    NavHost(nav, startDestination = "onboarding") {
        navigation(startDestination = "onboarding/step1", route = "onboarding") {
            composable("onboarding/step1") { entry ->
                val vm: OnboardingViewModel = hiltViewModel(
                    remember(entry) { nav.getBackStackEntry("onboarding") }
                )
                StepContent(
                    progress = vm.progress,
                    content = {
                        TextField(vm.name, { vm.name = it }, label = { Text("Name") })
                    },
                    onNext = { if (vm.step1Valid) nav.navigate("onboarding/step2") }
                )
            }
            // step2, step3 same pattern
        }
    }
}

@Composable
fun StepContent(progress: Float, content: @Composable () -> Unit, onNext: () -> Unit) {
    Column(Modifier.padding(16.dp)) {
        LinearProgressIndicator(progress = { progress }, Modifier.fillMaxWidth())
        content()
        Button(onClick = onNext, Modifier.fillMaxWidth()) { Text("Next") }
    }
}
- NavGraph-scoped ViewModel: one ViewModel shared across all steps, cleared on flow exit
- hiltViewModel(navBackStackEntry): scope to graph's back stack entry — not the screen's
- Shared state: name, email — accessible from all steps without parameter drilling
- Validation per step: computed properties in ViewModel, Next disabled until valid
- Progress: derived from state, updates automatically as user fills fields
The NavGraph-scoped ViewModel solves "how do I share data between steps without prop drilling?" It lives across all steps and is cleared when the user exits the flow. This is the correct architectural answer — not passing data through each composable.
Compose provides text components at different abstraction levels — from styled Material components to raw BasicTextField for fully custom inputs.
// Text — display only
Text(
    text = "Hello",
    style = MaterialTheme.typography.bodyLarge,
    maxLines = 2,
    overflow = TextOverflow.Ellipsis,
    fontWeight = FontWeight.Bold
)

// AnnotatedString — mixed styles
val annotated = buildAnnotatedString {
    append("Click ")
    withStyle(SpanStyle(color = Color.Blue, fontWeight = FontWeight.Bold)) {
        append("here")
    }
}
Text(annotated)

// TextField — Material filled input
var text by remember { mutableStateOf("") }
TextField(
    value = text,
    onValueChange = { text = it },
    label = { Text("Email") },
    isError = !text.contains("@") && text.isNotEmpty(),
    singleLine = true,
    keyboardOptions = KeyboardOptions(keyboardType = KeyboardType.Email)
)

// OutlinedTextField — outlined variant, same API
OutlinedTextField(value = text, onValueChange = { text = it }, label = { Text("Name") })

// BasicTextField — no Material styling, full control
BasicTextField(
    value = text,
    onValueChange = { text = it },
    decorationBox = { innerTextField ->
        Box(Modifier.border(1.dp, Color.Gray).padding(8.dp)) {
            if (text.isEmpty()) Text("Placeholder", color = Color.Gray)
            innerTextField()
        }
    }
)
- Text: display only — AnnotatedString for mixed styles, colors, fonts
- TextField: Material filled — label, error, leadingIcon, trailingIcon built in
- OutlinedTextField: outlined Material variant — same API as TextField
- BasicTextField: unstyled — use for custom designs (OTP, chat, inline search)
- KeyboardOptions: configure keyboard type, IME action, auto-correct per field
BasicTextField with decorationBox is the secret weapon for custom inputs. OTP boxes, chat bubbles, inline search — any time Material style doesn't fit, BasicTextField gives you full visual control while keeping all keyboard handling. This shows you know the full Compose text API.
Compose Multiplatform (by JetBrains) extends Jetpack Compose to iOS, Desktop, and Web. iOS support became stable in 2024 — enabling shared UI across platforms.
// Jetpack Compose: Google's Android-only UI toolkit
// Compose Multiplatform: JetBrains extension → iOS, Desktop, Web, Android

// Shared UI in commonMain — runs on all platforms
@Composable
fun UserCard(user: User) {
    Card {
        Text(user.name, style = MaterialTheme.typography.headlineSmall)
        Text(user.email)
    }
}
// Runs on Android, iOS, Desktop, Web!

// Platform-specific with expect/actual
expect @Composable fun PlatformMap(lat: Double, lng: Double)

// androidMain
actual @Composable fun PlatformMap(lat: Double, lng: Double) {
    AndroidView({ MapView(it) })
}

// iosMain
actual @Composable fun PlatformMap(lat: Double, lng: Double) {
    UIKitView({ MKMapView() })
}

// 2025 status:
// ✅ iOS stable — production apps shipping
// ✅ Desktop stable (Windows/macOS/Linux)
// ✅ Web (Wasm) — stable preview
// Used by JetBrains IDEs, Touchlab, many OSS projects
- Jetpack Compose: Android-only, by Google
- Compose Multiplatform: same API, runs on iOS/Desktop/Web, by JetBrains
- iOS support stable since 2024 — real production apps shipping
- expect/actual: platform-specific composables for Maps, Camera, Sensors
- KMP vs CMP: KMP = shared logic + native UI; CMP = shared logic + shared UI
Key differentiator: KMP shares business logic but keeps native UI. Compose Multiplatform shares BOTH logic AND UI. Choose KMP for consumer apps needing native UX, CMP for internal tools and productivity apps where consistency matters more.
Compose has built-in accessibility through the semantics API. Good accessibility makes apps usable by all users and is increasingly required by enterprise clients.
// contentDescription — required for icons and images
Icon(Icons.Default.Favorite, contentDescription = "Like")
Image(painter, contentDescription = "Profile photo of Rahul")
Icon(Icons.Default.Star, contentDescription = null) // decorative

// semantics — rich accessibility info
Box(Modifier.semantics {
    contentDescription = "Like button, currently liked"
    role = Role.Button
    stateDescription = "Liked"
    onClick(label = "Unlike") { onUnlike(); true }
})

// mergeDescendants — treat group as one accessible element
Row(Modifier.semantics(mergeDescendants = true) {}) {
    Image(avatar, contentDescription = null)
    Column {
        Text("Rahul Kumar")
        Text("Android Developer")
    }
}
// TalkBack reads: "Rahul Kumar, Android Developer"

// Minimum touch target
IconButton(onClick = {}, modifier = Modifier.minimumInteractiveComponentSize()) {
    Icon(Icons.Default.Delete, "Delete")
}

// clearAndSetSemantics — custom complex component description
Box(Modifier.clearAndSetSemantics {
    contentDescription = "Rating: 4.5 out of 5 stars"
}) {
    StarRatingBar(rating = 4.5f)
}

// Test accessibility
composeTestRule.onNodeWithContentDescription("Like").performClick()
- contentDescription: required for all images and icon-only buttons — null for decorative
- semantics: add role, stateDescription, custom onClick labels
- mergeDescendants: combine multiple elements into one accessible unit
- minimumInteractiveComponentSize: ensures 48dp touch target
- clearAndSetSemantics: replace auto-generated semantics for complex custom components
Many Material3 components provide good accessibility defaults. Icon with contentDescription, Button with text — mostly handled. Where you need to add work: icon-only buttons, custom components, grouped information. Enable TalkBack and navigate your whole app before release.
Chat UIs need reversed layout, auto-scroll to new messages, and a scroll-to-bottom FAB when reading history. reverseLayout = true is the key API.
@Composable
fun ChatScreen(vm: ChatViewModel = hiltViewModel()) {
    val messages by vm.messages.collectAsStateWithLifecycle()
    val listState = rememberLazyListState()
    val scope = rememberCoroutineScope()

    // Auto-scroll on new message
    val msgCount by remember { derivedStateOf { messages.size } }
    LaunchedEffect(msgCount) {
        if (msgCount > 0) listState.animateScrollToItem(0) // 0 = bottom in reversed
    }

    Column(Modifier.fillMaxSize()) {
        LazyColumn(
            state = listState,
            reverseLayout = true, // newest at BOTTOM, index 0 = bottom
            modifier = Modifier.weight(1f),
            contentPadding = PaddingValues(16.dp)
        ) {
            items(messages, key = { it.id }) { msg ->
                MessageBubble(msg, isOwn = msg.senderId == vm.myId)
            }
        }

        // Scroll-to-bottom FAB
        val showFab by remember {
            derivedStateOf { listState.firstVisibleItemIndex > 2 }
        }
        AnimatedVisibility(showFab) {
            FloatingActionButton(onClick = {
                scope.launch { listState.animateScrollToItem(0) }
            }) {
                Icon(Icons.Default.KeyboardArrowDown, null)
            }
        }

        MessageInput(onSend = { vm.send(it) })
    }
}
- reverseLayout = true: items render from bottom — index 0 is newest message
- animateScrollToItem(0): scrolls to bottom (index 0 in reversed layout)
- derivedStateOf for msgCount: only triggers scroll on actual new messages
- showFab: visible when user scrolled up to read history
- key on messages: correct animations when new messages arrive
reverseLayout = true is the insight for chat. Without it you'd reverse the list and calculate scroll positions manually — much more error-prone. In reversed layout, index 0 IS the bottom — animateScrollToItem(0) always goes to the newest message.
Scaffold coordinates Material layout slots and provides innerPadding so content doesn't render under AppBars or system bars. Forgetting innerPadding is the most common Scaffold mistake.
@Composable
fun HomeScreen(vm: HomeViewModel = hiltViewModel()) {
    val snackbar = remember { SnackbarHostState() }
    Scaffold(
        topBar = {
            TopAppBar(
                title = { Text("Home") },
                actions = {
                    IconButton(onClick = {}) { Icon(Icons.Default.Settings, "Settings") }
                }
            )
        },
        floatingActionButton = {
            FloatingActionButton(onClick = {}) { Icon(Icons.Default.Add, "Add") }
        },
        snackbarHost = { SnackbarHost(snackbar) },
        bottomBar = { BottomNav() }
    ) { innerPadding ->
        // ALWAYS use innerPadding!

        // Option 1: padding modifier
        Column(Modifier.padding(innerPadding)) { Content() }

        // Option 2: contentPadding for LazyColumn
        LazyColumn(contentPadding = innerPadding) { items(items) { ItemRow(it) } }

        // ❌ Wrong — content goes under TopAppBar and BottomBar!
        LazyColumn { items(items) { ItemRow(it) } }
    }
}
- Scaffold coordinates: topBar, bottomBar, FAB, snackbarHost, drawer
- innerPadding: calculated by Scaffold — apply to content to avoid overlap with bars
- contentPadding = innerPadding: for LazyColumn — content can scroll behind the bars while the first and last items stay clear of them
- Modifier.padding(innerPadding): for Column/Box — hard stops at bar boundaries
- SnackbarHostState.showSnackbar(): suspending — handles queue automatically
Forgetting innerPadding is the single most common Scaffold mistake in code reviews. Content renders under the TopAppBar and BottomBar. Always apply innerPadding either as contentPadding (LazyColumn) or Modifier.padding (Column/Box).
Compose and XML Views are complementary in 2024-25 — ComposeView and AndroidView interop means you can mix them. For a new project, Compose is the clear recommendation: less code, better state management, easier testing, and it is where Google is investing all future UI work. For an existing XML codebase, migrate screen-by-screen.
// New screen in Compose — embed in existing Activity/Fragment via ComposeView
class HomeFragment : Fragment() {
    override fun onCreateView(...) = ComposeView(requireContext()).apply {
        setContent { MaterialTheme { HomeScreen() } }
    }
}

// Existing View inside Compose — embed via AndroidView
AndroidView(
    factory = { ctx -> MapView(ctx).apply { onCreate(null) } },
    update = { view -> view.getMapAsync { map -> map.moveCamera(...) } }
)

// Performance parity: Compose lazy lists match RecyclerView for most use cases
// Compose UI tests: faster to write than Espresso, same coverage
- For new projects: choose Compose — less boilerplate, better state handling, all new Jetpack APIs target Compose first
- For existing XML apps: migrate screen by screen using ComposeView — no need to rewrite everything at once
- AndroidView: embed legacy Views (MapView, custom Views) inside Compose — necessary during migration
- Performance: Compose lazy lists are comparable to RecyclerView; graphicsLayer animations run on RenderThread — equivalent to hardware layers
- Team skill: budget 2-4 weeks for a team new to Compose to reach productivity — the mental model shift (state drives UI) is the main investment
Be decisive — don't hedge. "Compose for new projects, period. The 3-week learning curve pays back in 3 months of faster feature development." Interviewers at Flipkart, Swiggy, and Google want technical conviction, not wishy-washy "it depends." Know when it doesn't apply and state it clearly.
50 questions covering coroutine internals, structured concurrency, Flow operators, channels, threading, and real-world Android scenarios for 2025-26 interviews.
A coroutine is a suspendable computation — it can pause and resume without blocking the underlying thread. Unlike threads, thousands of coroutines can run on just a few threads, making them lightweight and efficient.
// Thread — blocks OS thread while waiting
Thread {
    Thread.sleep(1000) // thread blocked, OS context switch
    updateUi()         // crash if not on main thread
}.start()

// Coroutine — suspends without blocking thread
viewModelScope.launch {
    delay(1000) // thread RELEASED — can do other work
    updateUi()  // safe — resumes on correct dispatcher
}

// Scale comparison:
// 10,000 threads → ~100MB RAM, OS scheduler thrash
// 10,000 coroutines → ~few MB, cooperative scheduling

// Coroutines are NOT threads:
// Coroutines run ON threads (via Dispatchers)
// Multiple coroutines share the same thread pool
// Suspension = coroutine pauses, thread picks up another coroutine

suspend fun example() {
    delay(1000) // suspends coroutine — doesn't block thread
    fetchData() // suspends here too — thread free meanwhile
}
- Coroutines: lightweight, cooperative concurrency — thread released during suspension
- Threads: heavyweight, preemptive — blocked during sleep/IO
- 10,000 coroutines can run on 4 threads; 10,000 threads crash the JVM
- Coroutines run ON threads via Dispatchers — they're not a replacement for threads
- suspend functions = coroutine can be paused here, thread is freed
Key phrase: "Coroutines don't replace threads — they run ON threads via Dispatchers. What they eliminate is BLOCKING threads. When a coroutine suspends, the thread picks up another coroutine instead of waiting idle."
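The "thread is freed during suspension" claim can be made measurable with plain kotlinx.coroutines, no Android needed. This is a sketch of my own (not from the source): `runBlocking` runs its children on a single thread, yet 10,000 coroutines that each delay 100ms all finish almost concurrently, because `delay` suspends instead of blocking.

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlin.system.measureTimeMillis

// 10,000 coroutines, ONE thread. With Thread.sleep this would take
// ~1,000 seconds sequentially; with delay() the thread is released at
// every suspension and picks up the next coroutine.
fun tenThousandCoroutinesMillis(): Long = measureTimeMillis {
    runBlocking { // single-threaded event loop
        repeat(10_000) {
            launch { delay(100) } // suspends — frees the thread
        }
        // runBlocking waits for all children (structured concurrency)
    }
}
```

On a typical machine this completes in well under a second — the same experiment with 10,000 blocked threads would exhaust memory long before finishing.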
Dispatchers determine which thread or thread pool a coroutine runs on. Choosing the correct Dispatcher is fundamental to avoiding ANRs and crashes.
// Dispatchers.Main — Android main/UI thread
viewModelScope.launch(Dispatchers.Main) {
    textView.text = "Updated!" // safe — on main thread
}

// Dispatchers.IO — optimised for blocking I/O
// Backed by up to 64 threads (or more on multicore)
withContext(Dispatchers.IO) {
    val data = api.fetchUser()  // network call
    val rows = db.query()       // database read
    File("log.txt").readText()  // file read
}

// Dispatchers.Default — CPU-intensive work
// Backed by CPU core count threads
withContext(Dispatchers.Default) {
    list.sortedBy { it.name } // sorting large list
    computeHeavyAlgorithm()   // encryption, image processing
    parseHugeJson()           // heavy parsing
}

// Dispatchers.Unconfined — inherits caller's thread
// Resumes on whatever thread the suspension point resumes on
// Rarely used — mainly for testing

// Typical Android pattern
suspend fun getUser(id: String): User = withContext(Dispatchers.IO) {
    api.getUser(id) // IO dispatcher for network
}

viewModelScope.launch {                  // Main by default
    val user = getUser("123")            // suspends on IO
    _state.value = UiState.Success(user) // back on Main
}
- Main: Android UI thread — update views, collect state
- IO: blocking operations — network, database, file. Up to 64 threads
- Default: CPU-intensive — sorting, parsing, computation. Core-count threads
- Unconfined: no thread confinement — use only in tests
- withContext(): switch dispatcher mid-coroutine — cheap, no new coroutine created
IO vs Default is a common interview question. IO: many threads because most time is spent waiting (network latency). Default: few threads (CPU cores) because work is CPU-bound — more threads just causes context switching overhead.
Structured concurrency means coroutines live within a defined scope — they're started, managed, and cancelled as a group. This prevents orphaned coroutines and resource leaks.
// Without structured concurrency — LEAKED coroutine
class BadViewModel {
    fun load() {
        GlobalScope.launch { api.fetchData() }
        // Coroutine NEVER cancelled — lives until app death
        // ViewModel cleared → coroutine still running!
    }
}

// With structured concurrency — scoped lifecycle
class GoodViewModel : ViewModel() {
    fun load() {
        viewModelScope.launch { api.fetchData() }
        // Cancelled automatically when ViewModel cleared
    }
}

// CoroutineScope = owner of coroutines + CoroutineContext
// Child coroutines inherit the parent's context

// Rules of structured concurrency:
// 1. Parent waits for ALL children to complete
// 2. Parent cancellation cancels ALL children
// 3. Child failure propagates to parent (by default)

val scope = CoroutineScope(Dispatchers.Main + SupervisorJob())
scope.launch {
    val a = async { fetchA() }
    val b = async { fetchB() }
    process(a.await(), b.await())
    // scope ensures A and B are cancelled if scope is cancelled
}
scope.cancel() // cancels all children

// Android scopes
// viewModelScope — tied to ViewModel lifecycle
// lifecycleScope — tied to Activity/Fragment lifecycle
// viewLifecycleOwner.lifecycleScope — tied to the Fragment's VIEW lifecycle
- Structured concurrency: coroutines have a defined scope — no orphans or leaks
- GlobalScope: unstructured — coroutines live until app death, avoid in production
- Parent waits for children; parent cancellation cancels all children
- viewModelScope: cancelled when ViewModel cleared — safest for ViewModel coroutines
- lifecycleScope vs viewLifecycleOwner.lifecycleScope: Fragments should use viewLifecycleOwner for UI work
GlobalScope is almost always wrong in Android. Always use viewModelScope, lifecycleScope, or a custom scope with a Job. The rule: "coroutines should be cancelled when their owner is done." GlobalScope has no owner.
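For classes that aren't ViewModels, the "every scope needs an owner" rule means creating a scope and a shutdown method yourself. A minimal sketch (the `SessionTracker` class and its names are illustrative, not a real API) — the owner calls `shutdown()` when it's done, mirroring what `viewModelScope` does in `onCleared()`:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.cancel
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive
import kotlinx.coroutines.launch

// Hypothetical non-ViewModel class that owns its own coroutine scope.
class SessionTracker {
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    @Volatile
    var ticks = 0 // exposed only to make the demo observable

    fun start() {
        scope.launch {
            while (isActive) { // cooperative — loop exits once scope is cancelled
                delay(50)
                ticks++
            }
        }
    }

    // The owner decides when the coroutines die — no orphans survive this call
    fun shutdown() = scope.cancel()
}
```

Every coroutine launched in `scope` dies with `shutdown()` — the property GlobalScope can never give you.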
launch fires and forgets — it returns a Job. async starts a concurrent operation and returns a Deferred — a future value you can await.
// launch — fire and forget, returns Job
val job = viewModelScope.launch {
    sendAnalyticsEvent() // don't need the result
    saveToDatabase()     // side effect only
}
job.cancel() // can cancel

// async — returns Deferred (future value)
val deferred: Deferred<User> = viewModelScope.async {
    api.fetchUser("123") // returns User
}
val user = deferred.await() // suspends until result ready

// KEY use case for async: PARALLEL execution
viewModelScope.launch {
    // Sequential — takes 2 seconds total
    val user = fetchUser()       // 1 second
    val profile = fetchProfile() // 1 second

    // Parallel — takes 1 second total
    val userDef = async { fetchUser() }       // starts immediately
    val profileDef = async { fetchProfile() } // starts immediately
    val result = combine(userDef.await(), profileDef.await())
}

// async error handling — exception thrown at await()
val result = try {
    async { riskyOp() }.await()
} catch (e: Exception) {
    null
}

// awaitAll — await multiple Deferreds at once
val results = awaitAll(userDef, profileDef, settingsDef)
- launch: fire and forget — returns Job, use for side effects
- async: returns Deferred — use when you need the result
- await(): suspends until Deferred completes — get the value
- Parallel async: start multiple async blocks, then await all — huge performance win
- awaitAll(): cleaner API for awaiting multiple Deferreds simultaneously
The classic interview question: "How do you make two API calls in parallel?" Answer: val a = async { fetchA() }; val b = async { fetchB() }; combine(a.await(), b.await()). This cuts total time from sum to max of both calls.
Coroutine cancellation is cooperative — the coroutine must check for cancellation. Suspend functions do this automatically; CPU-heavy loops must check manually.
// Cancellation is cooperative — not forced like Thread.interrupt()
val job = viewModelScope.launch {
    // suspend functions check cancellation automatically
    delay(1000) // throws CancellationException if cancelled
    withContext(Dispatchers.IO) { api.fetch() } // same
}
job.cancel() // sets cancellation flag

// CPU-heavy loop — must check manually
suspend fun heavyComputation() {
    for (i in 0..1_000_000) {
        coroutineContext.ensureActive() // ✅ throws if cancelled
        // or: if (!coroutineContext.isActive) return
        compute(i)
    }
}

// Blocking code has no suspension points, so it never notices cancellation
// yield() lets other coroutines run + checks cancellation
suspend fun yieldExample() {
    repeat(1000) { i ->
        yield() // suspend point — cancel check + cooperative
        process(i)
    }
}

// Cleanup with finally — always runs even on cancellation
val job2 = viewModelScope.launch {
    try {
        while (true) { doWork() }
    } finally {
        cleanup() // runs on cancellation too
        withContext(NonCancellable) {
            db.saveState() // NonCancellable needed to suspend in finally
        }
    }
}
- Cancellation is cooperative — coroutine checks at suspension points
- All built-in suspend functions (delay, withContext, await) check automatically
- CPU loops: call ensureActive() or yield() to check cancellation manually
- isActive: check without throwing — for graceful loops
- NonCancellable: use in finally blocks when you need to suspend for cleanup
Forgetting ensureActive() in CPU loops is a real bug. If a coroutine runs a tight computation loop without suspension points, job.cancel() sets the flag but the coroutine NEVER checks it — it runs forever despite being "cancelled."
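The bug above is easy to demonstrate in plain Kotlin. This is a minimal runnable sketch (mine, not from the source): the tight loop stops only because `yield()` is a suspension point that checks the cancellation flag — delete the `yield()` and `cancelAndJoin()` never returns.

```kotlin
import kotlinx.coroutines.cancelAndJoin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.yield

// Cooperative cancellation in action: the loop exits at the next yield()
// after cancel() is called, never mid-iteration.
fun cancelledIterations(): Int = runBlocking {
    var iterations = 0
    val job = launch {
        while (true) {
            yield()      // suspension point — without it, cancel is never observed
            iterations++
        }
    }
    delay(50)            // let the loop spin for a while
    job.cancelAndJoin()  // sets the flag; the next yield() throws CancellationException
    iterations
}
```

Because everything runs on `runBlocking`'s single thread, the counter needs no synchronization here.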
Job propagates failure to siblings — one child failing cancels all. SupervisorJob isolates failures — each child is independent. Android's viewModelScope uses SupervisorJob.
// Job — failure cascades to siblings
val scope = CoroutineScope(Job())
scope.launch {
    launch { throw IOException("Network failed") } // fails
    launch { doImportantWork() }                   // CANCELLED by sibling failure!
}
// Both children cancelled when first throws

// SupervisorJob — failure is isolated
// ⚠️ SupervisorJob only protects its DIRECT children, so launch
// the independent coroutines directly on the scope:
val supervisor = CoroutineScope(SupervisorJob())
supervisor.launch { throw IOException("Network failed") } // fails
supervisor.launch { doImportantWork() }                   // continues! ✅

// viewModelScope uses SupervisorJob internally
// So one ViewModel coroutine failing doesn't cancel others

// supervisorScope {} — function that creates supervisor scope
suspend fun loadDashboard() = supervisorScope {
    val news = async { fetchNews() }       // might fail
    val weather = async { fetchWeather() } // independent
    val stocks = async { fetchStocks() }   // independent

    // If news fails, weather and stocks still complete
    Dashboard(
        news = runCatching { news.await() }.getOrNull(),
        weather = runCatching { weather.await() }.getOrNull(),
        stocks = runCatching { stocks.await() }.getOrNull()
    )
}
- Job: child failure cancels parent and all siblings — use for atomic operations
- SupervisorJob: child failure is isolated — siblings continue independently
- viewModelScope uses SupervisorJob — one request failing doesn't break all
- supervisorScope {}: create supervisor scope inside suspend function
- Dashboard pattern: load multiple independent widgets with supervisorScope
The dashboard loading pattern is a perfect SupervisorJob example. News, weather, stocks are independent — if stocks API fails, you still want to show news and weather. supervisorScope with runCatching per widget is the production-quality answer.
Coroutine exception handling is nuanced — try-catch works inside coroutines, but CoroutineExceptionHandler is a last-resort handler for unhandled exceptions in launch.
// try-catch inside coroutine — handles exception locally
viewModelScope.launch {
    try {
        val data = api.fetchData()
        _state.value = UiState.Success(data)
    } catch (e: IOException) {
        _state.value = UiState.Error(e.message ?: "Network error")
    }
}

// runCatching — cleaner functional try-catch
viewModelScope.launch {
    runCatching { api.fetchData() }
        .onSuccess { _state.value = UiState.Success(it) }
        .onFailure { _state.value = UiState.Error(it.message ?: "Unknown error") }
}

// CoroutineExceptionHandler — last resort for launch{}
// Does NOT work with async{} — exception is stored in Deferred
val handler = CoroutineExceptionHandler { _, throwable ->
    logCrash(throwable) // log, don't crash
}
viewModelScope.launch(handler) {
    api.fetchData() // if this throws, handler catches it
}

// ⚠️ CancellationException is SPECIAL
// Never swallow CancellationException — it breaks cancellation
try {
    delay(1000)
} catch (e: Exception) {
    if (e is CancellationException) throw e // ✅ re-throw!
    handleError(e)
}
- try-catch: works inside coroutines — catches exceptions from suspend functions
- runCatching: functional wrapper — cleaner than try-catch for single operations
- CoroutineExceptionHandler: last-resort handler for uncaught exceptions in launch
- async exceptions: stored in Deferred, thrown at await() — catch there
- CancellationException: ALWAYS re-throw — swallowing it breaks coroutine cancellation
CancellationException is the sneakiest coroutine trap. catch(e: Exception) { handleError(e) } will catch CancellationException and break the cancellation chain. Always re-throw CancellationException or use catch(e: IOException) to be specific.
Flow is Kotlin's reactive stream — a cold, sequential sequence of values emitted asynchronously. It's Kotlin-native, coroutine-integrated, and lighter than RxJava.
// Flow — cold, sequential, coroutine-native
fun getNumbers(): Flow<Int> = flow {
    for (i in 1..5) {
        delay(100)
        emit(i) // emit values over time
    }
}

// Collection — terminal operator starts the flow
viewModelScope.launch {
    getNumbers().collect { value -> println(value) }
}
// COLD: new execution for each collect()

// Comparison:
// LiveData: Android-only, no operators, lifecycle-aware
// RxJava: JVM, complex API, heavy dependency
// Flow: Kotlin-native, rich operators, coroutine-integrated

// Flow advantages over LiveData:
// ✅ Type-safe null handling
// ✅ Rich transformation operators (map, filter, zip, flatMap...)
// ✅ Testing with Turbine library
// ✅ Works on any platform (KMP)
// ✅ Backpressure handling built-in

// LiveData advantage:
// ✅ Lifecycle-aware out of the box (no collectAsStateWithLifecycle needed)

// Simple flow builders
flowOf(1, 2, 3)          // fixed values
listOf(1, 2, 3).asFlow() // from collection
channelFlow { send(1) }  // from channel
- Cold: flow code runs only when collected — each collector gets its own execution
- Coroutine-native: uses structured concurrency, dispatchers, cancellation
- Rich operators: map, filter, transform, zip, flatMapLatest — similar to RxJava
- No lifecycle awareness built-in — use collectAsStateWithLifecycle in Compose
- Backpressure: built-in through suspension — emitter suspends if collector is slow
Cold vs Hot is the key distinction. Flow is cold — each collect() triggers fresh execution. StateFlow/SharedFlow are hot — they emit regardless of collectors. This answers "why can't I collect a Flow twice and get the same values?"
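The "each collect() triggers fresh execution" claim is directly testable. A tiny runnable sketch (my own, not from the source) that counts how many times the builder block runs:

```kotlin
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.runBlocking

// Cold flows re-run their builder block once per collector.
fun coldExecutions(): Int {
    var executions = 0
    val numbers = flow {
        executions++ // in real code this would be a duplicate network call
        emit(1)
        emit(2)
    }
    runBlocking {
        numbers.toList() // 1st collection — block runs
        numbers.toList() // 2nd collection — block runs AGAIN
    }
    return executions
}
```

`executions` ends up at 2 — exactly why collecting the same cold flow from two screens doubles your network traffic unless you convert it to a hot flow with `stateIn`/`shareIn`.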
Both are hot flows — they emit regardless of collectors. StateFlow is state (current value, replay=1). SharedFlow is events (configurable replay, no value requirement).
// StateFlow — always has a value, replays last value to new collectors
class CounterViewModel : ViewModel() {
    private val _count = MutableStateFlow(0)
    val count: StateFlow<Int> = _count.asStateFlow()

    fun increment() { _count.value++ }
}
// New subscriber gets CURRENT value immediately
// Deduplicates: emitting same value twice = only one emission

// SharedFlow — configurable replay, for events
private val _events = MutableSharedFlow<UiEvent>(
    replay = 0,              // no replay — event not re-sent to late collectors
    extraBufferCapacity = 64 // buffer events if no collector
)
val events = _events.asSharedFlow()

// Emit from background thread safely
viewModelScope.launch { _events.emit(UiEvent.Navigate("/home")) }

// Key differences:
// StateFlow: replay=1, requires initial value, deduplicates
// SharedFlow: replay=0 by default, no initial value, no dedup

// When to use:
// StateFlow → UI state (user profile, loading, error)
// SharedFlow → one-time events (navigation, snackbar, toast)
// Channel → one-time events, single consumer guaranteed

// stateIn() — convert Flow to StateFlow
val uiState = repo.getUser()
    .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), UiState.Loading)
- StateFlow: always has value, deduplicates, new subscribers get current value
- SharedFlow: no required value, configurable replay, no deduplication
- StateFlow for state: loading, data, error — screen observes current state
- SharedFlow(replay=0) for events: navigation, toasts — don't re-deliver to late subscribers
- stateIn(): convert cold Flow to hot StateFlow — caches in scope
StateFlow deduplicates — emitting the same value twice only triggers one downstream update. This is correct for state (current count=5 IS the state) but wrong for events (navigating to the same screen twice should fire twice). That's why events need SharedFlow or Channel.
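Deduplication is observable in a few lines of plain kotlinx.coroutines. A runnable sketch (mine, not from the source): the collector sees the initial value and the first change, but the repeated assignment of an equal value produces no emission.

```kotlin
import kotlinx.coroutines.CoroutineStart
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.yield

// StateFlow conflates equal values: assigning the current value again
// is a no-op, so collectors are never woken for it.
fun dedupDemo(): List<Int> = runBlocking {
    val state = MutableStateFlow(0)
    val seen = mutableListOf<Int>()
    val collector = launch(start = CoroutineStart.UNDISPATCHED) {
        state.collect { seen += it } // receives current value (0) immediately
    }
    state.value = 5 // distinct — collector notified
    yield()         // let the collector process it
    state.value = 5 // equal to current value — silently dropped
    yield()
    collector.cancel()
    seen
}
```

The collector ends up with `[0, 5]`, not `[0, 5, 5]` — fine for state, wrong for "navigate to the same screen twice" events.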
Flow operators are the core of reactive programming. Each transforms the stream differently — mastering them is essential for clean, efficient reactive Android code.
// map — transform each value
flowOf(1, 2, 3).map { it * 2 } // 2, 4, 6
userFlow.map { it.name }       // User → String

// filter — keep only matching values
flowOf(1, 2, 3, 4).filter { it % 2 == 0 } // 2, 4

// transform — emit multiple values per input
flowOf(1, 2).transform { value ->
    emit("before $value")
    emit("after $value")
}
// "before 1", "after 1", "before 2", "after 2"

// zip — combine two flows pair by pair
val names = flowOf("Alice", "Bob")
val scores = flowOf(100, 200)
names.zip(scores) { name, score -> "$name: $score" }
// "Alice: 100", "Bob: 200" — waits for both, 1:1

// combine — emit whenever EITHER source emits
val query = MutableStateFlow("")
val filters = MutableStateFlow(emptyList<String>())
query.combine(filters) { q, f -> search(q, f) }
// re-runs search whenever query OR filters change

// flatMapLatest — switch to new flow, cancel previous
queryFlow
    .debounce(300)
    .flatMapLatest { query ->
        searchRepo(query) // cancels previous search on new query
    }

// flatMapConcat — sequential (wait for previous to complete)
// flatMapMerge — parallel (all run simultaneously)
// flatMapLatest — cancel previous (for search/reactive queries)
- map/filter: basic transformation — same as collection operators but lazy
- transform: emit multiple values per input — more powerful than map
- zip: pairs values 1:1, waits for both flows
- combine: emits on any change — use for dependent search/filter state
- flatMapLatest: cancels previous flow on new emission — search debounce pattern
flatMapLatest is the search operator. combine is the multi-source state operator. The question "user can filter AND sort AND search simultaneously" is answered with combine(queryFlow, filterFlow, sortFlow) { q, f, s -> search(q, f, s) }.
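The three-source pattern runs as-is outside Android. In this sketch (my own; `search` is a stand-in for a real repository call), changing any one of the three `MutableStateFlow` sources re-runs the combine block, and the latest combined result reflects the change:

```kotlin
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.combine
import kotlinx.coroutines.flow.first
import kotlinx.coroutines.runBlocking

// Placeholder for a repository search call
fun search(query: String, filter: String, sort: String) = "$query|$filter|$sort"

fun latestSearch(): String = runBlocking {
    val queryFlow = MutableStateFlow("shoes")
    val filterFlow = MutableStateFlow("all")
    val sortFlow = MutableStateFlow("price")

    // Re-evaluates whenever ANY of the three sources changes
    val results = combine(queryFlow, filterFlow, sortFlow) { q, f, s ->
        search(q, f, s)
    }

    queryFlow.value = "sneakers" // user types a new query
    results.first()              // latest combined result
}
```

Because all three sources are StateFlows, `combine` always has a value from each and can emit the current combination immediately.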
withContext switches the coroutine to a different dispatcher and returns a result — it's a scope switch, not a new coroutine. This makes it efficient for switching context mid-operation.
// withContext — switch dispatcher, return result
// Does NOT start a new coroutine — same coroutine, different thread
suspend fun fetchUser(id: String): User {
    return withContext(Dispatchers.IO) { // switches to IO
        api.getUser(id) // runs on IO thread
    } // resumes on original dispatcher
}

// Full flow — Main → IO → Main
viewModelScope.launch {                      // starts on Main
    _state.value = UiState.Loading
    val user = withContext(Dispatchers.IO) { // switches to IO
        api.getUser("123")
    }                                        // back to Main
    _state.value = UiState.Success(user)     // on Main
}

// withContext vs launch:
// withContext: same coroutine, different context, returns result, sequential
// launch: new coroutine, fire-and-forget, parallel

// withContext vs async:
// withContext: sequential — suspends until done
// async: parallel — use for concurrent operations

// Nesting withContext — fine, no extra cost
suspend fun process(): String {
    val raw = withContext(Dispatchers.IO) { fetchRaw() }
    return withContext(Dispatchers.Default) { parseAndTransform(raw) }
}
- withContext: same coroutine, different dispatcher — lightweight context switch
- Returns a result — use when you need the value back on original dispatcher
- Sequential by design: suspends until the block completes
- No new coroutine created — more efficient than launch + join or async + await
- Standard pattern: repository functions use withContext(IO) internally
Repository pattern rule: wrap ALL blocking calls in withContext(Dispatchers.IO) inside the repository. The ViewModel never needs to know about dispatchers — it just calls suspend functions. This is the clean architecture approach to threading.
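A minimal sketch of that rule — the type names (UserApi, UserDao, User) are illustrative, not from any specific codebase:

```kotlin
// Hypothetical repository — keeps all dispatcher knowledge in one layer
class UserRepository(
    private val api: UserApi,   // assumed Retrofit-style interface
    private val dao: UserDao    // assumed Room DAO
) {
    // Main-safe suspend function: callers never think about threads
    suspend fun getUser(id: String): User = withContext(Dispatchers.IO) {
        dao.getUser(id) ?: api.fetchUser(id).also { dao.insert(it) }
    }
}

// ViewModel stays dispatcher-free — it just calls the suspend function
class UserViewModel(private val repo: UserRepository) : ViewModel() {
    private val _state = MutableStateFlow<User?>(null)

    fun load(id: String) {
        viewModelScope.launch {              // Main dispatcher
            _state.value = repo.getUser(id)  // repository handles IO internally
        }
    }
}
```

The payoff: the ViewModel is trivially testable, and swapping the data source (cache vs network) never touches UI code.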
Cold flows execute fresh for each collector. Hot flows share a single execution with all collectors. stateIn and shareIn convert cold flows to hot StateFlow/SharedFlow.
// COLD — new execution per collector
val coldFlow = flow {
    println("Starting network call...")
    emit(api.fetchData())
}
coldFlow.collect { }  // "Starting network call..."
coldFlow.collect { }  // "Starting network call..." AGAIN!

// HOT — single execution, shared with all collectors
// StateFlow, SharedFlow (note: channelFlow is a COLD builder — each collector starts a new channel)

// stateIn() — convert cold Flow to hot StateFlow
class UserViewModel : ViewModel() {
    val user: StateFlow<User?> = repo.observeUser()  // cold Flow from Room
        .stateIn(
            scope = viewModelScope,
            started = SharingStarted.WhileSubscribed(5000),  // upstream active while subscribed
            initialValue = null
        )
    // Room query runs ONCE, shared with all UI collectors
}

// SharingStarted options:
// Eagerly — start immediately, never stop
// Lazily — start on first subscriber, never stop
// WhileSubscribed(5000) — start on first subscriber, stop 5s after last unsubscribes
//                         the 5s grace period survives config change

// shareIn() — convert to SharedFlow
val locationFlow = gps.locationUpdates()
    .shareIn(viewModelScope, SharingStarted.WhileSubscribed(), replay = 1)
- Cold: each collector triggers fresh execution — risk of duplicate network calls
- Hot: shared execution — all collectors see same emissions
- stateIn: converts Room/API flow to StateFlow — single upstream, multiple UI observers
- WhileSubscribed(5000): stops upstream 5s after no subscribers — survives rotation
- shareIn: convert to SharedFlow — multiple collectors share one upstream
WhileSubscribed(5000) is the production answer. 5 seconds survives rotation (Activity recreates in ~200ms). Without it (Lazily), the upstream never stops — wasting resources. Eagerly starts immediately — even before any UI is shown.
Channels are hot communication primitives for sending values between coroutines — like a queue. Unlike Flow, Channel values are consumed — each value is delivered to exactly one receiver.
// Channel — producer/consumer queue
val channel = Channel<Int>()

// Producer coroutine
viewModelScope.launch {
    for (i in 1..5) { channel.send(i) }  // suspends if full
    channel.close()
}

// Consumer coroutine
viewModelScope.launch {
    for (value in channel) { println(value) }  // suspends if empty
}

// Channel types (capacity):
// RENDEZVOUS (default, 0) — send suspends until receive
// UNLIMITED — send never suspends (unbounded buffer)
// BUFFERED (default 64) — send suspends when buffer full
// CONFLATED — only latest value kept, never suspends
val rendezvous = Channel<Int>(Channel.RENDEZVOUS)
val buffered = Channel<Int>(Channel.BUFFERED)
val conflated = Channel<Int>(Channel.CONFLATED)

// Channel as one-time event bus
private val _events = Channel<UiEvent>(Channel.BUFFERED)
val events = _events.receiveAsFlow()  // expose as Flow

// Channel vs Flow:
// Channel: HOT, each value consumed ONCE, one consumer
// Flow: COLD, each collector gets all values independently
- Channel: hot, each value consumed once by one receiver — point-to-point
- RENDEZVOUS: no buffer — sender and receiver synchronise
- BUFFERED: queue — sender suspends only when buffer full
- CONFLATED: only latest value — fast producers overwrite slow consumers
- receiveAsFlow(): expose Channel as Flow for reactive collection
Use Channel for one-time events (navigation, snackbar) that should go to exactly ONE consumer and not be replayed. Use SharedFlow(replay=0) when you might have multiple collectors. Channel.BUFFERED + receiveAsFlow() is the standard one-time event pattern.
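Putting both sides of that pattern together — a sketch where the event types (UiEvent, ShowSnackbar) and the navigation call are hypothetical:

```kotlin
// ViewModel side — events buffered until exactly one collector receives them
sealed interface UiEvent {
    data class ShowSnackbar(val message: String) : UiEvent
    data object NavigateToConfirmation : UiEvent
}

class CheckoutViewModel : ViewModel() {
    private val _events = Channel<UiEvent>(Channel.BUFFERED)
    val events = _events.receiveAsFlow()  // consumed once, never replayed

    fun onPayClicked() {
        viewModelScope.launch { _events.send(UiEvent.NavigateToConfirmation) }
    }
}

// Fragment side — collect while STARTED; rotation won't re-deliver old events
viewLifecycleOwner.lifecycleScope.launch {
    repeatOnLifecycle(Lifecycle.State.STARTED) {
        viewModel.events.collect { event ->
            when (event) {
                is UiEvent.ShowSnackbar -> showSnackbar(event.message)
                UiEvent.NavigateToConfirmation -> navigateToConfirmation()
            }
        }
    }
}
```

Because the Channel buffers while no one is subscribed, an event sent during rotation is delivered once the new view starts collecting — exactly the behavior a StateFlow (which would replay) gets wrong for navigation.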
A CoroutineContext is a set of elements that define a coroutine's behavior — dispatcher, job, name, and exception handler. Elements combine with the + operator.
// CoroutineContext elements
val context = Dispatchers.IO +                       // which thread
    SupervisorJob() +                                // job hierarchy
    CoroutineName("DataLoader") +                    // debug name
    CoroutineExceptionHandler { _, e -> logError(e) }
val scope = CoroutineScope(context)

// + operator — later element overrides earlier
val combined = Dispatchers.IO + Dispatchers.Main
// Only one Dispatcher allowed — Main wins (last one)

// Child coroutines inherit parent context
viewModelScope.launch(Dispatchers.IO) {  // overrides default Main
    launch {  // inherits IO + parent Job
        println(coroutineContext[CoroutineDispatcher])  // IO
    }
}

// coroutineContext — access current context inside coroutine
suspend fun example() {
    println(coroutineContext[Job])
    println(coroutineContext[CoroutineName])
    println(coroutineContext.isActive)
}

// Android built-in scopes and their contexts
// viewModelScope: SupervisorJob + Dispatchers.Main.immediate
// lifecycleScope: SupervisorJob + Dispatchers.Main.immediate
- CoroutineContext: immutable set of elements — dispatcher, job, name, handler
- + operator: combines elements, later element wins for same type
- Children inherit parent's context — dispatcher, job hierarchy
- Job links parent and child — structured concurrency mechanism
- coroutineContext: access current context from inside any suspend function
viewModelScope.launch(Dispatchers.IO) doesn't replace the entire context — it just overrides the Dispatcher element. The SupervisorJob and other elements are inherited. Understanding this explains why cancellation still works correctly even when switching dispatchers.
Flow lifecycle operators let you hook into collection events — emission start, each value, completion, and errors. They're used for logging, loading indicators, and cleanup.
// onStart — runs before first emission
fun getUsers(): Flow<List<User>> = repo.getAllUsers()
    .onStart {
        emit(emptyList())  // emit loading placeholder
        println("Started collecting")
    }

// onEach — side effect per emission, passes value through
repo.getUsers()
    .onEach { users -> println("Got ${users.size} users") }
    .collect { updateUi(it) }

// onCompletion — runs on completion (normal, error, or cancel)
repo.getUsers()
    .onCompletion { cause ->
        if (cause != null) logError(cause)
        else println("Completed successfully")
        hideLoadingIndicator()  // always runs
    }
    .collect { updateUi(it) }

// catch — handle errors mid-stream
repo.getUsers()
    .catch { e ->
        emit(emptyList())  // emit fallback on error
        logError(e)
    }
    .collect { updateUi(it) }

// Loading indicator pattern
repo.getUsers()
    .onStart { _loading.value = true }
    .onCompletion { _loading.value = false }
    .catch { _error.value = it.message }
    .collect { _users.value = it }
- onStart: before first emission — show loading, log, emit placeholder
- onEach: after each emission, value passes through — logging, analytics
- onCompletion: always runs — loading indicators, cleanup, error logging
- catch: handle errors — emit fallback, recover gracefully
- catch only catches UPSTREAM errors — doesn't catch errors in collect{}
The loading indicator pattern using onStart/onCompletion is cleaner than managing loading state manually. The completion callback always runs — on success, on error, and on cancellation — making it perfect for hiding spinners.
Backpressure occurs when a producer emits faster than the consumer processes. Kotlin Flow handles it through suspension — the emitter naturally slows to match the consumer.
// Backpressure — producer faster than consumer
val fastFlow = flow {
    for (i in 1..1000) {
        emit(i)  // emits as fast as possible
    }
}

// Default — suspension handles backpressure automatically
fastFlow.collect { value ->
    delay(100)  // slow consumer
    process(value)
}
// emit() SUSPENDS when collector is slow — natural backpressure

// buffer() — process producer and consumer concurrently
fastFlow
    .buffer(64)  // buffer up to 64 values
    .collect { process(it) }
// Producer fills buffer without waiting for consumer

// conflate() — drop intermediate values, keep only latest
fastFlow
    .conflate()  // only latest value when collector is ready
    .collect { renderFrame(it) }
// Perfect for UI frame rendering — old frames skipped

// collectLatest — cancel slow processing for new values
fastFlow.collectLatest { value ->
    delay(100)      // if new value arrives, this is cancelled
    process(value)  // only latest value fully processed
}

// Use case guide:
// buffer() — producer needs to run ahead (disk → network)
// conflate() — only latest matters (sensor data, price ticks)
// collectLatest — latest request cancels in-progress work
- Default: suspension — emitter waits when collector is busy, natural backpressure
- buffer(): decouple producer and consumer with a queue
- conflate(): drop intermediate values — only latest matters
- collectLatest: cancel current processing when new value arrives
- Unlike RxJava, Flow doesn't overflow — suspension prevents data loss by default
Flow's default backpressure is elegant: emit() is a suspend function, so it naturally waits. This contrasts with RxJava which needed explicit backpressure strategies. For UI rendering, conflate() is ideal — you only want to render the most recent frame.
Coroutine testing requires controlling time and dispatchers. The kotlinx-coroutines-test library provides TestDispatcher and runTest for fast, deterministic tests.
// testImplementation("org.jetbrains.kotlinx:kotlinx-coroutines-test")
// testImplementation("app.cash.turbine:turbine")

// runTest — replaces runBlocking in tests
// Controls virtual time — delay(1000) completes instantly
class UserViewModelTest {
    @Test
    fun loadsUser() = runTest {
        // Note: Dispatchers.setMain(StandardTestDispatcher(testScheduler))
        // is required before tests that use viewModelScope
        val vm = UserViewModel(FakeRepository())
        vm.loadUser("123")
        assertEquals(UiState.Loading, vm.state.value)
        advanceUntilIdle()  // run all pending coroutines
        assertTrue(vm.state.value is UiState.Success)
    }
}

// Inject dispatcher for testability
class UserViewModel(
    private val repo: UserRepository,
    private val dispatcher: CoroutineDispatcher = Dispatchers.IO  // injectable!
) : ViewModel() { }

// In tests — use TestDispatcher
val testDispatcher = StandardTestDispatcher()
val vm = UserViewModel(repo, testDispatcher)

// Turbine — Flow testing library
@Test
fun flowEmitsCorrectly() = runTest {
    repo.getUser().test {
        assertEquals(UiState.Loading, awaitItem())
        assertEquals(UiState.Success(user), awaitItem())
        awaitComplete()
    }
}

// advanceTimeBy — test time-based flows
@Test
fun debounceWorks() = runTest {
    query("a")
    advanceTimeBy(100)  // under debounce threshold
    query("ab")
    advanceTimeBy(400)  // past debounce — triggers search
    assertEquals("ab", lastQuery)
}
- runTest: virtual time — delay(1000) completes instantly in tests
- advanceUntilIdle(): run all pending coroutines — like fast-forwarding time
- StandardTestDispatcher: manual time control — more predictable
- Inject dispatchers: constructor injection makes classes testable
- Turbine: Flow testing — awaitItem(), awaitComplete(), awaitError()
Always inject Dispatchers — never hardcode them in production code. Pass CoroutineDispatcher as a constructor parameter with a default value. In tests, pass TestDispatcher. This is the single most important coroutine testing practice.
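A fuller sketch of that practice end-to-end — FakeRepository and the UiState shape are assumed for illustration:

```kotlin
// Production class: dispatcher arrives via constructor with a sane default
class UserViewModel(
    private val repo: UserRepository,
    private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO
) : ViewModel() {
    private val _state = MutableStateFlow<UiState>(UiState.Idle)
    val state: StateFlow<UiState> = _state

    fun loadUser(id: String) {
        viewModelScope.launch {
            _state.value = UiState.Loading
            val user = withContext(ioDispatcher) { repo.getUser(id) }
            _state.value = UiState.Success(user)
        }
    }
}

// Test: the same dispatcher drives both Main and IO on the virtual clock
@Test
fun loadsUser() = runTest {
    val dispatcher = StandardTestDispatcher(testScheduler)
    Dispatchers.setMain(dispatcher)  // viewModelScope dispatches to Main
    try {
        val vm = UserViewModel(FakeRepository(), dispatcher)
        vm.loadUser("123")
        advanceUntilIdle()  // drain all queued coroutines
        assertTrue(vm.state.value is UiState.Success)
    } finally {
        Dispatchers.resetMain()
    }
}
```

Sharing one TestDispatcher between Main and the injected parameter keeps ordering deterministic: every coroutine queues on the same scheduler.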
The suspend modifier triggers a compile-time transformation — the compiler adds a Continuation parameter and generates a state machine. No JVM magic involved.
// What you write:
suspend fun fetchUser(id: String): User {
    val raw = withContext(Dispatchers.IO) { api.get(id) }
    return parseUser(raw)
}

// What compiler generates (conceptually — CPS transform):
fun fetchUser(id: String, cont: Continuation<User>): Any {
    // State machine with labels
    when (cont.label) {
        0 -> {
            cont.label = 1
            val result = withContext(Dispatchers.IO, { api.get(id) }, cont)
            if (result == COROUTINE_SUSPENDED) return COROUTINE_SUSPENDED
        }
        1 -> {
            val raw = cont.result as RawData
            return parseUser(raw)
        }
    }
}

// Continuation = "rest of the computation"
// Each suspension point = one state in the state machine
// COROUTINE_SUSPENDED = coroutine paused, thread released

// Why suspend is "contagious":
// fetchUser() now takes a Continuation parameter
// Its callers must also take a Continuation
// → Only callable from suspend functions or coroutine builders

// suspend fun can call regular fun — no restriction
// regular fun CANNOT call suspend fun — no continuation to pass
suspend fun ok() { regularFun() }  // ✅
fun broken() { suspendFun() }      // ❌ compile error
- suspend = compile-time transformation — Continuation parameter added by compiler
- State machine: each suspension point becomes a label in a when-expression
- COROUTINE_SUSPENDED: sentinel value — signals thread to release and reschedule
- "Contagious": callers need a Continuation too — only callable from coroutine context
- No JVM magic: plain JVM function at bytecode level, compiler does all the work
When asked "how do coroutines work?", explain CPS: "The compiler transforms suspend functions into state machines with Continuation callbacks. No JVM magic — it's purely compile-time transformation. The Continuation holds the 'rest of the computation' after each suspension point."
Three different ways to collect a Flow — each with different behavior for slow collectors and coroutine scope management.
// collect — sequential, waits for each emission to process
viewModelScope.launch {
    flow.collect { value ->
        delay(500)       // process takes 500ms
        renderUi(value)  // new emissions queue behind this
    }
}

// collectLatest — cancels processing when new value arrives
viewModelScope.launch {
    searchQueryFlow.collectLatest { query ->
        delay(300)  // if new query arrives, cancelled!
        val results = search(query)  // only latest query runs to completion
        showResults(results)
    }
}

// launchIn — collect in background, returns Job
// Uses onEach for side effects
repo.observeUser()
    .onEach { user -> updateUi(user) }
    .launchIn(viewModelScope)  // non-blocking, returns Job
// Equivalent to: viewModelScope.launch { flow.collect { ... } }

// Multiple flows with launchIn
userFlow.onEach { handleUser(it) }.launchIn(viewModelScope)
settingsFlow.onEach { applySettings(it) }.launchIn(viewModelScope)
errorFlow.onEach { showError(it) }.launchIn(viewModelScope)
// All three collected concurrently!

// When to use which:
// collect: sequential processing, order matters
// collectLatest: only latest matters, cancel old (search, UI updates)
// launchIn: fire-and-forget collection, multiple concurrent flows
- collect: sequential — new emissions wait for current processing to finish
- collectLatest: cancels current when new arrives — use for search and UI updates
- launchIn: non-blocking collection with onEach — clean for multiple concurrent flows
- launchIn returns a Job — can cancel collection by cancelling the job
- All three require being called from a coroutine (or using launchIn with a scope)
collectLatest is the search-as-you-type operator. When the user types "an" and then immediately "and", the "an" search is cancelled and only "and" runs to completion. This is exactly what debounce + collectLatest or flatMapLatest achieve in practice.
When multiple coroutines access shared mutable state, race conditions occur. Kotlin provides Mutex — a coroutine-friendly lock — and atomic operations for thread-safe state.
// Problem — race condition
var counter = 0
repeat(1000) {
    viewModelScope.launch(Dispatchers.Default) {
        counter++  // NOT thread-safe! Race condition!
    }
}
// Final value: anywhere from 1 to 1000

// Solution 1: Mutex — coroutine-friendly lock
val mutex = Mutex()
var counter = 0
repeat(1000) {
    viewModelScope.launch(Dispatchers.Default) {
        mutex.withLock { counter++ }  // suspends, doesn't block thread
    }
}
// Final value: exactly 1000 ✅

// Solution 2: Atomic types
val atomicCounter = AtomicInteger(0)
atomicCounter.incrementAndGet()  // always thread-safe

// Solution 3: Confine to single thread (Actor pattern)
// StateFlow always written from the same coroutine
private val _state = MutableStateFlow(AppState())
viewModelScope.launch {  // single coroutine owns state
    actionChannel.consumeEach { action ->
        _state.value = reduce(_state.value, action)
    }
}

// Mutex vs synchronized:
// synchronized: BLOCKS the thread
// Mutex.withLock: SUSPENDS the coroutine — thread is freed
// Always prefer Mutex in coroutine code
- Race condition: multiple coroutines reading and writing shared state concurrently
- Mutex: coroutine-friendly lock — suspends (not blocks) waiting coroutines
- AtomicInteger/AtomicReference: lock-free atomic operations — fastest for simple updates
- Single-thread confinement: channel + single coroutine processes all state updates
- Never use synchronized{} in coroutines — it blocks the thread
The key insight: synchronized{} blocks the thread — terrible for coroutines on shared thread pools. Mutex.withLock{} suspends — the thread is freed to run other coroutines. On Dispatchers.IO with 64 threads, synchronized can cause serious thread starvation.
Android has strict threading rules — UI can only be updated on the main thread, network/disk must not block the main thread. Coroutines make these rules easy to follow.
// Android threading rules:
// 1. UI updates ONLY on main thread → CalledFromWrongThreadException
// 2. Network NEVER on main thread → NetworkOnMainThreadException
// 3. Heavy computation NOT on main thread → ANR (>5s)

// Old way — complex and error-prone
Thread {
    val data = api.fetchData()          // background thread
    runOnUiThread { updateView(data) }  // back to main thread
}.start()

// Coroutines way — clean and safe
viewModelScope.launch {                       // Main thread
    val data = withContext(Dispatchers.IO) {
        api.fetchData()                       // IO thread
    }                                         // back to Main
    updateState(data)                         // Main thread
}

// Room + Coroutines — automatic IO dispatch
@Dao
interface UserDao {
    @Query("SELECT * FROM users")
    suspend fun getAllUsers(): List<User>  // Room runs on IO automatically

    @Query("SELECT * FROM users")
    fun observeUsers(): Flow<List<User>>   // Flow runs on IO automatically
}

// Retrofit + coroutines — suspend automatically
interface ApiService {
    @GET("/users")
    suspend fun getUsers(): List<User>  // Retrofit handles IO switching
}

// Dispatchers.Main.immediate — avoid unnecessary posts
// If already on Main, runs immediately; otherwise posts
// viewModelScope uses Main.immediate internally
- Main thread: UI only — CalledFromWrongThreadException if violated
- NetworkOnMainThreadException: Android blocks network calls on main thread
- withContext(IO): switch to IO for all blocking operations
- Room and Retrofit suspend functions handle IO automatically
- Main.immediate: run immediately if already on main, avoids unnecessary dispatch
Room and modern Retrofit handle IO dispatch automatically for suspend functions. You still need withContext(IO) for raw file I/O, legacy libraries, or SharedPreferences. Knowing which libraries handle it and which don't shows real-world experience.
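For the cases libraries don't cover, a hedged sketch of wrapping blocking SharedPreferences access — the key names are illustrative:

```kotlin
// Hypothetical settings repository — SharedPreferences is a blocking API
class SettingsRepository(private val prefs: SharedPreferences) {

    // First read can hit disk while the preference file loads — keep off Main
    suspend fun isDarkMode(): Boolean = withContext(Dispatchers.IO) {
        prefs.getBoolean("dark_mode", false)
    }

    // apply() is async, but commit() blocks — wrap commit() when you
    // need the success/failure result
    suspend fun setDarkMode(enabled: Boolean): Boolean = withContext(Dispatchers.IO) {
        prefs.edit().putBoolean("dark_mode", enabled).commit()
    }
}
```

The same wrapping applies to raw `File` reads/writes and any legacy blocking SDK call: one `withContext(Dispatchers.IO)` at the repository boundary keeps every caller main-safe.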
Android provides three built-in scopes tied to different lifecycle owners. Choosing the right one prevents memory leaks and cancelled-work bugs.
// viewModelScope — tied to ViewModel lifecycle
class UserViewModel : ViewModel() {
    fun loadUser() {
        viewModelScope.launch {
            val user = repo.getUser()
            _state.value = user
        }
        // Cancelled when ViewModel.onCleared() is called
        // Survives configuration changes (rotation)
    }
}

// lifecycleScope — tied to Activity/Fragment lifecycle
class MainActivity : AppCompatActivity() {
    override fun onCreate(...) {
        lifecycleScope.launch {
            startAnimation()  // cancelled on Activity destroy
        }
        // Cancelled when Activity is destroyed
        // Cancelled on ROTATION (Activity recreates!)
    }
}

// viewLifecycleOwner.lifecycleScope — tied to Fragment VIEW
class UserFragment : Fragment() {
    override fun onViewCreated(...) {
        // ✅ Correct for collecting UI state
        viewLifecycleOwner.lifecycleScope.launch {
            viewModel.state.collect { updateUi(it) }
        }

        // ❌ Wrong — Fragment lifecycle outlives view
        lifecycleScope.launch { viewModel.state.collect { } }
    }
}

// repeatOnLifecycle — suspend until lifecycle state
viewLifecycleOwner.lifecycleScope.launch {
    repeatOnLifecycle(Lifecycle.State.STARTED) {
        viewModel.state.collect { updateUi(it) }
        // Stops when STOPPED, resumes when STARTED
    }
}
- viewModelScope: best for data operations — survives rotation, outlives Activity
- lifecycleScope: use for Activity/Fragment UI operations — cancelled on destroy
- viewLifecycleOwner.lifecycleScope: Fragment UI work — cancelled when view destroyed
- Never use lifecycleScope in Fragment for UI collection — view may be gone
- repeatOnLifecycle(STARTED): stops collection when backgrounded — lifecycle-aware
The Fragment gotcha: Fragment's lifecycleScope is NOT the same as its viewLifecycleOwner.lifecycleScope. The Fragment can exist without a view (back stack). Collecting UI state with the wrong scope = crash when view is null. Always use viewLifecycleOwner for UI work in Fragments.
Both are lazy, sequential, and use the same operators. The key difference: Sequence is synchronous (blocking), Flow is asynchronous (suspending). Never use Sequence for IO operations.
// Sequence — synchronous, lazy, no suspend support
val seq = sequence {
    yield(1)
    yield(2)
    // yield(api.fetch()) ❌ cannot suspend here!
}
seq.filter { it > 0 }.forEach { println(it) }  // blocking

// Flow — asynchronous, lazy, suspend support
val flow = flow {
    emit(1)
    emit(api.fetch())  // ✅ can suspend here
}
flow.filter { it > 0 }.collect { println(it) }  // suspending

// Sequence: perfect for in-memory lazy computations
val fibonacci = sequence {
    var a = 0; var b = 1
    while (true) { yield(a); val c = a + b; a = b; b = c }
}
fibonacci.take(10).toList()  // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

// When to use which:
// Sequence: pure in-memory data, no IO, CPU-bound lazy eval
// Flow: IO operations, async, time-based, reactive streams

// Sequence.asFlow() — convert to Flow when you need async operators
val asyncFibonacci = fibonacci.asFlow().take(10)
- Sequence: synchronous — operators run on calling thread, cannot suspend
- Flow: asynchronous — operators can suspend, cross threads with withContext
- Use Sequence for in-memory lazy computation — Fibonacci, list processing
- Use Flow for anything involving IO, timing, or coroutines
- asFlow(): convert Sequence to Flow when you need async capabilities
Performance trap: Sequence operators run on the calling thread synchronously. If you use a Sequence for a large in-memory transformation on the main thread, you'll ANR. For in-memory + concurrent, use Flow with Dispatchers.Default. For pure lazy in-memory, Sequence is lighter than Flow.
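One way to sketch moving that heavy in-memory work off the main thread with Flow — the transform function is a placeholder for any CPU-bound step:

```kotlin
// CPU-heavy transformation on Dispatchers.Default via flowOn
// expensiveTransform() is a hypothetical CPU-bound function
fun processedItems(raw: List<Item>): Flow<List<Item>> =
    flow { emit(expensiveTransform(raw)) }
        .flowOn(Dispatchers.Default)  // upstream (the flow block) runs on Default

// Collection happens on the caller's dispatcher (e.g. Main) — UI stays safe
viewModelScope.launch {
    processedItems(items).collect { renderList(it) }
}
```

flowOn only affects operators *above* it, so the expensive work runs on Default while `collect` still sees results on Main — the same split a Sequence cannot express.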
Legacy libraries use callbacks. suspendCancellableCoroutine wraps them into suspend functions — the modern, clean way to bridge callback APIs with coroutines.
// Old callback API
interface LocationCallback {
    fun onLocation(location: Location)
    fun onError(e: Exception)
}
fun getLocation(callback: LocationCallback) { /* ... */ }

// suspendCancellableCoroutine — bridge to coroutines
suspend fun getLocationAsync(): Location = suspendCancellableCoroutine { cont ->
    val callback = object : LocationCallback {
        override fun onLocation(location: Location) {
            cont.resume(location)  // success — resume coroutine
        }
        override fun onError(e: Exception) {
            cont.resumeWithException(e)  // failure — throw in coroutine
        }
    }
    getLocation(callback)

    // Cancellation cleanup — called if coroutine is cancelled
    cont.invokeOnCancellation { locationManager.removeUpdates(callback) }
}

// Usage — now just a suspend function!
viewModelScope.launch {
    val loc = getLocationAsync()  // suspends until callback fires
    showOnMap(loc)
}

// callbackFlow — for repeated callbacks → Flow
fun locationUpdates(): Flow<Location> = callbackFlow {
    val listener = LocationListener { trySend(it) }
    locationManager.addUpdates(listener)
    awaitClose { locationManager.removeUpdates(listener) }
    // awaitClose — cleanup when flow is cancelled
}
- suspendCancellableCoroutine: wraps one-shot callbacks — one resume or resumeWithException
- invokeOnCancellation: cleanup when coroutine is cancelled — prevent listener leaks
- callbackFlow: wraps repeated callbacks — emit multiple values as a Flow
- awaitClose: required cleanup for callbackFlow — always remove listeners here
- trySend(): non-suspending send for callbacks — won't throw if channel closed
callbackFlow is how you convert location updates, Bluetooth events, or sensor readings into Flow. The awaitClose block is mandatory — without it, listeners accumulate every time the flow is collected, causing memory leaks.
This is a classic concurrency design question. The correct structure depends on dependencies between calls — independent calls should be parallel, dependent calls sequential.
// Scenario: user (needed) → orders (needs userId) → recommendations (independent)
@HiltViewModel
class DashboardViewModel @Inject constructor(
    private val userRepo: UserRepository,
    private val orderRepo: OrderRepository,
    private val recoRepo: RecoRepository
) : ViewModel() {

    fun loadDashboard() {
        viewModelScope.launch {
            _state.value = DashboardState.Loading

            // Parallel: user + recommendations (independent)
            val userDeferred = async { userRepo.getUser() }
            val recoDeferred = async { recoRepo.get() }

            // Wait for user first (orders depend on it)
            val user = userDeferred.await()

            // Sequential: orders depend on userId
            val orders = orderRepo.getOrders(user.id)

            // Recos may already be done by now
            val recos = recoDeferred.await()

            _state.value = DashboardState.Success(user, orders, recos)
        }
    }

    // With supervisorScope — partial failure OK
    fun loadDashboardResilient() {
        viewModelScope.launch {
            supervisorScope {
                val userDeferred = async { runCatching { userRepo.getUser() } }
                val recoDeferred = async { runCatching { recoRepo.get() } }

                val user = userDeferred.await().getOrNull() ?: return@supervisorScope
                val orders = runCatching { orderRepo.getOrders(user.id) }
                    .getOrDefault(emptyList())
                val recos = recoDeferred.await().getOrDefault(emptyList())

                _state.value = DashboardState.Success(user, orders, recos)
            }
        }
    }
}
- Identify dependencies: orders depend on userId — must be sequential after user
- Independent calls: user + recommendations can run in parallel with async
- await() in dependency order: user.await() before orders, recos.await() at end
- supervisorScope: non-critical failures (recommendations) don't block critical data
- Timing: total time ≈ max(user, recos) + orders — not sum of all three
Draw the dependency graph before coding: user → orders (sequential). user ‖ recommendations (parallel). Total time: max(userTime, recoTime) + ordersTime instead of userTime + ordersTime + recoTime. Showing you can reason about timing complexity separates senior from mid-level answers.
Coroutines provide withTimeout and withTimeoutOrNull for clean timeout handling — far simpler than the thread-based alternatives.
// withTimeout — throws TimeoutCancellationException
try {
    val user = withTimeout(5000L) {  // 5 second limit
        api.fetchUser()
    }
    _state.value = UiState.Success(user)
} catch (e: TimeoutCancellationException) {
    _state.value = UiState.Error("Request timed out")
}

// withTimeoutOrNull — returns null on timeout (cleaner)
val user = withTimeoutOrNull(5000L) { api.fetchUser() }
_state.value = if (user != null) UiState.Success(user)
               else UiState.Error("Timed out")

// OkHttp timeout — different layer (transport level)
val okHttpClient = OkHttpClient.Builder()
    .connectTimeout(10, TimeUnit.SECONDS)
    .readTimeout(30, TimeUnit.SECONDS)
    .writeTimeout(15, TimeUnit.SECONDS)
    .build()

// Best practice: BOTH layers of timeout
// OkHttp: network-level (connect, read, write)
// withTimeout: business-level (overall operation)

// Retry with exponential backoff + timeout
suspend fun fetchWithRetry(): User {
    var backoff = 1000L  // named backoff so it doesn't shadow delay()
    repeat(3) { attempt ->
        val result = withTimeoutOrNull(5000L) { api.fetchUser() }
        if (result != null) return result
        if (attempt < 2) {
            delay(backoff)
            backoff *= 2
        }
    }
    throw IOException("Failed after 3 attempts")
}
- withTimeout: throws TimeoutCancellationException — handle it explicitly
- withTimeoutOrNull: cleaner — returns null on timeout, no exception handling needed
- OkHttp timeouts: transport-level — different from coroutine timeouts
- Both layers recommended: OkHttp for transport, withTimeout for business logic
- TimeoutCancellationException: is a CancellationException — don't swallow it
Use both OkHttp timeout AND withTimeout. OkHttp handles TCP-level timeouts. withTimeout handles the business case: "if this entire operation takes more than 5 seconds, fail gracefully." They serve different purposes at different layers.
Retry with exponential backoff is a production pattern for resilient network calls. Coroutines make it clean with a simple loop and delay.
// Clean retry with exponential backoff
suspend fun <T> retryWithBackoff(
    maxRetries: Int = 3,
    initialDelay: Long = 1000L,
    maxDelay: Long = 10_000L,
    factor: Double = 2.0,
    block: suspend () -> T
): T {
    var currentDelay = initialDelay
    repeat(maxRetries) { attempt ->
        try {
            return block()
        } catch (e: Exception) {
            if (e is CancellationException) throw e  // never swallow!
            if (attempt == maxRetries - 1) throw e   // last attempt: rethrow
            println("Attempt ${attempt + 1} failed, retrying in ${currentDelay}ms")
            delay(currentDelay)
            currentDelay = minOf((currentDelay * factor).toLong(), maxDelay)
        }
    }
    throw IllegalStateException("Should not reach here")
}

// Usage
viewModelScope.launch {
    val user = retryWithBackoff(maxRetries = 3, initialDelay = 500L) {
        api.fetchUser()
    }
    _state.value = UiState.Success(user)
}

// Using Flow retry operators
repo.getUser()
    .retry(3) { cause ->
        cause is IOException  // only retry network errors
    }
    .retryWhen { cause, attempt ->
        if (cause is IOException && attempt < 3) {
            delay(1000L * (2.0.pow(attempt.toDouble())).toLong())
            true  // retry
        } else false  // don't retry
    }
    .collect { updateUi(it) }
- Exponential backoff: delay doubles each attempt — 1s, 2s, 4s, 8s
- maxDelay cap: prevents extremely long waits — cap at 10-30 seconds
- Always re-throw CancellationException — don't swallow it in the catch block
- Flow.retry: declarative retry for Flow-based APIs
- Flow.retryWhen: full control — access to cause and attempt count
Always add jitter to backoff in production — multiple clients retrying at exactly the same intervals causes "thundering herd" at your server. Add delay += Random.nextLong(0, currentDelay/2) to spread retries.
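A hedged sketch of folding that jitter into the backoff helper above — the exact jitter formula (up to half the current backoff) is one common choice, not a standard:

```kotlin
import kotlin.random.Random
import kotlinx.coroutines.CancellationException
import kotlinx.coroutines.delay

suspend fun <T> retryWithJitter(
    maxRetries: Int = 3,
    initialDelay: Long = 1000L,
    maxDelay: Long = 10_000L,
    block: suspend () -> T
): T {
    var backoff = initialDelay
    repeat(maxRetries) { attempt ->
        try {
            return block()
        } catch (e: Exception) {
            if (e is CancellationException) throw e  // never swallow
            if (attempt == maxRetries - 1) throw e   // out of attempts
            // Jitter: randomise each wait so many clients don't retry in lockstep
            val jittered = backoff + Random.nextLong(0, backoff / 2 + 1)
            delay(minOf(jittered, maxDelay))
            backoff = minOf(backoff * 2, maxDelay)
        }
    }
    error("unreachable")
}
```

With three clients and plain backoff, all three hit the server at t = 1s, 3s, 7s; with jitter their retries spread across those windows, flattening the load spike.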
Debounce waits for a pause before emitting. Throttle limits emission rate. Both are critical for search fields, button clicks, and real-time updates.
// debounce — wait for PAUSE before emitting
// Resets timer on each emission
searchQueryFlow
    .debounce(300)  // wait 300ms after last keystroke
    .collect { query -> search(query) }
// User types "android" — only fires ONCE, 300ms after "d"

// throttleFirst — emit first value, ignore for duration
// Not in stdlib — implement manually
fun <T> Flow<T>.throttleFirst(periodMs: Long): Flow<T> = flow {
    var lastEmit = 0L
    collect { value ->
        val now = System.currentTimeMillis()
        if (now - lastEmit >= periodMs) {
            lastEmit = now
            emit(value)
        }
    }
}

// Real-world use cases:
// debounce(300): search-as-you-type
// debounce(500): auto-save form inputs
// throttleFirst: button clicks (prevent double-submit)
// debounce(1000): scroll position persistence

// sample — emit the LATEST value in each time window
locationFlow
    .sample(1000)  // at most one value per second
    .collect { updateMap(it) }

// distinctUntilChanged — deduplicate consecutive emissions
userFlow
    .map { it.name }
    .distinctUntilChanged()  // only emit when name actually changes
    .collect { updateNameView(it) }
- debounce: waits for pause — ideal for search, auto-save, resize events
- throttleFirst: first emission in window — prevent double-submit on buttons
- sample: take one value per time window — location updates, stock prices
- distinctUntilChanged: skip consecutive duplicates — reduce unnecessary UI updates
- Combine debounce + distinctUntilChanged for the cleanest search pipeline
Button double-click prevention is a common production bug. Wrap button clicks in a Flow with throttleFirst(1000) or use a simple flag. The coroutine approach is cleaner than managing booleans manually — and testable with virtual time in runTest.
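The throttleFirst windowing rule is easy to test in isolation as a pure function over recorded click timestamps. `throttleFirstEvents` is a hypothetical model (not the Flow operator itself): keep an event only if at least `periodMs` has passed since the last kept event.

```kotlin
// Hypothetical pure model of throttleFirst's windowing rule over
// (timestampMs, value) events — no coroutines or wall clock needed.
fun <T> throttleFirstEvents(
    events: List<Pair<Long, T>>,
    periodMs: Long
): List<T> {
    var lastEmit = Long.MIN_VALUE / 2 // "never emitted" sentinel
    val kept = mutableListOf<T>()
    for ((time, value) in events) {
        if (time - lastEmit >= periodMs) {
            lastEmit = time
            kept += value
        }
    }
    return kept
}

fun main() {
    // Rapid double-click at 0ms and 120ms, then a later click at 1500ms
    val clicks = listOf(0L to "click1", 120L to "click2", 1500L to "click3")
    println(throttleFirstEvents(clicks, periodMs = 1000L)) // [click1, click3]
}
```

The double-click at 120ms is dropped because it falls inside the 1000ms window opened by the first click — exactly the double-submit protection the tip describes.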
Polling with coroutines is elegant — a simple loop with delay. The scope lifecycle handles cleanup automatically, and you can stop on success or error.
// Clean polling with coroutines
@HiltViewModel
class OrderViewModel @Inject constructor(
    private val repo: OrderRepository
) : ViewModel() {

    private var pollingJob: Job? = null

    fun startPolling(orderId: String) {
        pollingJob?.cancel() // cancel any existing polling
        pollingJob = viewModelScope.launch {
            while (isActive) {
                val status = runCatching { repo.getOrderStatus(orderId) }
                    .getOrNull()
                when (status) {
                    OrderStatus.DELIVERED -> {
                        _state.value = UiState.Success(status)
                        break // stop polling on terminal state
                    }
                    OrderStatus.FAILED -> {
                        _state.value = UiState.Error("Order failed")
                        break
                    }
                    else -> _state.value = UiState.Loading(status)
                }
                delay(5_000L) // wait 5 seconds
            }
        }
    }

    fun stopPolling() {
        pollingJob?.cancel()
    }

    override fun onCleared() {
        pollingJob?.cancel()
    }
}

// Elegant Flow-based polling
fun pollOrderStatus(orderId: String): Flow<OrderStatus> = flow {
    while (true) {
        emit(repo.getOrderStatus(orderId))
        delay(5_000L)
    }
}.takeWhile { it != OrderStatus.DELIVERED && it != OrderStatus.FAILED }
- while(isActive): poll loop respects cancellation — stops when scope is cancelled
- pollingJob?.cancel(): stop previous poll before starting new one
- break on terminal state: stop when order is delivered/failed — no zombie polling
- Flow-based polling: takeWhile stops emission on terminal state — elegant and composable
- delay inside loop: cooperative — respects cancellation between polls
Always cancel the previous polling Job before starting a new one. If the user taps "refresh" twice, two polling loops will run simultaneously without this guard. Storing the Job reference and cancelling it is the standard pattern.
Coroutines and threads both achieve concurrency but with completely different resource models. A thread is an OS-level resource -- creating thousands is expensive. A coroutine is a lightweight suspended computation -- you can have millions with minimal overhead because they share a small thread pool.
// Threads -- OS managed, ~1MB stack each
val t = Thread { doWork() }
t.start()
// 10,000 threads ≈ 10GB RAM -- impractical

// Coroutines -- suspend and resume on a shared thread pool
val scope = CoroutineScope(Dispatchers.IO)
repeat(100_000) {
    scope.launch { delay(1000) }
}
// 100k coroutines, a handful of threads

// Suspension: coroutine releases its thread while waiting
suspend fun fetchData(): Data {
    val result = withContext(Dispatchers.IO) {
        api.call() // suspends, thread is free
    }
    return result // resumes on the original dispatcher
}
- Thread: OS-managed, ~1MB stack -- creating thousands is expensive and causes context-switch overhead
- Coroutine: user-space, ~1KB -- suspends instead of blocking, returns its thread to the pool while waiting
- 100,000 coroutines run on ~64 IO threads -- the thread pool is shared, coroutines are scheduled cooperatively
- Suspension is the key: delay(1000) suspends the coroutine but doesn't block the thread -- the thread handles other work
- Structured concurrency: coroutine scope defines lifetime -- child coroutines are cancelled when the scope is cancelled
Cooperative scheduling is both a strength and a weakness. CPU-bound coroutines with no suspension points block the entire thread they're running on, starving other coroutines. Always use Dispatchers.Default for CPU work AND add ensureActive() or yield() in tight loops.
The actor pattern confines mutable state to a single coroutine. All mutations happen via message-passing through a channel — eliminating shared state and race conditions.
// Actor pattern — state confined to one coroutine
sealed class CounterMsg {
    object Increment : CounterMsg()
    object Decrement : CounterMsg()
    class GetCount(val response: CompletableDeferred<Int>) : CounterMsg()
}

fun CoroutineScope.counterActor() = actor<CounterMsg> {
    var count = 0 // private, never shared!
    for (msg in channel) {
        when (msg) {
            is CounterMsg.Increment -> count++
            is CounterMsg.Decrement -> count--
            is CounterMsg.GetCount -> msg.response.complete(count)
        }
    }
}

// Usage — all mutations via messages, no locks needed!
val counter = viewModelScope.counterActor()
counter.send(CounterMsg.Increment)
counter.send(CounterMsg.Increment)

val response = CompletableDeferred<Int>()
counter.send(CounterMsg.GetCount(response))
println(response.await()) // 2 — always correct

// Modern alternative: Redux-style with Channel
val actions = Channel<Action>()
viewModelScope.launch {
    var state = AppState()
    for (action in actions) {
        state = reduce(state, action) // pure function
        _uiState.value = state        // publish to UI
    }
}
- Actor: single coroutine owns mutable state — no locks, no races
- Messages: all mutations via sealed class messages — type-safe commands
- No shared state: count is never accessed outside the actor coroutine
- Redux pattern: actions channel → reduce function → StateFlow — scalable unidirectional flow
- actor{} is marked @ObsoleteCoroutinesApi in kotlinx.coroutines — prefer a hand-rolled Channel loop in production
The actor pattern is the theoretically correct answer to concurrent state mutation. In practice, MVI architecture (Model-View-Intent) in Android is the actor pattern — actions in, state out, single reducer coroutine. Connecting actor to MVI shows architectural depth.
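The Redux-style loop above hinges on `reduce` being a pure function. A minimal sketch, using hypothetical state and action types (not from the article), shows why that purity matters — you can replay a list of actions and always land on the same state:

```kotlin
// Hypothetical state/action types illustrating the pure reducer
// at the heart of the Redux/MVI pattern.
data class CounterState(val count: Int = 0)

sealed class CounterAction {
    object Increment : CounterAction()
    object Decrement : CounterAction()
    data class SetTo(val value: Int) : CounterAction()
}

// Pure function: same (state, action) always yields the same new state.
// No I/O, no mutation — trivially unit-testable without coroutines.
fun reduce(state: CounterState, action: CounterAction): CounterState = when (action) {
    CounterAction.Increment -> state.copy(count = state.count + 1)
    CounterAction.Decrement -> state.copy(count = state.count - 1)
    is CounterAction.SetTo -> state.copy(count = action.value)
}

fun main() {
    // Folding a list of actions replays history — handy for debugging MVI apps
    val actions = listOf(
        CounterAction.Increment,
        CounterAction.Increment,
        CounterAction.SetTo(10)
    )
    val final = actions.fold(CounterState(), ::reduce)
    println(final.count) // 10
}
```

In the actor/MVI framing, the single reducer coroutine consumes actions from the channel and applies exactly this function — all concurrency is in the channel, none in the state logic.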
zip pairs values 1:1 and waits for both. combine emits whenever either source changes — different semantics for different use cases.
// zip — strict 1:1 pairing, waits for both
val names = flowOf("Alice", "Bob", "Carol")
val scores = flowOf(100, 200)

names.zip(scores) { name, score -> "$name: $score" }
    .collect { println(it) }
// "Alice: 100"
// "Bob: 200"
// Carol dropped — zip stopped when the shorter flow completed

// combine — emits when EITHER changes, uses latest of the other
val query = MutableStateFlow("")
val filters = MutableStateFlow(emptyList<String>())

query.combine(filters) { q, f -> search(q, f) }
    .collect { showResults(it) }
// query changes → emits with latest filters
// filters change → emits with latest query

// combine with 3+ flows
combine(userFlow, settingsFlow, locationFlow) { user, settings, location ->
    HomeUiState(user, settings, location)
}

// Real-world example: search with filters and sort
combine(
    _searchQuery.debounce(300),
    _selectedCategory,
    _sortOrder
) { query, category, sort ->
    repo.search(query, category, sort) // triggers new search
}
    .flatMapLatest { it } // cancels previous search
    .collect { _results.value = it }

// When to use:
// zip: ordered pairing, parallel producers, 1:1 relationship
// combine: reactive state from multiple sources, any change triggers
- zip: 1:1 pairing — waits for both, stops at shorter flow, order preserved
- combine: latest-wins — emits when any source changes, uses cached latest of others
- combine(f1, f2, f3): multi-source state — most common for filter+search+sort
- combine is for state; zip is for ordered matching of parallel streams
- combine + flatMapLatest: the complete search pipeline pattern
combine is the reactive search filter pattern: whenever query, category, or sort changes, combine fires with all three current values. The interview question "search screen with multiple filters" is almost always solved with combine + debounce + flatMapLatest.
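Flow.zip's stop-at-the-shorter-flow behavior mirrors Kotlin's stdlib List.zip, which makes the semantics easy to demonstrate without any coroutine machinery:

```kotlin
fun main() {
    // List.zip has the same 1:1, stop-at-shorter semantics as Flow.zip
    val names = listOf("Alice", "Bob", "Carol")
    val scores = listOf(100, 200)

    val paired = names.zip(scores) { name, score -> "$name: $score" }
    println(paired) // [Alice: 100, Bob: 200] — Carol is dropped
}
```

There is no list analogue of combine, because combine is about *time* — reacting whenever any source changes — which is exactly why it belongs to Flow and not to collections.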
Image processing is CPU-intensive — it should run on Dispatchers.Default. Combining it with a proper scope and progress updates keeps the UI responsive.
// Problem: image processing on Main thread → UI freeze

// ❌ Wrong
fun onImagesSelected(uris: List<Uri>) {
    val results = uris.map { processImage(it) } // blocks main thread!
    showResults(results)
}

// ✅ Correct — Dispatchers.Default for CPU work
fun onImagesSelected(uris: List<Uri>) {
    viewModelScope.launch {
        _state.value = ProcessState.Loading(0, uris.size)
        val results = withContext(Dispatchers.Default) {
            uris.mapIndexed { index, uri ->
                ensureActive() // check cancellation between images
                val result = processImage(uri)
                // Update progress on Main
                withContext(Dispatchers.Main) {
                    _state.value = ProcessState.Loading(index + 1, uris.size)
                }
                result
            }
        }
        _state.value = ProcessState.Done(results)
    }
}

// Parallel processing — process all images concurrently
val results = withContext(Dispatchers.Default) {
    uris.map { uri ->
        async { processImage(uri) } // parallel!
    }.awaitAll()
}

// Limit parallelism — avoid overwhelming the CPU
val limitedDispatcher = Dispatchers.Default.limitedParallelism(4)
uris.map { async(limitedDispatcher) { processImage(it) } }.awaitAll()
- Never process images on Main — causes UI freeze and ANR
- Dispatchers.Default: CPU-optimised thread pool — correct for image processing
- ensureActive(): check cancellation between images — user can cancel
- Progress updates: use withContext(Main) inside Default block to update UI
- limitedParallelism(): control max concurrent jobs — prevent memory pressure
Dispatchers.Default is often overlooked in favour of IO. IO is for blocking/waiting (network, disk). Default is for CPU work (image processing, sorting, JSON parsing). Using IO for CPU work wastes the IO thread pool and can block network calls.
Flow error handling has specific rules — catch only handles upstream errors, and exceptions in collect must be handled with try-catch. Understanding the asymmetry prevents bugs.
// catch — handles UPSTREAM errors only
repo.getUsers()
    .map { transform(it) } // ← catch handles errors from here
    .catch { e ->
        emit(emptyList()) // emit fallback
        _error.value = e.message
    }
    .collect { // ← catch does NOT handle errors here!
        updateUi(it)
    }

// Error in collect — must use try-catch
try {
    flow.collect { value ->
        updateUi(value) // if this throws, catch() won't catch it
    }
} catch (e: Exception) {
    handleError(e)
}

// Complete error handling chain
viewModelScope.launch {
    repo.getUsers()
        .onStart { _loading.value = true }
        .catch { e ->
            _loading.value = false
            _error.value = e.message
            emit(emptyList())
        }
        .onCompletion { _loading.value = false }
        .collect { _users.value = it }
}

// Transparent error forwarding in Flow
fun safeFlow(): Flow<User> = flow {
    emit(api.getUser())
}.catch { e ->
    if (e is NetworkException) emit(User.EMPTY)
    else throw e // re-throw unexpected errors
}
- catch: handles errors from upstream operators only — not from collect{}
- try-catch around collect: handles errors in the terminal operator
- Transparent catch: re-throw unexpected errors, only handle known ones
- Emit in catch: provide fallback value instead of failing completely
- Never catch CancellationException — always re-throw it
The catch operator's asymmetry is a common interview trap. "Why is my catch not working?" — because the exception is thrown inside collect{}, which is downstream. Upstream errors (inside flow{}, map{}, filter{}) are caught. Downstream errors (inside collect{}) are not.
In structured concurrency, parent and child coroutines have a strict relationship — parent waits for children, children cancel with parent, and exceptions propagate up unless isolated.
// Parent-child relationship rules:
// 1. Parent COMPLETES only after all children complete
// 2. Parent CANCELLATION cancels all children
// 3. Child FAILURE propagates to parent (with Job)
// 4. Child FAILURE is isolated (with SupervisorJob)

// Rule 1: parent waits for children
viewModelScope.launch {
    launch { delay(1000); println("child 1") }
    launch { delay(2000); println("child 2") }
    println("parent starting")
    // Parent coroutine ends at 2 seconds, not immediately
}

// Rule 3: child failure propagates up
val scope = CoroutineScope(Job())
scope.launch {
    launch {
        throw IOException("Child failed") // propagates up!
    }
    launch { doWork() } // cancelled due to sibling failure
}
// BOTH child coroutines cancelled, the scope's Job is failed

// Rule 4: SupervisorJob isolates failures
// (only DIRECT children of the SupervisorJob are isolated)
val supervisedScope = CoroutineScope(SupervisorJob())
supervisedScope.launch { throw IOException("fails") } // isolated
supervisedScope.launch { doWork() }                   // continues!

// coroutineScope{} — structured scope, propagates failure
// supervisorScope{} — supervisor scope, isolates failure
suspend fun loadAll() = supervisorScope {
    val a = async { fetchA() } // isolated
    val b = async { fetchB() } // isolated
    Results(
        runCatching { a.await() }.getOrNull(),
        runCatching { b.await() }.getOrNull()
    )
}
- Parent waits for all children before completing — structured, predictable
- Parent cancellation cascades down to all children — clean resource cleanup
- Child failure with Job: propagates up, cancels parent and siblings
- Child failure with SupervisorJob: isolated, other children continue
- coroutineScope{}: propagates — use for atomic operations. supervisorScope{}: isolates — use for partial-failure OK scenarios
The choice between coroutineScope and supervisorScope is architectural: "Is this operation atomic (all or nothing) or resilient (partial success is OK)?" Downloading a ZIP file → coroutineScope (all parts needed). Dashboard widgets → supervisorScope (some widgets failing is OK).
Main.immediate skips the message queue if already on the main thread — executing immediately rather than posting to the handler. This avoids one frame of latency in common scenarios.
// Dispatchers.Main — always posts to the main thread message queue
// Even if already on the main thread → 1 frame latency
viewModelScope.launch(Dispatchers.Main) {
    _state.value = UiState.Loading // posted to queue (extra frame)
}

// Dispatchers.Main.immediate — runs immediately if already on main
viewModelScope.launch(Dispatchers.Main.immediate) {
    _state.value = UiState.Loading // runs immediately on the main thread
}

// viewModelScope uses Main.immediate by default
// That's why immediate UI updates work without frame delay

// When you'd notice the difference:
// Dispatchers.Main: state update visible 1 frame later
// Dispatchers.Main.immediate: state update visible THIS frame

// Example where it matters — loading flash prevention
// With Main: brief empty-state flash before Loading shows
// With Main.immediate: Loading shows on the same frame as the trigger

// In tests — TestCoroutineScheduler controls both
// Use StandardTestDispatcher for predictable test behavior
val testDispatcher = StandardTestDispatcher()
Dispatchers.setMain(testDispatcher) // override for tests
- Main: always posts to Android Handler — runs on next main thread iteration
- Main.immediate: runs immediately if already on main — no extra frame
- viewModelScope: uses Main.immediate — that's why StateFlow updates are instant
- Practical: prevents loading state flash when starting coroutines from main thread
- Testing: use Dispatchers.setMain(TestDispatcher) to control both Main and Main.immediate
Most developers don't notice Main vs Main.immediate because viewModelScope already uses the right one. But if you manually create CoroutineScope(Dispatchers.Main), you might see a one-frame flash. This is a subtle but real production issue in animation-heavy UIs.
This combines parallel execution, progress reporting, error isolation, and cancellation — a comprehensive coroutines scenario covering all major concepts.
@HiltViewModel
class UploadViewModel @Inject constructor(
    private val uploadRepo: UploadRepository
) : ViewModel() {

    private val _progress = MutableStateFlow(UploadProgress(0, 0, 0))
    val progress = _progress.asStateFlow()
    private var uploadJob: Job? = null

    fun uploadAll(files: List<File>) {
        uploadJob = viewModelScope.launch {
            val total = files.size
            var done = 0
            var failed = 0
            val mutex = Mutex()
            supervisorScope { // one failure doesn't cancel others
                files.map { file ->
                    async(Dispatchers.IO) {
                        try {
                            uploadRepo.upload(file)
                            mutex.withLock {
                                done++
                                _progress.value = UploadProgress(done, failed, total)
                            }
                        } catch (e: CancellationException) {
                            throw e // always re-throw!
                        } catch (e: Exception) {
                            mutex.withLock {
                                failed++
                                _progress.value = UploadProgress(done, failed, total)
                            }
                        }
                    }
                }.awaitAll()
            }
        }
    }

    fun cancel() {
        uploadJob?.cancel()
    }
}

data class UploadProgress(val done: Int, val failed: Int, val total: Int)
- supervisorScope: one upload failure doesn't cancel others
- async(IO): parallel uploads on IO dispatcher
- Mutex: thread-safe progress counter — concurrent async blocks need synchronisation
- awaitAll(): wait for all uploads to complete
- CancellationException: always re-throw — enables cancel() to work
This question tests everything at once: supervisorScope (isolation), Mutex (concurrent counter), Dispatchers.IO (network), CancellationException re-throw (cancellation), and Job tracking (cancel button). Walk through each decision explicitly — that's what senior interviewers want to see.
Both create a coroutine scope and wait for all children — but coroutineScope suspends (non-blocking) while runBlocking blocks the thread. runBlocking is for tests and main functions only.
// coroutineScope — suspends, non-blocking
// ✅ Use inside suspend functions
suspend fun loadUserAndPosts(): Pair<User, List<Post>> = coroutineScope {
    val user = async { fetchUser() }
    val posts = async { fetchPosts() }
    user.await() to posts.await() // suspends until both done, thread is FREE
}

// runBlocking — BLOCKS the thread
// ❌ Never use in Android production code (ANR!)
// ✅ Only for: main() functions, unit tests
fun main() = runBlocking {
    val result = fetchData()
    println(result)
}

// In tests
@Test
fun testFetch() = runTest { // runTest, not runBlocking, for coroutine tests
    val result = fetchData()
    assertEquals("expected", result)
}

// coroutineScope properties:
// ✅ Inherits parent context
// ✅ Propagates cancellation
// ✅ Waits for all children
// ✅ Propagates exceptions
// ✅ Thread released during suspension

// runBlocking properties:
// ✅ Can be called from non-suspend context
// ❌ BLOCKS the calling thread — freezes the Android UI
// Use runTest{} for coroutine unit tests instead
- coroutineScope: suspends — use in suspend functions for parallel work
- runBlocking: blocks — only for main() and legacy unit tests
- runTest: the test version of runBlocking — virtual time, no real delays
- Both wait for all children before completing — structured
- runBlocking in production Android code = ANR risk
If you see runBlocking in production Android code (not tests), it's a bug. One runBlocking on the main thread with a 100ms IO call = 100ms UI freeze. In a 5-second timeout, that's an ANR. Always use coroutineScope or viewModelScope.launch.
The "stale-while-revalidate" pattern: show cache immediately, then fetch fresh data in background. This is implemented with Flow merging or sequential emissions.
// Pattern 1: Flow emission order — cache then network
fun getUserWithRefresh(id: String): Flow<User> = flow {
    // Emit cached value immediately
    val cached = cache.getUser(id)
    if (cached != null) emit(cached)

    // Fetch fresh from network
    try {
        val fresh = withContext(Dispatchers.IO) { api.getUser(id) }
        cache.saveUser(fresh)
        emit(fresh) // emit updated value
    } catch (e: Exception) {
        if (cached == null) throw e // no cache → propagate error
        // else: cache already shown, network failed quietly
    }
}

// Pattern 2: merge — parallel cache + network
fun getUserMerged(id: String): Flow<User> = merge(
    flowOf(cache.getUser(id)).filterNotNull(),
    flow { emit(api.getUser(id)) }.catch { }
)

// Pattern 3: Room + Retrofit (most common in Android)
fun getUser(id: String): Flow<User> {
    viewModelScope.launch {
        // Background refresh
        withContext(Dispatchers.IO) {
            val fresh = api.getUser(id)
            db.userDao().insertOrReplace(fresh)
        }
    }
    // Room Flow emits immediately from the DB, then again after the insert
    return db.userDao().observeUser(id)
}
- Emit cache first: immediate response, great UX — user sees data instantly
- Fetch and emit fresh: network result updates UI automatically
- Error handling: if network fails but cache exists, show cache silently
- Room + Retrofit: the production pattern — Room Flow auto-updates on DB change
- merge(): parallel cache + network — whichever comes first is shown first
Room + network refresh is the production answer. Room's Flow observes the database — when you update the DB from network, the Flow emits automatically. Single source of truth: DB is the source, network just refreshes it. This is the offline-first architecture pattern.
limitedParallelism creates a view of a dispatcher that limits how many coroutines can run concurrently — crucial for rate limiting, resource control, and avoiding server overload.
// limitedParallelism — limit concurrent coroutines
val limitedIO = Dispatchers.IO.limitedParallelism(4)

// Upload with max 4 concurrent uploads
files.map { file ->
    async(limitedIO) { uploadRepo.upload(file) }
}.awaitAll()
// Only 4 uploads run simultaneously, others wait

// Semaphore — another approach for concurrency limiting
val semaphore = Semaphore(4)
files.map { file ->
    async(Dispatchers.IO) {
        semaphore.withPermit { uploadRepo.upload(file) }
    }
}.awaitAll()

// Chunked processing — process N at a time
files.chunked(10).forEach { chunk ->
    chunk.map { async { process(it) } }.awaitAll()
    // Process 10 at a time, then the next 10
}

// Use cases for limiting parallelism:
// API rate limits: max N concurrent requests to the server
// DB connections: SQLite has limited connections
// Memory: image processing — limit to avoid OOM
// CPU: avoid over-saturating the Default dispatcher

// Default parallelism values:
// Dispatchers.IO: up to 64 threads
// Dispatchers.Default: CPU core count
// limitedParallelism(1) = single-threaded — use for ordered processing
- limitedParallelism(n): at most n coroutines run concurrently on this dispatcher
- Semaphore: explicit permit system — more flexible than limitedParallelism
- chunked(): sequential batching — process N, then next N
- Use cases: API rate limiting, DB connection limits, memory management
- limitedParallelism(1): single-threaded — guaranteed ordering
limitedParallelism was added in kotlinx.coroutines 1.6 — it replaces the old newFixedThreadPoolContext pattern. If you see newFixedThreadPoolContext in a codebase, it should be migrated to Dispatchers.IO.limitedParallelism(n). Knowing the modern API shows you're current.
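The chunked-batching variant from the code above is plain stdlib and worth seeing concretely — `chunked` splits work into fixed-size batches, with the remainder in the last batch:

```kotlin
fun main() {
    // chunked() splits a list into fixed-size batches;
    // the final batch holds whatever is left over.
    // 23 uploads in batches of 10 → sizes 10, 10, 3
    val files = (1..23).map { "file$it.jpg" }
    val batches = files.chunked(10)

    println(batches.map { it.size }) // [10, 10, 3]

    // Each batch would then be launched in parallel and awaited before the next:
    // batches.forEach { batch -> batch.map { async { upload(it) } }.awaitAll() }
}
```

Note the trade-off versus limitedParallelism: chunked batching waits for the *slowest* item in each batch before starting the next, while limitedParallelism keeps exactly n workers busy at all times.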
Room's Flow integration is one of Android's most powerful patterns — the database becomes a reactive source that automatically notifies the UI of any changes.
// Room DAO with Flow
@Dao
interface UserDao {
    @Query("SELECT * FROM users")
    fun observeAll(): Flow<List<User>> // re-emits on any change

    @Query("SELECT * FROM users WHERE id = :id")
    fun observeUser(id: String): Flow<User?>

    @Insert(onConflict = OnConflictStrategy.REPLACE)
    suspend fun insertOrReplace(user: User)
}

// Repository — single source of truth pattern
class UserRepository @Inject constructor(
    private val dao: UserDao,
    private val api: UserApi
) {
    fun observeUser(id: String): Flow<User?> = dao.observeUser(id)

    suspend fun refreshUser(id: String) {
        val fresh = withContext(Dispatchers.IO) { api.getUser(id) }
        dao.insertOrReplace(fresh)
        // Room automatically notifies the observeUser() Flow!
    }
}

// ViewModel — combines observation + refresh
class UserViewModel @Inject constructor(
    private val repo: UserRepository
) : ViewModel() {
    val user = repo.observeUser("123")
        .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), null)

    init {
        viewModelScope.launch { repo.refreshUser("123") }
    }
}
- Room Flow: auto-emits when queried table changes — true reactive database
- Single source of truth: DB is the source, API just populates it
- observeUser returns Flow: UI always shows latest DB state
- refreshUser: update DB, Room Flow notifies UI automatically
- stateIn: convert cold Room Flow to hot StateFlow for multiple UI collectors
The offline-first architecture: UI → observes Room Flow. ViewModel init → refreshes from network. Any DB write triggers the Flow. This means: app works offline, network updates are automatic, and the UI is always consistent. This is what Google's Now in Android sample demonstrates.
Coroutine leaks are a common production issue. They occur when coroutines outlive their scope — typically from GlobalScope, wrong scope choice, or missing cancellation.
// Diagnosing leaks:
// 1. Memory Profiler — heap dumps show CoroutineImpl objects
// 2. Debug.dumpJavaHeap() — analyze coroutine references
// 3. LeakCanary — detects ViewModel/Fragment context leaks
// 4. Log coroutine start/end with CoroutineName

// CAUSE 1: GlobalScope — never cancelled
// ❌ GlobalScope.launch { infinitePolling() } // lives forever!
// ✅ viewModelScope.launch { infinitePolling() } // cancelled with the VM

// CAUSE 2: Wrong scope in a Fragment
// ❌ lifecycleScope outlives the Fragment view
lifecycleScope.launch {
    flow.collect { updateView(it) }
}
// ✅ viewLifecycleOwner ties collection to the Fragment view
viewLifecycleOwner.lifecycleScope.launch {
    repeatOnLifecycle(Lifecycle.State.STARTED) {
        flow.collect { updateView(it) }
    }
}

// CAUSE 3: Holding an Activity/Context reference in a coroutine
// ❌ Activity leaked if the coroutine outlives it
val context = requireActivity() // captured in the lambda!
viewModelScope.launch {
    delay(60_000)
    context.doSomething()
}
// ✅ Use WeakReference or pass data, not a context

// CAUSE 4: callbackFlow without awaitClose
// ❌ Listener never removed
fun leakyFlow() = callbackFlow {
    listener.register { trySend(it) }
    // Missing: awaitClose { listener.unregister() }
}
// ✅ Always include awaitClose
fun safeFlow() = callbackFlow {
    listener.register { trySend(it) }
    awaitClose { listener.unregister() }
}
- GlobalScope: most common leak source — never cancelled, lives until process death
- Wrong Fragment scope: lifecycleScope instead of viewLifecycleOwner.lifecycleScope
- Context capture: Activity/Fragment reference in long-running coroutine = memory leak
- callbackFlow without awaitClose: listeners accumulate, never removed
- Debug: CoroutineName + logging, LeakCanary, Memory Profiler heap dumps
LeakCanary + CoroutineName is the debug stack. Add CoroutineName("UserFetcher") to every launch — if you see it in a heap dump after the screen is gone, it's leaked. Then trace back to which scope launched it.
delay() is coroutine-aware — it suspends the coroutine without blocking the thread. Thread.sleep() blocks the OS thread — never use it inside coroutines.
// Thread.sleep() — BLOCKS the OS thread
viewModelScope.launch(Dispatchers.IO) {
    Thread.sleep(1000) // IO thread blocked for 1 second!
    // While blocked: the thread can't serve other coroutines
    // With 1000 sleep-blocked coroutines: IO pool exhausted → hang
}

// delay() — SUSPENDS the coroutine, frees the thread
viewModelScope.launch(Dispatchers.IO) {
    delay(1000) // coroutine suspended, thread picks up another coroutine
    // 1 second later: coroutine resumed on an available thread
}

// Practical impact on Dispatchers.IO (64 threads):
// Thread.sleep(1000) × 65 coroutines → the 65th coroutine waits 1+ second
// delay(1000) × 1000 coroutines → all complete at ~1 second

// delay() is cancellable — Thread.sleep() is not
val job = viewModelScope.launch {
    delay(10_000) // user navigates away
}
job.cancel() // ✅ delay() cancelled immediately

val job2 = viewModelScope.launch {
    Thread.sleep(10_000) // user navigates away
}
job2.cancel() // ❌ Thread.sleep() continues sleeping for 10 seconds!

// Testing: delay() respects TestCoroutineScheduler
// In runTest, delay(1000) completes instantly
// Thread.sleep(1000) in runTest = an actual 1-second wait
- Thread.sleep: blocks OS thread — no other coroutines can use it while sleeping
- delay(): suspends coroutine — thread freed, serves other coroutines
- delay() is cancellable; Thread.sleep() is not
- delay() works with TestCoroutineScheduler — instant in tests
- In Dispatchers.IO, Thread.sleep() can exhaust the 64-thread pool
Thread.sleep() inside coroutines is always a bug. Even on Dispatchers.IO with 64 threads — 65 sleeping threads exhausts the pool, causing the 65th coroutine to hang. Always use delay() in coroutines.
WebSocket + Flow with auto-reconnection demonstrates callbackFlow, retry operators, and lifecycle-aware collection — a senior-level architecture question.
// WebSocket Flow with automatic reconnection
fun chatMessages(roomId: String): Flow<ChatMessage> = callbackFlow {
    val client = OkHttpClient()
    val request = Request.Builder()
        .url("wss://chat.example.com/room/$roomId")
        .build()

    val ws = client.newWebSocket(request, object : WebSocketListener() {
        override fun onMessage(ws: WebSocket, text: String) {
            trySend(parseMessage(text)) // non-suspending send
        }

        override fun onFailure(ws: WebSocket, t: Throwable, response: Response?) {
            close(t) // closes the flow with an error → triggers retry
        }

        override fun onClosed(ws: WebSocket, code: Int, reason: String) {
            close() // server closed normally
        }
    })

    awaitClose { ws.close(1000, "Client closed") } // cleanup!
}.retryWhen { cause, attempt ->
    if (cause is IOException && attempt < 5) {
        val backoff = 2000L * 2.0.pow(attempt.toDouble()).toLong()
        delay(minOf(backoff, 30_000L)) // max 30s backoff
        true // reconnect
    } else false // give up
}

// ViewModel
val messages = chatMessages("room-123")
    .shareIn(viewModelScope, SharingStarted.WhileSubscribed(), replay = 50)
- callbackFlow: wraps WebSocket callbacks into a Flow
- trySend: non-suspending — safe to call from listener callbacks
- awaitClose: cleanup when flow is cancelled — close WebSocket properly
- retryWhen: exponential backoff reconnection — max 5 attempts, 30s cap
- shareIn: share single WebSocket connection with multiple UI observers
callbackFlow + retryWhen is the production WebSocket pattern. awaitClose is not optional — without it, the WebSocket stays open forever even when no one is collecting. shareIn ensures only ONE WebSocket connection per room regardless of how many UI components observe it.
launch creates a new concurrent coroutine. withContext switches the current coroutine to a different dispatcher sequentially. This distinction affects concurrency, result access, and code clarity.
// withContext — same coroutine, switches dispatcher, returns a result
viewModelScope.launch {
    val user = withContext(Dispatchers.IO) { // switches, waits, returns
        api.fetchUser() // runs on IO
    }
    // back on Main
    _state.value = user // can use the result immediately
}

// launch(IO) — new coroutine, parallel, no result
viewModelScope.launch {
    launch(Dispatchers.IO) { // new coroutine, runs independently
        val user = api.fetchUser()
        // Can't return user to the parent — different coroutine!
        withContext(Dispatchers.Main) { _state.value = user }
    }
    // This runs concurrently — doesn't wait for launch{}
    _state.value = UiState.Loading
}

// When to use which:
// withContext: need the result, sequential, context switch only
// launch(IO): fire-and-forget background work, parallel
// async(IO): parallel + need the result (await it)

// Repository pattern — withContext is the right choice
suspend fun getUser(id: String): User = withContext(Dispatchers.IO) {
    dao.getUser(id) // result returned to the caller
}

// Anti-pattern: launch(IO) in a repository
// suspend fun getUser(id: String): User {
//     launch(IO) { dao.getUser(id) } // ❌ result lost!
// }
- withContext: sequential — suspends until done, result available immediately after
- launch(IO): parallel — new concurrent coroutine, no easy result return
- async(IO): parallel + result — use await() to get value
- Repository pattern: always withContext(IO) inside suspend functions
- Anti-pattern: launch(IO) to get a result — result is lost in fire-and-forget
Repository functions should always use withContext(IO), never launch(IO). withContext is a suspend function — it's designed for "switch context, do work, return result." launch is for fire-and-forget. Mixing them up causes subtle bugs where results are lost.
mapLatest and transformLatest cancel the previous transformation when a new value arrives — combining mapping with the cancellation behavior of flatMapLatest.
// mapLatest — map + cancel if new value arrives mid-transform
userIdFlow
    .mapLatest { id ->
        delay(500)         // if new id arrives, this delay is cancelled
        api.fetchUser(id)  // and this fetch is cancelled too
    }
    .collect { showUser(it) }

// vs regular map — never cancels
userIdFlow
    .map { id ->
        api.fetchUser(id)  // all run to completion, even stale ones
    }

// transformLatest — emit multiple + cancel
searchQueryFlow
    .transformLatest { query ->
        emit(SearchState.Loading)  // emit loading immediately
        delay(300)                 // debounce
        val results = api.search(query)
        emit(SearchState.Success(results))
    }
    .collect { updateUi(it) }
// New query cancels previous loading + debounce + search

// Equivalences:
// mapLatest       = flatMapLatest { flowOf(transform(it)) }
// transformLatest = flatMapLatest { flow { transform(it) } }

// Real-world: live price updates
selectedStockFlow
    .mapLatest { ticker ->
        priceApi.getPrice(ticker)  // cancelled if user selects a different stock
    }
    .collect { showPrice(it) }
- mapLatest: cancels in-progress transformation when new value arrives
- transformLatest: emit multiple values + cancel when new arrives
- Both are shortcuts for flatMapLatest with a simple inner flow
- Use when: the latest value makes previous processing obsolete
- Regular map: all values processed, even stale — use when ordering matters
transformLatest is the cleanest way to implement search with loading state: emit Loading immediately (so UI shows spinner), then debounce, then fetch. If user types again mid-fetch, the whole thing cancels and restarts — no stale Loading or results.
Jobs form a tree hierarchy — parent tracks children. joinAll waits for multiple jobs, cancelAndJoin cancels and waits for cleanup to complete.
// Job states: New → Active → Completing → Completed
//                      ↓
//                 Cancelling → Cancelled

val job1 = viewModelScope.launch { doWork() }
val job2 = viewModelScope.launch { doMoreWork() }

// joinAll — suspend until all jobs complete
joinAll(job1, job2)
println("Both done")

// cancelAndJoin — cancel + wait for cancellation to finish
// (important: coroutine does cleanup in finally block)
job1.cancelAndJoin()  // waits for finally{} to complete
println("Cancelled and cleaned up")

// vs just cancel() — doesn't wait for cleanup
job1.cancel()  // returns immediately, cleanup still running!

// Job hierarchy — parent waits for children
val parentJob = viewModelScope.launch {
    val child1 = launch { delay(1000); doWork() }
    val child2 = launch { delay(2000); doMore() }
    // Parent doesn't complete until both children complete
}
// parentJob.join() waits ~2 seconds

// Job inspection
job1.isActive     // true while running or waiting
job1.isCompleted  // true after success or cancel
job1.isCancelled  // true if cancelled
job1.children     // sequence of child jobs
- Job states: New → Active → Completing → Completed (or Cancelling → Cancelled)
- joinAll: suspend until all listed jobs complete
- cancelAndJoin: cancel + wait for finally blocks to run
- cancel(): returns immediately — cleanup still in progress
- Job.children: access child jobs for monitoring or cancellation
cancelAndJoin vs cancel is subtle but important. cancel() returns before the finally block runs. If you start a new operation right after cancel(), the cleanup and new operation race. cancelAndJoin() ensures cleanup is complete before proceeding — use it when order matters.
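That ordering guarantee can be demonstrated in a minimal, runnable sketch (plain kotlinx-coroutines on the JVM; the delays and strings are illustrative):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch {
        try {
            delay(10_000)  // long-running work
        } finally {
            // cleanup must survive cancellation, so run it in NonCancellable
            withContext(NonCancellable) { delay(100) }
            println("cleanup done")
        }
    }
    delay(50)  // let the job start

    job.cancelAndJoin()  // suspends until the finally block has finished
    println("safe to reuse resource")
    // With plain job.cancel(), this line could print BEFORE "cleanup done"
}
```

The finally block runs in a NonCancellable context because suspending calls inside an already-cancelled coroutine would otherwise throw immediately.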
A download manager combines all coroutine concepts: Jobs, Channels for progress, pause/resume with StateFlow, retry on failure, and Flow for observable state.
class DownloadManager @Inject constructor(
    private val api: DownloadApi
) {
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.IO)
    private val downloads = ConcurrentHashMap<String, DownloadJob>()

    fun download(url: String): Flow<DownloadState> {
        val stateFlow = MutableStateFlow<DownloadState>(DownloadState.Queued)
        val job = scope.launch {
            retryWithBackoff(maxRetries = 3) {  // retry helper (assumed defined elsewhere)
                stateFlow.value = DownloadState.Downloading(0)
                api.download(url).collect { (bytes, total) ->
                    stateFlow.value = DownloadState.Downloading(bytes * 100 / total)
                }
                stateFlow.value = DownloadState.Done
            }
        }
        downloads[url] = DownloadJob(job, stateFlow)
        return stateFlow.asStateFlow()
    }

    fun pause(url: String) {
        downloads[url]?.job?.cancel()
        downloads[url]?.state?.value = DownloadState.Paused
    }

    fun resume(url: String) { download(url) }

    fun cancelAll() {
        scope.coroutineContext[Job]?.cancelChildren()
    }
}

data class DownloadJob(val job: Job, val state: MutableStateFlow<DownloadState>)

sealed class DownloadState {
    object Queued : DownloadState()
    data class Downloading(val progress: Int) : DownloadState()
    object Paused : DownloadState()
    object Done : DownloadState()
}
- SupervisorJob scope: one download failing doesn't cancel others
- MutableStateFlow per download: observable progress for each download
- retryWithBackoff: automatic retry on network failure
- pause(): cancel the Job, update state to Paused
- cancelChildren(): cancel all downloads without destroying the scope
This design shows you understand Job lifecycle management, SupervisorJob for independent failures, and StateFlow for observable progress. The key insight: cancelChildren() cancels all active downloads without destroying the scope — so you can start new downloads afterward.
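retryWithBackoff is not part of kotlinx-coroutines; the example above assumes a hand-rolled helper. A minimal sketch (parameter names and defaults are illustrative):

```kotlin
import kotlinx.coroutines.*

// Hypothetical helper assumed by the DownloadManager example:
// retries the block with exponential backoff, rethrowing after maxRetries.
suspend fun <T> retryWithBackoff(
    maxRetries: Int = 3,
    initialDelayMs: Long = 500,
    factor: Double = 2.0,
    block: suspend () -> T
): T {
    var delayMs = initialDelayMs
    repeat(maxRetries - 1) {
        try {
            return block()
        } catch (e: CancellationException) {
            throw e  // never swallow cancellation
        } catch (e: Exception) {
            delay(delayMs)  // wait before retrying
            delayMs = (delayMs * factor).toLong()
        }
    }
    return block()  // last attempt — let the exception propagate
}

fun main() = runBlocking {
    var attempts = 0
    val result = retryWithBackoff(maxRetries = 3, initialDelayMs = 10) {
        attempts++
        if (attempts < 3) error("transient failure")
        "ok"
    }
    println("result=$result after $attempts attempts")
}
```

Rethrowing CancellationException matters: catching it would break pause(), which relies on cancel() propagating through the retry loop.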
Kotlin coroutines in 2024-25 received several improvements: structured concurrency enforcement, coroutineScope builders for parallel decomposition, improved Flow operators, and better integration with Compose via collectAsStateWithLifecycle. The most impactful production improvement is replacing LiveData with StateFlow.
// Kotlin 2.0-era coroutine patterns

// awaitAll() — parallel async with structured error propagation
val pages = coroutineScope {
    val deferreds = (1..3).map { async { api.getPage(it) } }
    deferreds.awaitAll()  // all run in parallel; one failure cancels the rest
}

// For differently-typed results, await each Deferred
val (user, orders) = coroutineScope {
    val u = async { api.getUser(id) }
    val o = async { api.getOrders(id) }
    u.await() to o.await()  // parallel; either failure cancels both
}

// toList() — terminal operator collecting all emissions
val items = flow.toList()

// collectAsStateWithLifecycle — lifecycle-safe Compose collection
val state by viewModel.uiState.collectAsStateWithLifecycle()
// Stops collecting when the lifecycle drops below Lifecycle.State.STARTED
- awaitAll(): launch multiple async blocks and await all results -- first failure cancels all siblings automatically
- collectAsStateWithLifecycle(): the correct way to collect Flow in Compose -- stops collecting when app backgrounds, preventing wasted work
- StateFlow replacing LiveData: StateFlow is coroutine-native, works in KMP, has a cleaner API -- LiveData is legacy
- Flow.catch + retry operators: declarative error handling and retry logic without try-catch blocks throughout the chain
- Kotlin 2.0 coroutine improvements: K2 compiler generates more efficient coroutine state machines -- smaller bytecode, faster execution
Mentioning K2 compiler improvements for coroutines shows you track the ecosystem. More impactful: mention that you use -Dkotlinx.coroutines.debug in development and Turbine for all Flow tests. These are the practical, production-quality habits that impress senior interviewers.
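The debug flag's effect is visible in thread names. A small sketch (the coroutine name is illustrative; run the JVM with -Dkotlinx.coroutines.debug to see the decorated name):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    launch(CoroutineName("loader")) {
        // With -Dkotlinx.coroutines.debug set, the thread name includes
        // the coroutine name and id, e.g. "main @loader#2";
        // without the flag it is just the plain thread name.
        println(Thread.currentThread().name)
    }
    Unit
}
```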
Code review questions test whether you can identify multiple coroutine anti-patterns — a favourite senior interview technique.
// ❌ PR CODE WITH MULTIPLE BUGS — can you find them?
class BadViewModel : ViewModel() {
    fun loadData() {
        // Bug 1:
        GlobalScope.launch(Dispatchers.IO) {
            val data = api.fetchData()
            // Bug 2:
            Thread.sleep(1000)
            // Bug 3:
            _state.value = data
        }
    }

    fun search(query: String) {
        // Bug 4:
        try {
            viewModelScope.launch { searchApi.search(query) }
        } catch (e: Exception) { }

        // Bug 5:
        viewModelScope.launch(Dispatchers.IO) {
            val result = async { processLocally(query) }.await()
            _results.value = result
        }
    }
}

// ✅ FIXED VERSION:
class GoodViewModel : ViewModel() {
    fun loadData() {
        viewModelScope.launch {  // Fix 1: viewModelScope
            val data = withContext(Dispatchers.IO) { api.fetchData() }
            delay(1000)          // Fix 2: delay, not Thread.sleep
            _state.value = data  // Fix 3: already on Main
        }
    }

    fun search(query: String) {
        viewModelScope.launch {  // Fix 4: try-catch inside the coroutine
            try { searchApi.search(query) } catch (e: Exception) { }
        }
        viewModelScope.launch {  // Fix 5: withContext instead of async+await
            val result = withContext(Dispatchers.Default) { processLocally(query) }
            _results.value = result
        }
    }
}
- Bug 1: GlobalScope — coroutine never cancelled, ViewModel cleared but coroutine lives
- Bug 2: Thread.sleep() — blocks IO thread, breaks cancellation
- Bug 3: updating UI state from a background thread — StateFlow's setter is thread-safe, but the same code crashes with LiveData.setValue(); keep state writes on Main for consistency
- Bug 4: try-catch outside launch — never catches exceptions from coroutines
- Bug 5: async{}.await() instead of withContext — unnecessary Deferred overhead
Code review questions test pattern recognition. The 5 bugs: GlobalScope (leak), Thread.sleep (blocking), state updates from a background thread (fragile), try-catch placement (never works), and async-await when withContext suffices (needless overhead). Finding all 5 and explaining the fixes demonstrates genuine expertise.
25 questions covering MVVM, Clean Architecture, MVI, multi-module apps, dependency injection, design patterns, and real-world system design for 2025-26 Android interviews.
MVVM (Model-View-ViewModel) separates UI from business logic. Each layer has a single, clear responsibility — making the code testable, maintainable, and rotation-safe.
// MODEL — data and business logic
data class User(val id: String, val name: String)

class UserRepository @Inject constructor(
    private val api: UserApi,
    private val db: UserDao
) {
    suspend fun getUser(id: String): User =
        db.get(id) ?: api.fetchUser(id).also { db.insert(it) }
}

// VIEWMODEL — exposes state, handles user intent
// Knows nothing about the View
@HiltViewModel
class UserViewModel @Inject constructor(
    private val repo: UserRepository
) : ViewModel() {
    private val _state = MutableStateFlow<UiState<User>>(UiState.Loading)
    val state = _state.asStateFlow()

    fun loadUser(id: String) {
        viewModelScope.launch {
            runCatching { repo.getUser(id) }
                .onSuccess { _state.value = UiState.Success(it) }
                .onFailure { _state.value = UiState.Error(it.message ?: "Unknown error") }
        }
    }
}

// VIEW — observes state, sends intents
@Composable
fun UserScreen(vm: UserViewModel = hiltViewModel()) {
    val state by vm.state.collectAsStateWithLifecycle()
    when (val s = state) {  // capture in a local val so smart casts work
        is UiState.Loading -> Spinner()
        is UiState.Success -> UserCard(s.data)
        is UiState.Error -> ErrorView(s.msg)
    }
}
- Model: data classes + repository — handles data operations and business rules
- ViewModel: exposes StateFlow, calls repository, survives rotation automatically
- View: observes state, sends user actions to ViewModel — zero business logic
- Golden rule: ViewModel never references View (Activity/Composable) — prevents memory leaks
- View never accesses Model directly — always through ViewModel
"ViewModel knows nothing about the View" is the most important principle. If your ViewModel imports android.widget.TextView or stores a Context, it's wrong. ViewModel exposes state — View renders it. That's the complete contract.
Clean Architecture adds a Domain layer with use cases between ViewModel and Repository, enforcing strict dependency rules. Inner layers know nothing about outer layers — Domain is pure Kotlin.
// Clean Architecture layers: Presentation → Domain ← Data
// (Domain knows nothing about Presentation or Data)

// DOMAIN LAYER — pure Kotlin, no Android, no frameworks
data class User(val id: String, val name: String)

interface UserRepository {  // interface in domain
    suspend fun getUser(id: String): User
}

class GetUserUseCase @Inject constructor(
    private val repo: UserRepository  // depends on the interface
) {
    suspend operator fun invoke(id: String): User {
        require(id.isNotBlank()) { "ID cannot be blank" }
        return repo.getUser(id)
    }
}

// DATA LAYER — implements domain interfaces
class UserRepositoryImpl @Inject constructor(
    private val api: UserApi,
    private val dao: UserDao
) : UserRepository {
    override suspend fun getUser(id: String): User =
        dao.get(id)?.toDomain() ?: api.getUser(id).toDomain()
}

// PRESENTATION LAYER — calls use cases, not repositories
class UserViewModel @Inject constructor(
    private val getUser: GetUserUseCase  // not repository!
) : ViewModel() {
    fun load(id: String) {
        viewModelScope.launch { _state.value = UiState.Success(getUser(id)) }
    }
}

// Plain MVVM: ViewModel → Repository
// Clean MVVM: ViewModel → UseCase → Repository Interface ← RepositoryImpl
- Dependency rule: arrows point inward — Domain never imports from Data or Presentation
- Use cases: single business operation — reusable across multiple ViewModels
- Repository interface in domain: allows swapping implementations (mock, real, cache)
- Domain is pure Kotlin: no Android plugin, no Room @Entity, no Context
- Testable in isolation: domain tests run on JVM, no emulator needed — milliseconds
Use cases are justified when: (1) business logic is complex, (2) multiple ViewModels share the same logic, or (3) you need pure-Kotlin testability. For simple CRUD apps they can be over-engineering — saying this shows pragmatic thinking.
MVI (Model-View-Intent) enforces strict unidirectional data flow with a single immutable state object. Every user action is an Intent that goes through a reducer — fully predictable and consistent.
// MVI: Intent → ViewModel (reducer) → State → View

// Intent — sealed class of ALL user actions
sealed class UserIntent {
    object Load : UserIntent()
    object Refresh : UserIntent()
    data class Search(val query: String) : UserIntent()
}

// State — ONE immutable data class for the entire screen
data class UserState(
    val isLoading: Boolean = false,
    val users: List<User> = emptyList(),
    val error: String? = null,
    val query: String = ""
)

// ViewModel — dispatch() receives intents, updates the single state
@HiltViewModel
class UserMviViewModel @Inject constructor(
    private val repo: UserRepository
) : ViewModel() {
    private val _state = MutableStateFlow(UserState())
    val state = _state.asStateFlow()

    fun dispatch(intent: UserIntent) {
        viewModelScope.launch {
            when (intent) {
                is UserIntent.Load -> loadUsers()
                is UserIntent.Refresh -> {
                    _state.update { it.copy(isLoading = true) }
                    loadUsers()
                }
                is UserIntent.Search -> _state.update { it.copy(query = intent.query) }
            }
        }
    }

    private suspend fun loadUsers() {
        _state.update { it.copy(isLoading = true, error = null) }
        runCatching { repo.getUsers() }
            .onSuccess { _state.update { s -> s.copy(isLoading = false, users = it) } }
            .onFailure { _state.update { s -> s.copy(isLoading = false, error = it.message) } }
    }
}

// View — only calls dispatch()
Button(onClick = { vm.dispatch(UserIntent.Refresh) }) { Text("Refresh") }
- Unidirectional: Intent → ViewModel → State → View — strict one-way flow
- Single state: one data class = impossible to have isLoading=true AND showError=true
- MVVM vs MVI: MVVM has multiple separate StateFlows; MVI has one unified state object
- Predictable: given the same intents in order, you always get the same state — easily testable
- copy(): immutable state updates — _state.update { it.copy(isLoading = true) }
MVI's key advantage: impossible state combinations. With separate MVVM StateFlows you can accidentally emit isLoading=true AND hasError=true simultaneously. With MVI's single data class, state is always self-consistent. This is the architectural argument for MVI on complex screens.
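The "same intents in order gives the same state" property is easiest to unit-test when the transition is factored into a pure reducer function. A simplified, runnable sketch (the counter types are illustrative, not from the example above):

```kotlin
// Pure reducer: same (state, intent) in → same state out, no side effects
data class CounterState(val count: Int = 0, val isLoading: Boolean = false)

sealed class CounterIntent {
    object Increment : CounterIntent()
    object StartLoad : CounterIntent()
}

fun reduce(state: CounterState, intent: CounterIntent): CounterState = when (intent) {
    CounterIntent.Increment -> state.copy(count = state.count + 1)
    CounterIntent.StartLoad -> state.copy(isLoading = true)
}

fun main() {
    val intents = listOf(CounterIntent.Increment, CounterIntent.StartLoad, CounterIntent.Increment)
    // Replaying the same intents always yields the same final state
    val finalState = intents.fold(CounterState()) { s, i -> reduce(s, i) }
    check(finalState == CounterState(count = 2, isLoading = true))
    println("deterministic: $finalState")
}
```

Because reduce has no dependencies, such tests run on the plain JVM with no coroutine test machinery.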
The Repository is the single source of truth for data. It abstracts all data sources (network, database, cache) from the ViewModel — the ViewModel never knows where data comes from.
// Repository — single source of truth
class ProductRepository @Inject constructor(
    private val api: ProductApi,
    private val dao: ProductDao
) {
    // Offline-first: DB is the source of truth
    fun getProducts(): Flow<List<Product>> = dao.observeAll()

    suspend fun refresh() {
        val fresh = withContext(Dispatchers.IO) { api.getProducts() }
        dao.insertAll(fresh)  // triggers the getProducts() Flow
    }

    // Cache-first with network fallback
    suspend fun getProduct(id: String): Product =
        dao.get(id) ?: api.getProduct(id).also { dao.insert(it) }
}

// ViewModel — agnostic about the data source
class ProductViewModel @Inject constructor(
    private val repo: ProductRepository
) : ViewModel() {
    val products = repo.getProducts()
        .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), emptyList())

    fun refresh() { viewModelScope.launch { repo.refresh() } }
    // No idea if data came from DB, API, or cache — doesn't need to know
}
- Single source of truth: all data flows through Repository — consistent state everywhere
- Data source agnostic: ViewModel calls getUser(), not api.getUser() or dao.getUser()
- Offline-first: Room as source of truth, network refreshes the DB, DB notifies UI
- Caching strategy: Repository decides cache-first, network-first, or both
- Testable: inject FakeProductRepository in ViewModel tests — no real network
Single source of truth is the most important concept. Without it: DB shows 5 items, UI cache shows 3, next API call shows 8 — all inconsistent. Repository ensures everyone reads from the same place — the database — which is always the authoritative copy.
Dependency Injection provides objects their dependencies from the outside instead of letting them create their own. Hilt is Google's recommended DI framework for Android — compile-time safe, built on Dagger.
// Without DI — tightly coupled, impossible to test
class UserViewModel {
    private val repo = UserRepositoryImpl(RetrofitApi(), RoomDatabase())
    // Can't swap to FakeRepository in tests
}

// Hilt setup
@HiltAndroidApp
class MyApp : Application()  // Step 1: annotate Application

@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {
    @Provides @Singleton  // Step 2: provide dependencies
    fun provideRetrofit(): Retrofit = Retrofit.Builder()
        .baseUrl("https://api.example.com").build()

    @Provides @Singleton
    fun provideApi(retrofit: Retrofit): UserApi =
        retrofit.create(UserApi::class.java)
}

// Step 3: inject via constructor
class UserRepository @Inject constructor(
    private val api: UserApi,
    private val dao: UserDao
)

@HiltViewModel
class UserViewModel @Inject constructor(
    private val repo: UserRepository  // Hilt auto-injects
) : ViewModel()

// Key Hilt annotations:
// @HiltAndroidApp    — Application class
// @AndroidEntryPoint — Activity, Fragment, Service
// @HiltViewModel     — ViewModel
// @Inject            — constructor / field injection
// @Module            — class that provides dependencies
// @InstallIn         — scope (SingletonComponent, ViewModelComponent...)
// @Provides          — method that creates a dependency
// @Binds             — bind interface to implementation
// @Singleton         — one instance app-wide
- DI: classes receive dependencies, not create them — swappable for testing
- Hilt: compile-time — missing binding fails the BUILD, not at runtime
- @InstallIn scopes: Singleton (app), ViewModel, Activity, Fragment
- @Binds vs @Provides: @Binds for interface→impl mapping, @Provides for complex creation
- Testing: inject FakeUserRepository — no network, fast, predictable
The #1 DI benefit: testability. "With Hilt I create UserViewModel(FakeUserRepository()) in unit tests. Without DI, the real Retrofit is baked in — I can't test without the network." Interviewers want this specific answer, not a definition of DI.
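That testability claim can be shown with a deliberately simplified, synchronous sketch (no Hilt, no Android ViewModel base class; all names are illustrative). The mechanism is just constructor injection:

```kotlin
// Illustrative interface and classes — the point is constructor injection
interface UserRepository { fun getUserName(id: String): String }

class GreetingViewModel(private val repo: UserRepository) {
    fun greeting(id: String) = "Hello, ${repo.getUserName(id)}!"
}

// Fake — no network, no Android, fully deterministic
class FakeUserRepository : UserRepository {
    override fun getUserName(id: String) = "TestUser-$id"
}

fun main() {
    val vm = GreetingViewModel(FakeUserRepository())  // inject the fake
    check(vm.greeting("42") == "Hello, TestUser-42!")
    println("test passed")
}
```

In a real project Hilt wires the real implementation in production code, while unit tests bypass Hilt entirely and construct the ViewModel with the fake by hand.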
Multi-module architecture splits the app into independent Gradle modules -- each with its own source set, dependencies, and build config. Each module compiles independently so a change in :feature:home doesn't force :feature:profile to recompile. The main trade-offs are faster incremental builds and enforced layer separation, versus the initial setup cost and increased Gradle complexity.
// Typical module graph (dependencies point downward only)
// :app → :feature:* → :core:domain
// :core:data → :core:domain (implements its interfaces), :core:data → :core:network

// settings.gradle.kts — declare all modules
include(":app", ":feature:home", ":feature:profile",
        ":core:domain", ":core:data", ":core:network", ":core:ui")

// :feature:home/build.gradle.kts — depends only on core layers
dependencies {
    implementation(project(":core:domain"))
    implementation(project(":core:ui"))
    // cannot depend on :feature:profile — enforced by module boundaries
}
- Faster incremental builds: change one module → only that module and its dependents recompile
- Enforced boundaries: :feature:home cannot import :feature:profile -- prevents accidental coupling
- Parallel compilation: independent modules build simultaneously with org.gradle.parallel=true
- Dynamic delivery: modules can become on-demand Dynamic Feature Modules -- reducing install size
- Trade-off: significant upfront setup cost, Gradle complexity -- worth it beyond ~5 developers or ~50k LOC
Frame the decision around team size and build pain. Solo developer on a small app → single module. 5+ devs, 3+ minute builds → multi-module. "I'd start with good package-by-feature structure in a single module and extract to multi-module when builds exceed 2 minutes."
SOLID principles guide object-oriented design toward maintainable, extensible, testable code. They're the foundation of all modern Android architecture patterns.
// S — Single Responsibility: one class, one reason to change
// ❌ ViewModel fetches, formats dates, AND validates
// ✅
class UserViewModel(private val repo: UserRepository) : ViewModel() { }
class DateFormatter @Inject constructor() {
    fun format(ts: Long): String = /* ... */ ""
}

// O — Open/Closed: open for extension, closed for modification
interface PaymentProcessor {
    suspend fun process(amount: Double): Result<Unit>
}
class StripeProcessor : PaymentProcessor { /* ... */ }
class RazorpayProcessor : PaymentProcessor { /* ... */ }
// Add new provider = new class, no existing code changed

// L — Liskov Substitution: subtypes replaceable for the base type
suspend fun checkout(processor: PaymentProcessor) {
    processor.process(100.0)  // works with Stripe OR Razorpay
}

// I — Interface Segregation: don't force unused method impls
// ❌ Fat interface
interface UserManager { fun getUser(); fun saveUser(); fun banUser() }
// ✅ Segregated
interface UserReader { fun getUser(): User }
interface UserAdmin { fun banUser(id: String) }

// D — Dependency Inversion: depend on abstractions
// ❌ class UserViewModel(private val repo: UserRepositoryImpl)  // concrete!
// ✅ class UserViewModel(private val repo: UserRepository)      // interface
- S: ViewModel manages state only — DateFormatter, Validator are separate classes
- O: new payment provider = new class implementing PaymentProcessor — no existing changes
- L: all PaymentProcessor implementations work wherever the interface is expected
- I: ProfileViewModel needs UserReader only — not the full fat UserManager interface
- D: ViewModel depends on UserRepository interface → can inject FakeUserRepository in tests
D (Dependency Inversion) is the most impactful for Android. ViewModel depending on a UserRepository interface — not UserRepositoryImpl — is what makes unit testing possible. This is why Hilt @Binds exists: to wire the interface to its implementation at runtime.
UiState represents everything the screen needs to render. Two approaches — sealed class for mutually exclusive states, data class for combinable states. Each suits different screen complexity.
// Approach 1: Sealed class — mutually exclusive states
sealed class UiState<out T> {
    object Loading : UiState<Nothing>()
    data class Success<T>(val data: T) : UiState<T>()
    data class Error(val msg: String) : UiState<Nothing>()
}
// Pros: exhaustive when{}, impossible to be in two states
// Cons: can't show data + refreshing spinner simultaneously

// Approach 2: Data class — combinable states
data class UserListState(
    val users: List<User> = emptyList(),
    val isLoading: Boolean = false,
    val isRefreshing: Boolean = false,  // OLD data visible + spinner
    val error: String? = null
)
// Pros: pull-to-refresh (users visible + isRefreshing=true)
// Cons: possible invalid combinations

// Update immutably
_state.update { it.copy(isRefreshing = true) }
val fresh = repo.refresh()
_state.update { it.copy(users = fresh, isRefreshing = false) }

// Hybrid — best of both worlds
data class FeedState(
    val content: ContentState = ContentState.Loading,
    val isRefreshing: Boolean = false
)
sealed class ContentState {
    object Loading : ContentState()
    data class Success(val data: List<Item>) : ContentState()
    data class Error(val msg: String) : ContentState()
}
- Sealed class: mutually exclusive — great for initial load, exhaustive when forces all cases
- Data class: combinable — supports pull-to-refresh showing old data while loading new
- Never model state with multiple separate StateFlows — leads to inconsistent combinations
- Hybrid: sealed for content state + boolean flags for overlays — best of both
- MVI naturally uses data class state — single object covers entire screen
Pull-to-refresh is the litmus test. With sealed class, when refreshing you can't show "old data + spinner" — you'd lose the data. Data class handles this: isRefreshing=true while users still holds the previous list. This is why complex screens prefer data class state.
Feature modules can't depend on each other, so navigation must go through a shared contract. The NavGraphBuilder extension pattern (used in Google's Now in Android) is the modern recommended approach.
// :core:navigation — shared routes (all features import this)
@Serializable object HomeRoute
@Serializable data class ProfileRoute(val userId: String)
@Serializable data class ProductRoute(val productId: String)

// :feature:home contributes its own graph via an extension function
fun NavGraphBuilder.homeGraph(navController: NavController) {
    composable<HomeRoute> {
        HomeScreen(
            onNavigateToProfile = { userId ->
                navController.navigate(ProfileRoute(userId))
            }
        )
    }
}

// :app assembles ALL feature graphs — it's the only module that knows all routes
@Composable
fun AppNavHost(navController: NavHostController) {
    NavHost(navController, startDestination = HomeRoute) {
        homeGraph(navController)     // from :feature:home
        profileGraph(navController)  // from :feature:profile
        productGraph(navController)  // from :feature:product
    }
}

// :feature:home only imports :core:navigation — NOT :feature:profile
// navController.navigate(ProfileRoute("123")) works without that import

// Alternative: deep links (fully decoupled but no compile-time safety)
navController.navigate("myapp://profile/123")
- Problem: :feature:home and :feature:profile can't depend on each other — circular dependency
- Shared routes: type-safe route objects in :core:navigation, imported by all features
- NavGraphBuilder extensions: each feature contributes its screens to the main NavHost in :app
- :app owns navigation: the only module with the full picture — wires everything together
- Deep links: fully decoupled alternative — useful for external app navigation
This is exactly the pattern in Google's "Now in Android" project. Each feature module defines a fun NavGraphBuilder.featureGraph() extension, and :app's AppNavHost calls all of them. Feature modules stay completely isolated — :app is the only module that "knows" about all features.
For a 5-person team building a new e-commerce app, start with a single-module clean architecture (Presentation → Domain → Data), then migrate to multi-module only when build times hurt or team size grows. Premature modularisation adds weeks of Gradle setup with no day-one benefit.
// Recommended starting structure — single module, clear package boundaries
com.example.shop
├── data/          // repositories, Room DAOs, Retrofit APIs, DTOs
├── domain/        // use cases, domain models, repository interfaces
├── presentation/  // ViewModels, Compose screens, UI state
└── di/            // Hilt modules

// Domain layer — pure Kotlin, no Android imports
class GetProductsUseCase @Inject constructor(
    private val repo: ProductRepository  // interface, not implementation
) {
    operator fun invoke() = repo.getProducts()
}

// When to add a module: build time > 2 min OR a clear reusable boundary exists
- Start single-module: clean package structure (data/domain/presentation/di) gives most of the architecture benefit with zero Gradle cost
- Domain layer must be pure Kotlin -- no Android framework imports, fully unit-testable without Robolectric
- Repository interfaces live in domain, implementations in data -- ViewModel never touches Room or Retrofit directly
- Extract :core:ui and :core:network as first modules when build time exceeds 2 minutes
- Feature modules come last -- only when the team is large enough to own independent feature delivery
Show judgment, not pattern-matching. "I'd use Clean Architecture because the checkout flow has complex multi-step business rules. For the product catalog — plain MVVM is enough." Knowing when each pattern applies, and when it's over-engineering, is what separates senior from mid-level thinking.
@Provides executes code to create a dependency. @Binds declares which implementation satisfies an interface — no code, just a mapping. @Binds is more efficient and preferred for interface bindings.
// @Provides — runs code to create the object
// Use when: third-party library (can't add @Inject), complex setup
@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {
    @Provides @Singleton
    fun provideRetrofit(): Retrofit = Retrofit.Builder()
        .baseUrl("https://api.example.com").build()

    @Provides @Singleton
    fun provideApi(retrofit: Retrofit): UserApi =  // uses another dep
        retrofit.create(UserApi::class.java)
}

// @Binds — maps interface to implementation, zero code overhead
// Use when: impl has an @Inject constructor — Hilt already knows how to create it
// Module MUST be abstract, method MUST be abstract
@Module
@InstallIn(SingletonComponent::class)
abstract class RepositoryModule {
    @Binds @Singleton
    abstract fun bindUserRepository(
        impl: UserRepositoryImpl  // has @Inject constructor
    ): UserRepository             // interface it satisfies

    @Binds
    abstract fun bindAnalytics(impl: FirebaseAnalytics): AnalyticsTracker
}

// Mix @Binds + @Provides in the same module using a companion object
@Module
@InstallIn(SingletonComponent::class)
abstract class AppModule {
    @Binds abstract fun bindRepo(impl: UserRepositoryImpl): UserRepository

    companion object {
        @Provides @Singleton
        fun provideDb(@ApplicationContext ctx: Context): AppDatabase =
            Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db").build()
    }
}
- @Provides: executes code — use for third-party libraries like Retrofit, Room, OkHttp
- @Binds: zero-overhead declaration — preferred for your own classes with @Inject constructor
- @Binds requires abstract class and abstract function — no body
- Companion object trick: mix @Binds and @Provides in the same abstract class module
- @Provides parameters: Hilt automatically provides them from the dependency graph
Rule of thumb: if you own the class and can add @Inject to its constructor, use @Binds. If it's a third-party class (Retrofit, OkHttp, Room) you can't annotate, use @Provides. @Binds generates leaner Dagger code — prefer it whenever possible.
The domain layer is the heart of Clean Architecture — pure Kotlin, no Android dependencies. It defines what the app does, independent of how data is stored or displayed.
// ✅ BELONGS in domain layer

// Entities — core business objects with business methods
data class Order(val id: String, val items: List<OrderItem>, val status: OrderStatus) {
    val total: Double get() = items.sumOf { it.price * it.quantity }
    fun canBeCancelled() = status == OrderStatus.PENDING
}

// Repository INTERFACES (implemented in the data layer)
interface OrderRepository {
    fun observeOrders(): Flow<List<Order>>
    suspend fun getOrder(id: String): Order
    suspend fun cancelOrder(id: String): Result<Unit>
}

// Use cases with business rules
class CancelOrderUseCase @Inject constructor(private val repo: OrderRepository) {
    suspend operator fun invoke(orderId: String): Result<Unit> {
        val order = repo.getOrder(orderId)
        check(order.canBeCancelled()) { "Cannot cancel a ${order.status} order" }
        return repo.cancelOrder(orderId)
    }
}

// ❌ Does NOT belong in domain layer
// import android.content.Context   ← Android
// import retrofit2.http.GET        ← Retrofit (data layer)
// import androidx.room.Entity      ← Room (data layer)
// import androidx.compose.*        ← UI (presentation)
// String formatting, date display  ← presentation

// domain/build.gradle.kts — pure Kotlin JVM module
// plugins { kotlin("jvm") } — NO Android plugin
// dependencies { kotlinx-coroutines-core only }
- Domain: pure Kotlin module — no Android Gradle plugin, no @Entity, no Context
- Entities: business objects with business methods (canBeCancelled, total calculation)
- Repository interfaces: defined in domain, implemented in data layer
- Use cases: orchestrate business operations — validate, coordinate, return result
- Domain purity test: "Can I run these tests with just the JVM?" — if yes, it's pure
The domain purity test: "Does this code import anything from Android, Room, Retrofit, or Compose?" If yes, it doesn't belong in domain. Pure domain tests run in milliseconds with no emulator — that's the practical payoff of keeping it clean.
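The practical payoff can be shown directly — a plain JUnit test against the `Order` entity above runs on the JVM with no emulator. A sketch, assuming JUnit 4 on the classpath and an `OrderItem(price, quantity)` shape (the `OrderItem` fields are an assumption, not shown in the source):

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Assert.assertTrue
import org.junit.Test

class OrderTest {
    // Pure JVM test — no Android runtime, runs in milliseconds
    @Test
    fun total_and_cancellation_rules() {
        val order = Order(
            id = "o1",
            items = listOf(OrderItem(price = 10.0, quantity = 2)), // assumed OrderItem shape
            status = OrderStatus.PENDING
        )
        assertEquals(20.0, order.total, 0.001)
        assertTrue(order.canBeCancelled())
    }
}
```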
MVP uses a Presenter that holds a direct View reference, which causes lifecycle issues and memory leaks. MVVM uses observable state instead: the ViewModel holds no View reference and survives configuration changes cleanly.
// MVP — Presenter holds View reference interface UserView { fun showUser(user: User) fun showLoading() fun showError(msg: String) } class UserPresenter(private var view: UserView?) { fun loadUser(id: String) { view?.showLoading() api.getUser(id) { view?.showUser(it) } // Problem: if Activity rotates, view reference is STALE } fun detach() { view = null } // must call in onDestroy — easy to forget! } // MVVM — ViewModel has ZERO View reference class UserViewModel : ViewModel() { val state = MutableStateFlow<UiState<User>>(UiState.Loading) // Rotation: new Activity subscribes to same StateFlow — gets current state // No detach() needed — nothing to clean up // ViewModel survives rotation in ViewModelStore } // Why MVVM won: // ✅ ViewModel retained by ViewModelStore — survives rotation // ✅ No View reference — no memory leaks // ✅ StateFlow/LiveData handles lifecycle naturally // ✅ Google Architecture Components built for MVVM // ✅ Compose is naturally MVVM: state flows down, events go up // MVP problems: // ❌ Presenter holds View → memory leak if detach() forgotten // ❌ No built-in rotation survival // ❌ View interface = boilerplate for every action
- MVP: bidirectional Presenter↔View reference — lifecycle management required
- MVVM: ViewModel exposes state, View subscribes — no direct reference
- ViewModel survives rotation: retained by ViewModelStore — data preserved
- No cleanup needed: ViewModel never holds View → no memory leaks
- Compose fits MVVM perfectly: state flows down, events (intents) flow up
The rotation argument is decisive: with MVP, the Activity is destroyed and the Presenter either leaks it or loses state. With MVVM, the ViewModel survives — the new Activity subscribes to the same StateFlow and immediately gets the current state. No boilerplate, no lifecycle bugs.
ViewModel tests verify that the right state is emitted for given inputs — using fake repositories, TestDispatcher for coroutines, and Turbine for Flow assertions.
// Fake repository — returns predictable data class FakeUserRepository : UserRepository { var userToReturn: User = User("1", "Alice") var shouldThrow = false override suspend fun getUser(id: String): User { if (shouldThrow) throw IOException("Network error") return userToReturn } } // ViewModel test @OptIn(ExperimentalCoroutinesApi::class) class UserViewModelTest { private val testDispatcher = StandardTestDispatcher() private val fakeRepo = FakeUserRepository() private lateinit var vm: UserViewModel @Before fun setUp() { Dispatchers.setMain(testDispatcher) vm = UserViewModel(fakeRepo) } @After fun tearDown() { Dispatchers.resetMain() } @Test fun loadUser_emitsSuccess() = runTest { vm.state.test { // Turbine assertEquals(UiState.Loading, awaitItem()) // initial vm.loadUser("1") val success = awaitItem() as UiState.Success assertEquals("Alice", success.data.name) } } @Test fun loadUser_onError_emitsError() = runTest { fakeRepo.shouldThrow = true vm.state.test { awaitItem() // skip Loading vm.loadUser("1") assertTrue(awaitItem() is UiState.Error) } } }
- Fakes over mocks: FakeUserRepository is readable, type-safe, not fragile to signature changes
- Dispatchers.setMain(TestDispatcher): makes viewModelScope use controllable test time
- Turbine: awaitItem() for each expected emission — clear assertions on Flow values
- Test both happy path AND error paths — shouldThrow = true for error simulation
- What NOT to test in ViewModel: repository internals (test separately), UI rendering (Compose tests)
Fakes vs mocks: Fakes are hand-written classes (shouldThrow = true). Mocks use Mockk/Mockito frameworks. Prefer fakes for repositories — they're more readable and don't break when method signatures change. Use mocks only for complex interaction verification.
This PR contains 5 distinct architectural violations. Identifying all of them systematically is what a senior code review looks like.
// ❌ BAD CODE — 5 violations class UserViewModel( private val textView: TextView // Bug 1: View reference in ViewModel! ) : ViewModel() { private val api = Retrofit.Builder() // Bug 2: creating Retrofit here! .baseUrl("https://api.example.com").build().create(UserApi::class.java) fun loadUser(id: String) { GlobalScope.launch { // Bug 3: GlobalScope — never cancelled! val user = api.getUser(id) // Bug 4: direct API call, no Repository textView.text = user.name // Bug 5: updating View from ViewModel! } } } // ✅ FIXED VERSION @HiltViewModel class UserViewModel @Inject constructor( // Fix 2: inject dependencies private val repo: UserRepository ) : ViewModel() { // Fix 1: no View reference private val _state = MutableStateFlow<UiState<User>>(UiState.Loading) val state = _state.asStateFlow() // Fix 5: expose state, not View fun loadUser(id: String) { viewModelScope.launch { // Fix 3: viewModelScope runCatching { repo.getUser(id) } // Fix 4: through Repository .onSuccess { _state.value = UiState.Success(it) } .onFailure { _state.value = UiState.Error(it.message ?: "Unknown error") } } } }
- Bug 1: TextView in ViewModel — Activity leaks in memory after rotation
- Bug 2: Creating Retrofit in ViewModel — hardcoded, not injectable, not shared
- Bug 3: GlobalScope — coroutine lives until process death, never tied to ViewModel
- Bug 4: Direct API call — bypasses caching, error handling, and Repository abstraction
- Bug 5: Updating View from background — threading violation + breaks MVVM separation
Count violations systematically in a code review: memory leak → threading → scope → abstraction → separation of concerns. Identifying all 5 — not just the obvious one — shows you have a mental architectural checklist. This is what senior PR reviews look like.
Build time improvement is a systematic process — measure first, take the free wins, then strategically modularize the highest-churn code.
// Step 1: Measure first — don't guess // ./gradlew assembleDebug --scan // Gradle build scan identifies which tasks dominate // Step 2: Free wins — no code changes needed // gradle.properties org.gradle.caching=true // reuse cached outputs org.gradle.parallel=true // build modules concurrently org.gradle.jvmargs=-Xmx4g // more heap for Gradle daemon kotlin.incremental=true // incremental Kotlin compilation // Expected improvement: 30-50% from these alone // Step 3: Modularize strategically // Extract :core:network first — changes rarely, no need to recompile often // Then :core:database, :core:ui // Then feature modules — highest churn, biggest incremental gains // Step 4: Use implementation() not api() // api(): exposes dep to consumers → cascade recompile on change // implementation(): keeps dep private → only this module recompiles dependencies { implementation(project(":core:network")) // ✅ private // api(project(":core:network")) ← leaks to consumers! } // Retrofit version bump + api() = all 10 modules recompile // Retrofit version bump + implementation() = only :core:network // Step 5: Convention plugins — consistent, maintainable modules // :build-logic defines reusable Gradle plugins // Each feature module: just 2-3 lines of plugins { }
- Measure first: Gradle build scan shows the bottleneck — don't optimize blindly
- Free wins: org.gradle.caching + parallel + incremental = 30-50% with zero code changes
- Extract stable code first: :core:network changes rarely → compile once, cache forever
- implementation() over api(): prevents recompilation cascade across all consumer modules
- Convention plugins: consistent build config → predictable incremental compilation
"Measure first with ./gradlew --scan" immediately shows engineering discipline. The biggest mistake: spending weeks on modularization before enabling caching and parallel builds — those are 2-minute config changes that give 40% improvement for free.
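What "just 2-3 lines of plugins { }" means in practice can be sketched as a convention plugin living in :build-logic — a minimal illustration, where the plugin class name and the registered id ("myapp.android.feature") are assumptions for the example:

```kotlin
// build-logic/convention/src/main/kotlin/AndroidFeatureConventionPlugin.kt
import org.gradle.api.Plugin
import org.gradle.api.Project

// Hypothetical convention plugin, registered under the id "myapp.android.feature"
class AndroidFeatureConventionPlugin : Plugin<Project> {
    override fun apply(target: Project) {
        with(target.pluginManager) {
            // Apply the same base plugins to every feature module
            apply("com.android.library")
            apply("org.jetbrains.kotlin.android")
            apply("com.google.dagger.hilt.android")
        }
        // Shared compileSdk, minSdk, lint, and test options would be configured here once
    }
}
```

Each feature module's build file then shrinks to `plugins { id("myapp.android.feature") }` plus its own dependencies, so build configuration stays consistent across all modules.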
Design patterns are proven solutions to recurring problems. Connecting theory to real Android APIs — not just naming patterns — is what interviewers want to hear.
// OBSERVER — reactive data flow between layers // StateFlow, Flow, LiveData ARE the Observer pattern val state = MutableStateFlow(UiState.Loading) state.collect { render(it) } // multiple observers subscribe independently // FACTORY — create objects without specifying exact class // ViewModelProvider is a Factory interface AnalyticsTracker { fun track(event: String) } object AnalyticsFactory { fun create(debug: Boolean): AnalyticsTracker = if (debug) LogAnalyticsTracker() else FirebaseAnalyticsTracker() } // BUILDER — construct complex objects step by step val client = OkHttpClient.Builder() .connectTimeout(30, TimeUnit.SECONDS) .addInterceptor(AuthInterceptor()) .addInterceptor(LoggingInterceptor()) .build() // Also: Retrofit.Builder, AlertDialog.Builder, Room.Builder // STRATEGY — interchangeable algorithms interface SortStrategy { fun sort(items: List<Product>): List<Product> } class PriceSortStrategy : SortStrategy { override fun sort(items: List<Product>) = items.sortedBy { it.price } } class NameSortStrategy : SortStrategy { override fun sort(items: List<Product>) = items.sortedBy { it.name } } // PaymentProcessors (Stripe, Razorpay, UPI) are also Strategy pattern // SINGLETON — managed by Hilt @Singleton scope // Room database, OkHttp client — created once, shared everywhere
- Observer: StateFlow/Flow/LiveData — reactive data between layers
- Factory: ViewModelProvider, Hilt modules — create objects with their dependencies
- Builder: OkHttpClient.Builder, Retrofit.Builder — step-by-step complex construction
- Strategy: sort/filter algorithms, payment processors — swap behavior at runtime
- Singleton: Room database, OkHttp — one instance via Hilt @Singleton
Always connect patterns to real APIs. "StateFlow IS the Observer pattern." "ViewModelProvider IS a Factory." "Hilt @Singleton IS the Singleton pattern — but correctly scoped, not a static instance." Connecting theory to real code shows depth that just naming patterns doesn't.
The data layer implements domain interfaces and handles all data concerns. DTOs match the network/database shape, while domain entities represent business concepts; keep them separate and always map between them explicitly.
// DTO — matches network JSON exactly @Serializable data class UserDto( val user_id: String, // snake_case from API val full_name: String, val created_at: Long, // epoch timestamp val is_premium: Boolean ) // Room Entity — matches database schema @Entity(tableName = "users") data class UserEntity( @PrimaryKey val id: String, val name: String, val createdAt: Long, val isPremium: Boolean ) // Domain Entity — clean, business-focused data class User( val id: String, val name: String, val createdAt: Instant, // proper type, not Long val tier: UserTier // richer type, not Boolean ) // Mappers — extension functions in data layer fun UserDto.toDomain() = User( id = user_id, name = full_name, createdAt = Instant.ofEpochSecond(created_at), tier = if (is_premium) UserTier.PREMIUM else UserTier.FREE ) fun UserEntity.toDomain() = User(id, name, Instant.ofEpochSecond(createdAt), if (isPremium) UserTier.PREMIUM else UserTier.FREE) fun User.toEntity() = UserEntity(id, name, createdAt.epochSecond, tier == UserTier.PREMIUM)
- DTO: shaped for network JSON — snake_case, primitive types, @Serializable
- Room Entity: shaped for DB schema — @PrimaryKey, column names, indexes
- Domain Entity: clean business object — Instant instead of Long, enum instead of Boolean
- Mappers in data layer: domain never knows about DTO shapes or Room annotations
- Why separate: API renames user_id to id → update DTO and mapper only, not all 200 usages
The maintenance argument for mapping: APIs change. If you use UserDto everywhere and the API renames a field, you update 200 files. With domain entities, you update one mapper. Domain entities are stable; DTOs are volatile. The mapper is the isolation buffer.
Offline-first architecture treats the local database as the single source of truth. The UI always reads from the DB — the network just refreshes it in the background.
// Room DAO — reactive, auto-emits on any change @Dao interface ProductDao { @Query("SELECT * FROM products") fun observeAll(): Flow<List<ProductEntity>> // works offline immediately @Insert(onConflict = OnConflictStrategy.REPLACE) suspend fun insertAll(products: List<ProductEntity>) } // Repository — observe DB, refresh from network class ProductRepository @Inject constructor( private val dao: ProductDao, private val api: ProductApi ) { fun getProducts(): Flow<List<Product>> = dao.observeAll().map { it.map { e -> e.toDomain() } } suspend fun refresh(): Result<Unit> = runCatching { val fresh = withContext(Dispatchers.IO) { api.getProducts() } dao.insertAll(fresh.map { it.toEntity() }) // triggers observeAll Flow! } } // ViewModel — load DB immediately, refresh in background class ProductViewModel @Inject constructor(private val repo: ProductRepository) : ViewModel() { val products = repo.getProducts() .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), emptyList()) private val _isOffline = MutableStateFlow(false) val isOffline = _isOffline.asStateFlow() init { viewModelScope.launch { repo.refresh().onFailure { _isOffline.value = true } } } } // UI shows "Offline — showing cached data" banner when isOffline=true // Works without network: DB always has something to show
- DB as source of truth: UI reads from Room — works immediately, even offline
- Network as refresher: background refresh updates DB, Flow auto-notifies UI
- Silent failure: show "cached data" banner on network error, not a full error screen
- WorkManager: schedule background sync so changes made while offline upload once the device reconnects
- Room Flow + StateFlow: stateIn caches the latest value for new collectors (rotation-safe)
The mental model: "The UI asks the database, not the network." The network populates the database in the background. When offline, the user still sees stale data — show a "Last updated X minutes ago" indicator. This is the same pattern Google Maps and Instagram use.
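The WorkManager piece mentioned above could be sketched like this — a periodic sync worker that reuses the `ProductRepository` from the example, assuming Hilt's WorkManager integration is set up (the worker name, interval, and unique-work tag are illustrative):

```kotlin
@HiltWorker
class SyncWorker @AssistedInject constructor(
    @Assisted context: Context,
    @Assisted params: WorkerParameters,
    private val repo: ProductRepository
) : CoroutineWorker(context, params) {
    override suspend fun doWork(): Result =
        repo.refresh().fold(
            onSuccess = { Result.success() },
            onFailure = { Result.retry() } // back off; WorkManager re-runs when constraints are met
        )
}

// Enqueue once, e.g. in Application.onCreate() — only runs when a network is available
val request = PeriodicWorkRequestBuilder<SyncWorker>(6, TimeUnit.HOURS)
    .setConstraints(
        Constraints.Builder().setRequiredNetworkType(NetworkType.CONNECTED).build()
    )
    .build()
WorkManager.getInstance(context)
    .enqueueUniquePeriodicWork("product_sync", ExistingPeriodicWorkPolicy.KEEP, request)
```

The NetworkType.CONNECTED constraint means the refresh simply waits out offline periods, while the Room Flow keeps the UI serving cached data in the meantime.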
Version Catalog centralizes all dependency declarations in one file. In multi-module apps, it prevents version conflicts and makes upgrades a single-line change across all modules.
# gradle/libs.versions.toml [versions] kotlin = "2.0.0" compose-bom = "2024.09.00" hilt = "2.51" retrofit = "2.11.0" room = "2.6.1" [libraries] hilt-android = { module = "com.google.dagger:hilt-android", version.ref = "hilt" } hilt-compiler = { module = "com.google.dagger:hilt-compiler", version.ref = "hilt" } retrofit-core = { module = "com.squareup.retrofit2:retrofit", version.ref = "retrofit" } room-runtime = { module = "androidx.room:room-runtime", version.ref = "room" } room-ktx = { module = "androidx.room:room-ktx", version.ref = "room" } [bundles] room = ["room-runtime", "room-ktx"] # group related libs [plugins] hilt = { id = "com.google.dagger.hilt.android", version.ref = "hilt" } kotlin-ksp = { id = "com.google.devtools.ksp", version = "2.0.0-1.0.21" } // :feature:home/build.gradle.kts — clean, type-safe plugins { id("myapp.android.feature") // convention plugin } dependencies { implementation(libs.hilt.android) // type-safe access ksp(libs.hilt.compiler) implementation(libs.bundles.room) // both room libs in one line } // Upgrade Hilt: change ONE line in toml → all 10 modules updated
- Single source: one file defines all versions — no version conflicts across modules
- Type-safe access: libs.hilt.android in IDE gives autocomplete — no string typos
- Bundles: group related libs — libs.bundles.room adds room-runtime + room-ktx together
- Plugins catalog: plugin versions centralized too — kotlin, hilt, ksp all in one place
- Upgrade impact: Retrofit 2.11.0 → 2.11.1 = one line change → all modules updated
Without Version Catalog, "com.squareup.retrofit2:retrofit:2.11.0" appears in 10 build files. One module accidentally has "2.9.0" — classpath conflict, cryptic build failure. Version Catalog makes this physically impossible. Every multi-module project should start with it.
The testing pyramid guides how many tests to write at each level. Clean Architecture makes each layer independently testable — unit tests at the bottom, UI tests at the top.
// UNIT TESTS — JVM only, milliseconds, 70% of test suite // Test: domain entities, use cases, ViewModels class OrderTotalTest { @Test fun total_sumsItemPrices() { val order = Order(items = listOf(Item(price = 10.0), Item(price = 20.0))) assertEquals(30.0, order.total, 0.001) } } // No Android runtime. Runs in <1ms. Domain layer pure Kotlin → ideal. // INTEGRATION TESTS — Android runtime, seconds, 20% of suite // Test: Repository with real Room, ViewModel with real UseCase @RunWith(AndroidJUnit4::class) class UserRepositoryTest { @Before fun setUp() { db = Room.inMemoryDatabaseBuilder(ctx, AppDatabase::class.java).build() } @Test fun insertUser_thenQuery_returnsUser() = runTest { dao.insert(UserEntity("1", "Alice")) assertEquals("Alice", dao.get("1")?.name) } } // Needs emulator/Robolectric. Runs in seconds. // UI TESTS — full app, 10-30 seconds, 10% of suite // Test: user journeys, screen interactions @HiltAndroidTest class UserScreenTest { @get:Rule val rule = createAndroidComposeRule<MainActivity>() @Test fun userScreen_showsName() { rule.onNodeWithText("Alice").assertIsDisplayed() rule.onNodeWithContentDescription("Refresh").performClick() rule.onNodeWithText("Refreshing...").assertIsDisplayed() } } // Needs emulator. Runs 10-30 seconds. Catches UI regression bugs.
- Unit (70%): JVM only, milliseconds — domain entities, use cases, ViewModels with fakes
- Integration (20%): Android runtime, seconds — Repository+Room, UseCase+Repository
- UI (10%): full emulator, 10-30s — user journeys, complete screen interactions
- Pyramid ratio: more unit tests (cheap) than UI tests (expensive and slow)
- Clean Architecture enabler: each layer independently testable — unit tests never need emulator
Specific libraries win interviews: "JUnit4 + Fakes for unit tests, Room inMemoryDatabase for integration, Hilt testing + createAndroidComposeRule for UI tests." Knowing the specific tools — not just the concept — shows you've actually written these tests.
Hilt is compile-time, annotation-based — errors fail the build. Koin is a runtime DSL — errors crash at runtime. Both provide DI but differ fundamentally in safety and setup complexity.
// HILT — compile-time safety @Module @InstallIn(SingletonComponent::class) object AppModule { @Provides @Singleton fun provideRetrofit(): Retrofit = Retrofit.Builder().baseUrl("https://api.example.com").build() } @HiltViewModel class UserViewModel @Inject constructor(val repo: UserRepository) : ViewModel() // Missing binding → BUILD FAILS with clear error message // Zero performance cost at runtime // KOIN — runtime DSL val appModule = module { single { Retrofit.Builder().baseUrl("https://api.example.com").build() } factory { UserRepositoryImpl(get()) } viewModel { UserViewModel(get()) } } class MyApp : Application() { override fun onCreate() { super.onCreate(); startKoin { modules(appModule) } } } // Missing binding → RUNTIME CRASH on first injection // Small startup overhead for graph validation // Decision guide: // Hilt: ✅ Android apps, large teams, safety-critical, Google recommended // Koin: ✅ KMM (Kotlin Multiplatform), quick prototypes, simpler setup // Hilt is the 2025 standard for Android-only projects // Koin's multiplatform support makes it win for KMM projects
- Hilt: compile-time — missing binding fails build, zero runtime cost, more setup
- Koin: runtime — simpler DSL, errors appear when code runs, slight startup cost
- Hilt: generates Dagger components at compile time — no reflection at runtime
- Koin: validates graph at startup (or lazily) — faster to set up initially
- 2025 recommendation: Hilt for Android apps, Koin for Kotlin Multiplatform
"Hilt fails at build time if I forget to provide a dependency. Koin fails at runtime — in front of users." This safety argument is what interviewers want. For production Android apps with a team, build-time safety over developer convenience every time.
A 500-line ViewModel is a code smell — it has too many responsibilities. Delegate to use cases, state holders, and helper classes — each with a single, clear purpose.
// Fat ViewModel — 500 lines doing everything class CheckoutViewModel : ViewModel() { // 100 lines: address validation and selection // 100 lines: payment card validation // 150 lines: order placement + retry logic // 100 lines: promo code calculation // 50 lines: analytics tracking } // Refactor 1: Extract use cases — business logic out of VM class ValidatePaymentUseCase @Inject constructor() { operator fun invoke(card: CreditCard): ValidationResult { /* ... */ } } class PlaceOrderUseCase @Inject constructor(private val orderRepo: OrderRepository, ...) { suspend operator fun invoke(order: Order): Result<OrderId> { /* ... */ } } class ApplyPromoUseCase @Inject constructor() { suspend operator fun invoke(code: String): Discount? { /* ... */ } } // Refactor 2: StateHolder for UI-level state slices class AddressStateHolder @Inject constructor(private val repo: AddressRepository) { val selected = MutableStateFlow<Address?>(null) fun select(address: Address) { selected.value = address } } // Slim ViewModel — now ~80 lines, orchestrates only @HiltViewModel class CheckoutViewModel @Inject constructor( val addressHolder: AddressStateHolder, private val validatePayment: ValidatePaymentUseCase, private val placeOrder: PlaceOrderUseCase, private val applyPromo: ApplyPromoUseCase ) : ViewModel() { fun checkout(card: CreditCard) { val validation = validatePayment(card) if (!validation.isValid) { showError(validation.error); return } viewModelScope.launch { placeOrder(buildOrder()) } } }
- Use cases: extract each business operation — ValidatePayment, PlaceOrder, ApplyPromo
- StateHolder: non-ViewModel class managing a UI state slice — AddressStateHolder for address logic
- ViewModel becomes orchestrator: calls specialists, doesn't implement everything itself
- 500 → 80 lines: ViewModel coordinates, delegates implementation details
- Each extracted class independently testable — higher overall test coverage
StateHolders are underused. An AddressStateHolder manages address selection, validation, and saving — completely independently of the ViewModel. The ViewModel just exposes addressHolder.selected. This keeps ViewModel slim without pushing logic into the View layer.
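A fuller sketch of the AddressStateHolder idea — selection, validation, and saving live entirely outside the ViewModel. The validation rule and the `saveAddress` repository method are illustrative assumptions:

```kotlin
// Hypothetical state holder — owns the address slice of checkout state
class AddressStateHolder @Inject constructor(
    private val repo: AddressRepository
) {
    private val _selected = MutableStateFlow<Address?>(null)
    val selected = _selected.asStateFlow()

    fun select(address: Address) {
        _selected.value = address
    }

    // Illustrative validation rule — real checks would be richer
    fun isValid(): Boolean =
        _selected.value?.postalCode?.isNotBlank() == true

    suspend fun save() {
        // saveAddress() is an assumed repository method for this sketch
        _selected.value?.let { repo.saveAddress(it) }
    }
}
```

The ViewModel injects it and exposes `addressHolder.selected` directly, so address logic is testable on its own and never inflates the ViewModel.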
When a screen has many concurrent inputs (WebSocket, user actions, polling) and complex state transitions, MVI's single state object is the right choice — it prevents impossible UI combinations.
// MVI — justified by complexity of this screen data class OrderTrackingState( val order: Order? = null, val driverLocation: LatLng? = null, val eta: String? = null, val isLoading: Boolean = true, val isCancelling: Boolean = false, val error: String? = null ) // Impossible to have: isCancelling=true AND order=null AND error != null // (data class copy() ensures consistent updates) sealed class OrderIntent { data class Load(val id: String) : OrderIntent() object Cancel : OrderIntent() data class DriverMoved(val loc: LatLng) : OrderIntent() // from WebSocket data class StatusChanged(val status: OrderStatus) : OrderIntent() } @HiltViewModel class OrderTrackingViewModel @Inject constructor( private val orderRepo: OrderRepository, private val wsService: WebSocketService ) : ViewModel() { private val _state = MutableStateFlow(OrderTrackingState()) val state = _state.asStateFlow() fun dispatch(intent: OrderIntent) { viewModelScope.launch { when (intent) { is OrderIntent.Load -> loadOrder(intent.id) is OrderIntent.Cancel -> cancelOrder() is OrderIntent.DriverMoved -> _state.update { it.copy(driverLocation = intent.loc) } is OrderIntent.StatusChanged -> handleStatusChange(intent.status) } } } }
- MVI wins: WebSocket events + user actions + polling all fire as Intents — single processing point
- Single state: impossible to show driver moving AND order cancelled AND error simultaneously
- MVVM problem: separate StateFlows for driverLocation, eta, order, error can combine inconsistently
- Testable: inject intents in order, assert exact final state — deterministic replay
- Debugging: log every Intent → can replay any bug scenario exactly
Frame it as risk: "With MVVM and 6 separate StateFlows for this screen, I'd need to carefully combine them — prone to race conditions. With MVI's single state, consistency is guaranteed by the data class copy() mechanism. For a screen this complex, MVI's overhead is worth the safety guarantee."
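The deterministic-replay point can be demonstrated as a test: dispatch a recorded intent sequence in order, then assert the exact final state. A sketch, assuming fake dependencies (`FakeOrderRepository`, `FakeWebSocketService` are hypothetical) and the coroutine test setup from the testing section:

```kotlin
@OptIn(ExperimentalCoroutinesApi::class)
class OrderTrackingViewModelTest {
    @Test
    fun driverMoved_afterLoad_updatesLocationOnly() = runTest {
        Dispatchers.setMain(StandardTestDispatcher(testScheduler))
        // Fakes are assumed — predictable data, no network or socket
        val vm = OrderTrackingViewModel(FakeOrderRepository(), FakeWebSocketService())

        // Replay the intent sequence deterministically
        vm.dispatch(OrderIntent.Load("order-1"))
        vm.dispatch(OrderIntent.DriverMoved(LatLng(12.97, 77.59)))
        advanceUntilIdle() // run all pending coroutines to completion

        // One state object → assert the whole screen state at once
        val state = vm.state.value
        assertEquals(LatLng(12.97, 77.59), state.driverLocation)
        assertFalse(state.isCancelling) // fields of a single state can't disagree

        Dispatchers.resetMain()
    }
}
```

Logging every dispatched intent in production gives you the same replay sequence for any bug report.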
A clear end-to-end data flow explanation demonstrates your architectural understanding. Being able to teach it to a junior shows mastery — not just memorization.
// Full journey: Button click → Server → UI update // Step 1: UI triggers action (Compose) Button(onClick = { vm.loadUser("123") }) { Text("Load Profile") } // Step 2: ViewModel calls use case class UserViewModel(private val getUser: GetUserUseCase) : ViewModel() { fun loadUser(id: String) { _state.value = UiState.Loading // immediate feedback viewModelScope.launch { runCatching { getUser(id) } .fold({ _state.value = UiState.Success(it) }, { _state.value = UiState.Error(it.message ?: "Unknown error") }) } } } // Step 3: Use case validates, calls repository interface class GetUserUseCase(private val repo: UserRepository) { suspend operator fun invoke(id: String): User { require(id.isNotBlank()) { "ID cannot be blank" } return repo.getUser(id) } } // Step 4: Repository decides cache vs network class UserRepositoryImpl(private val api: UserApi, private val dao: UserDao) { override suspend fun getUser(id: String): User { dao.get(id)?.toDomain()?.let { return it } // 4a: cache hit val dto = withContext(Dispatchers.IO) { api.getUser(id) } // 4b: network dao.insert(dto.toEntity()) // 4c: save return dto.toDomain() // 4d: return } } // Step 5: StateFlow update → Compose recomposes val state by vm.state.collectAsStateWithLifecycle() when (state) { is UiState.Loading -> CircularProgressIndicator() is UiState.Success -> UserCard(state.data) is UiState.Error -> ErrorMessage(state.msg) }
- Step 1→2: UI triggers ViewModel function — View never calls Repository directly
- Step 2→3: ViewModel calls UseCase — business validation happens here
- Step 3→4: UseCase calls Repository interface — domain never knows the implementation
- Step 4: Repository decides cache vs network — maps DTO to domain entity
- Step 5: StateFlow update triggers Compose recomposition — UI re-renders automatically
Draw this as a flow diagram while answering: Button → dispatch() → UseCase.invoke() → Repository.getUser() → API/DB → toDomain() → StateFlow update → Compose recomposition. Each arrow has direction and each layer has one job. This visual walkthrough is exactly what senior architects communicate.
Hilt scopes tie dependency lifetime to a component's lifecycle. Choosing the wrong scope causes either memory leaks (too long) or redundant object creation (too short).
// Scope lifetime hierarchy (longest → shortest): // @Singleton → lives as long as Application // @ActivityRetainedScoped → survives rotation, dies when Activity finishes // @ActivityScoped → dies on every rotation // @ViewModelScoped → dies when ViewModel is cleared // @FragmentScoped → dies when Fragment is detached // @ViewScoped → dies when View is destroyed // ✅ Correct @Singleton — no Android Context stored @Module @InstallIn(SingletonComponent::class) object NetworkModule { @Provides @Singleton fun provideOkHttp(): OkHttpClient = OkHttpClient.Builder().build() @Provides @Singleton fun provideDb(@ApplicationContext ctx: Context): AppDatabase = Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db").build() // ✅ @ApplicationContext — lives as long as app, no leak } // ❌ MEMORY LEAK — @Singleton holding Activity Context @Provides @Singleton fun provideAnalytics(activity: Activity): Analytics = Analytics(activity) // Activity LEAKED — Singleton outlives Activity! // Fix: use @ActivityRetainedScoped or @ApplicationContext // ❌ LEAK — storing Activity in @Singleton service @Singleton class NavigationService @Inject constructor() { private var activity: Activity? = null // LEAK after rotation! fun setActivity(a: Activity) { activity = a } } // @ViewModelScoped — each ViewModel gets its own instance @Module @InstallIn(ViewModelComponent::class) abstract class ViewModelModule { @Binds @ViewModelScoped abstract fun bindRepo(impl: UserRepositoryImpl): UserRepository } // Each ViewModel gets its own UserRepositoryImpl instance // Two ViewModels → two separate repository instances
- @Singleton: lives for app lifetime — never store Activity, Fragment, or View references
- @ApplicationContext: the safe Context for @Singleton — lives as long as the app
- @ActivityRetainedScoped: survives rotation like ViewModel — correct for per-Activity singletons
- @ViewModelScoped: tied to ViewModel — each ViewModel gets its own instance
- Memory leak pattern: @Singleton holding Activity Context — Activity can't be GC'd
The @Singleton + Activity context leak is the most common Hilt mistake. Always use @ApplicationContext in Singleton-scoped dependencies. If you need Activity context, your dependency should be @ActivityScoped or @ActivityRetainedScoped — not @Singleton.
Cross-cutting concerns like analytics should not be scattered across 20 screens. Use a navigation observer, the Decorator pattern, or a base ViewModel — add tracking once, get it everywhere.
// Pattern 1: Navigation observer — tracks ALL screen views in ONE place @Composable fun AppNavHost(navController: NavHostController) { val analytics = LocalAnalytics.current val currentEntry by navController.currentBackStackEntryAsState() LaunchedEffect(currentEntry) { // fires on every navigation currentEntry?.destination?.route?.let { route -> analytics.trackScreen(route) // ONE place for all screen tracking } } NavHost(navController, HomeRoute) { /* feature graphs */ } } // Pattern 2: Decorator — wraps repository to add tracking class AnalyticsUserRepository @Inject constructor( private val delegate: UserRepositoryImpl, // inject the concrete impl — avoids a binding cycle with the interface private val analytics: Analytics ) : UserRepository by delegate { // Kotlin delegation — delegates everything override suspend fun getUser(id: String): User { analytics.track("get_user", mapOf("id" to id)) // added here return delegate.getUser(id) } // All other methods delegated automatically — no boilerplate } // Bind decorator via Hilt @Module @InstallIn(SingletonComponent::class) abstract class RepoModule { @Binds @Singleton abstract fun bindRepo(impl: AnalyticsUserRepository): UserRepository // AnalyticsUserRepository wraps UserRepositoryImpl transparently } // Pattern 3: Base ViewModel — shared tracking across all ViewModels abstract class TrackedViewModel( private val analytics: Analytics, protected val screenName: String ) : ViewModel() { init { analytics.trackScreen(screenName) } protected fun track(event: String, params: Map<String, Any> = emptyMap()) { analytics.track(event, params) } } class HomeViewModel @Inject constructor(analytics: Analytics) : TrackedViewModel(analytics, "home_screen") { fun onProductClick(id: String) { track("product_click", mapOf("product_id" to id)) } }
- Navigation observer: one LaunchedEffect logs all screen views — zero code in each screen
- Decorator + Kotlin delegation: wrap repository, track per-method calls, no boilerplate
- Base ViewModel: shared tracking logic inherited automatically by all ViewModels
- Never scatter tracking in 20 composables — maintenance nightmare when tracking changes
- Hilt @Binds swaps: bind AnalyticsUserRepository where UserRepository is needed
The navigation observer is the cleanest answer for screen tracking — one LaunchedEffect, zero changes to screens. For business events, the Decorator pattern is best — wrap the use case or repository, add tracking, Hilt wires it transparently. No screen code changes at all.
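The Kotlin `by` delegation that powers the Decorator pattern can be demonstrated in plain JVM Kotlin — a minimal sketch with illustrative names (`Greeter`, `TrackedGreeter`), not the article's actual repository types:

```kotlin
// A small interface standing in for the repository
interface Greeter { fun greet(name: String): String }

class PlainGreeter : Greeter {
    override fun greet(name: String) = "Hi $name"
}

// Decorator: overrides one method to add tracking, delegates the rest
class TrackedGreeter(
    private val delegate: Greeter,
    private val log: MutableList<String>   // stands in for Analytics
) : Greeter by delegate {
    override fun greet(name: String): String {
        log += "greet:$name"               // cross-cutting concern added once
        return delegate.greet(name)
    }
}
```

Any method not overridden goes straight to the delegate, which is why adding a second tracked method later is a one-override change.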
Result makes error handling explicit in the return type — callers can't accidentally ignore failure. Exceptions can silently propagate; Result forces the caller to handle both cases.
// Kotlin built-in Result — wraps success or failure
suspend fun fetchUser(id: String): Result<User> = runCatching {
    api.getUser(id) // any exception becomes Result.failure()
    // note: runCatching also swallows CancellationException — rethrow it in production coroutine code
}

// Caller MUST handle both cases
fetchUser("123")
    .onSuccess { user -> _state.value = UiState.Success(user) }
    .onFailure { e -> _state.value = UiState.Error(e.message ?: "Unknown error") }

// Custom sealed Result — richer error types
sealed class NetworkResult<out T> {
    data class Success<T>(val data: T) : NetworkResult<T>()
    data class HttpError(val code: Int, val msg: String) : NetworkResult<Nothing>()
    data class NetworkError(val cause: Throwable) : NetworkResult<Nothing>()
}

// API layer converts exceptions → typed errors
suspend fun <T> safeCall(call: suspend () -> T): NetworkResult<T> = try {
    NetworkResult.Success(call())
} catch (e: HttpException) {
    NetworkResult.HttpError(e.code(), e.message())
} catch (e: IOException) {
    NetworkResult.NetworkError(e)
}

// Exhaustive when — compiler forces handling ALL cases
when (val result = repo.getUser(id)) {
    is NetworkResult.Success -> showUser(result.data)
    is NetworkResult.HttpError -> showError("Server ${result.code}")
    is NetworkResult.NetworkError -> showError("No internet")
}
- Result: success/failure in return type — compiler prevents ignoring errors
- runCatching: wraps any suspend call in Result.success/failure — one-liner safety
- Sealed NetworkResult: specific error types (HttpError vs NetworkError) — richer UI messages
- Exhaustive when: Kotlin forces handling all sealed subclasses — no accidental missing branch
- Use Result for expected failures (network, validation); exceptions for programming errors
Key argument: with exceptions, getUser() can be called without a try-catch — the compiler won't complain but production will crash. With Result<User>, the caller must unwrap — error handling is enforced at compile time. This is the shift from "hope it works" to "prove it works."
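The "caller must unwrap" guarantee can be shown in pure stdlib Kotlin — a minimal sketch with a hypothetical `parseAge` function, not part of the article's API:

```kotlin
// runCatching turns any thrown exception into Result.failure
fun parseAge(raw: String): Result<Int> = runCatching {
    val n = raw.trim().toInt()                    // NumberFormatException → failure
    require(n in 0..150) { "age out of range: $n" } // IllegalArgumentException → failure
    n
}

// fold forces the caller to supply BOTH branches — no silent crash path
fun describe(raw: String): String = parseAge(raw).fold(
    onSuccess = { "age=$it" },
    onFailure = { "invalid: ${it.message}" }
)
```

`fold` (or `onSuccess`/`onFailure`) is where the compile-time enforcement lives: there is no way to get at the `Int` without going through the failure branch.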
Feature modules can't reference each other — all shared data must flow through a :core module. Three patterns work: shared interface module, app-level ViewModel, or a typed event bus.
// Pattern 1: :core:session — shared interface module
// Both :feature:auth and :feature:cart import :core:session

// :core:session/SessionManager.kt
interface SessionManager {
    val currentUser: StateFlow<User?>
    suspend fun getToken(): String?
    fun isLoggedIn(): Boolean
}

// :feature:auth implements it
class SessionManagerImpl @Inject constructor(
    private val prefs: SecurePreferences
) : SessionManager {
    private val _user = MutableStateFlow<User?>(null)
    override val currentUser = _user.asStateFlow()
    override suspend fun getToken() = prefs.getToken()
    override fun isLoggedIn() = _user.value != null
}

// :app binds it via Hilt
@Binds @Singleton
abstract fun bindSession(impl: SessionManagerImpl): SessionManager

// :feature:cart uses it — no dependency on :feature:auth
class CartViewModel @Inject constructor(
    private val session: SessionManager // from :core:session
) : ViewModel() {
    val user = session.currentUser.stateIn(viewModelScope, SharingStarted.Eagerly, null)
}

// Pattern 2: SharedFlow event bus in :core:events
sealed class AppEvent {
    data class UserLoggedIn(val user: User) : AppEvent()
    object UserLoggedOut : AppEvent()
}

class AppEventBus @Inject constructor() {
    private val _events = MutableSharedFlow<AppEvent>()
    val events = _events.asSharedFlow()
    suspend fun emit(event: AppEvent) { _events.emit(event) }
}
- :core:session interface: both features import the interface — not each other
- :app owns the binding: wires SessionManagerImpl to SessionManager — only :app knows both
- SharedFlow event bus: typed events in :core:events — publish/subscribe without coupling
- Dependency direction: feature → core → never feature → feature
- App-level ViewModel: scoped to root NavGraph — accessible from all feature screens
Draw the dependency graph: :feature:auth → :core:session. :feature:cart → :core:session. :app → :feature:auth + :feature:cart + binds SessionManagerImpl. Never :feature:auth → :feature:cart. The rule: arrows point toward :core, never between features.
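The dependency rule can be compressed into a pure-Kotlin sketch — illustrative names (`Session`, `CartGate`), with comments marking which module each piece would live in:

```kotlin
// lives in :core:session — the only thing both features see
interface Session { fun isLoggedIn(): Boolean }

// lives in :feature:auth — implementation detail
class SessionImpl(private var token: String?) : Session {
    override fun isLoggedIn() = token != null
}

// lives in :feature:cart — depends only on the :core interface,
// never on :feature:auth or SessionImpl
class CartGate(private val session: Session) {
    fun canCheckout() = session.isLoggedIn()
}
```

Only the "app" layer (which sees both) would construct `CartGate(SessionImpl(...))` — the same role `@Binds` in :app plays with Hilt.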
Convention plugins extract repeated Gradle configuration into reusable plugins — DRY principle for build scripts. Instead of copying 50 lines to 10 modules, you apply one plugin per module.
// Without convention plugins — copied to every feature module:
// :feature:home, :feature:profile, :feature:cart — all identical!
plugins {
    id("com.android.library")
    id("org.jetbrains.kotlin.android")
    id("com.google.dagger.hilt.android")
    id("com.google.devtools.ksp")
}
android { compileSdk = 35; defaultConfig { minSdk = 24 }; ... }
dependencies { implementation(libs.hilt.android); ksp(libs.hilt.compiler); ... }
// 50 lines × 10 modules = 500 lines of duplication

// WITH convention plugins:
// build-logic/convention/src/main/kotlin/AndroidFeaturePlugin.kt
class AndroidFeaturePlugin : Plugin<Project> {
    override fun apply(target: Project) = with(target) {
        pluginManager.apply("com.android.library")
        pluginManager.apply("org.jetbrains.kotlin.android")
        pluginManager.apply("dagger.hilt.android.plugin")
        extensions.configure<LibraryExtension> {
            compileSdk = 35
            defaultConfig { minSdk = 24 }
            compileOptions {
                sourceCompatibility = JavaVersion.VERSION_17
                targetCompatibility = JavaVersion.VERSION_17
            }
        }
        val libs = extensions.getByType<VersionCatalogsExtension>().named("libs")
        dependencies {
            add("implementation", libs.findLibrary("hilt-android").get())
            add("ksp", libs.findLibrary("hilt-compiler").get())
        }
    }
}

// build-logic/convention/build.gradle.kts — register it
gradlePlugin {
    plugins {
        register("androidFeature") {
            id = "myapp.android.feature"
            implementationClass = "AndroidFeaturePlugin"
        }
    }
}

// :feature:home/build.gradle.kts — now just 2 lines!
plugins {
    id("myapp.android.feature") // applies all 50 lines above
    id("myapp.android.compose") // adds Compose config
}
// 10 modules × 2 lines = 20 lines total (vs 500 before)
- Convention plugins: reusable Gradle plugins extracted to :build-logic composite build
- DRY for builds: 50-line config → one plugin line per module
- Single update point: raise compileSdk in one plugin → all 10 modules updated instantly
- Multiple plugins: myapp.android.feature, myapp.android.compose, myapp.android.testing
- Used in Now in Android: Google's official reference project uses this pattern
Mention Google's "Now in Android" — it's the canonical example of convention plugins. The impact: changing minSdk from 24 to 26 goes from editing 10 build files to changing one number in one plugin. Huge maintenance win for multi-module projects with 6+ modules.
ViewModel survives configuration changes (rotation) but NOT process death. SavedStateHandle persists across process death using Bundle — the OS kills and restores it automatically.
// ViewModel survival table:
// Screen rotation          → ViewModel SURVIVES (ViewModelStore)
// App backgrounded (hours) → Process KILLED by OS — ViewModel LOST
// "Don't keep activities"  → Process KILLED — ViewModel LOST
// Back pressed             → ViewModel LOST (intentional)

// SavedStateHandle — survives process death via Bundle
@HiltViewModel
class SearchViewModel @Inject constructor(
    private val saved: SavedStateHandle, // auto-injected by Hilt
    private val repo: SearchRepository
) : ViewModel() {
    // getStateFlow — returns StateFlow backed by SavedState
    val query: StateFlow<String> = saved.getStateFlow("query", "")

    fun onQueryChange(q: String) {
        saved["query"] = q // automatically serialised to Bundle
    }

    // Navigation args — type-safe via toRoute() (Navigation 2.8+)
    val productId: String = saved.toRoute<ProductRoute>().productId
}

// What to save in SavedStateHandle:
// ✅ User-typed text (search query, form inputs)
// ✅ Navigation arguments (route params)
// ✅ Selected tab, scroll position, filter state
// What NOT to save:
// ❌ Large datasets — Bundle max ~500KB
// ❌ Network data — re-fetch after restore
// ❌ Room data — Room restores from DB automatically

// Test process death: Developer Options → Don't keep activities
// Or: adb shell am kill com.your.package
- ViewModel: survives rotation via ViewModelStore — NOT process death
- SavedStateHandle: survives process death — serialized to Bundle by the OS
- getStateFlow(): returns StateFlow backed by SavedState — reactive and persistent
- Navigation args: saved.toRoute<Route>() — type-safe access to route params
- Bundle limit: ~500KB — never save large collections or images
Test process death with "Don't keep activities" in Developer Options — it aggressively kills the process when you background the app. If your search query disappears when you come back, you need SavedStateHandle. This is one of the most common production bugs: developers attribute it to rotation handling, but the real cause is process death.
Feature flags decouple deployment from release. Architecturally they belong at the navigation or use case boundary — never scattered across individual composables.
// :core:flags — feature flag abstraction
interface FeatureFlags {
    val isNewCheckoutEnabled: Boolean
    val isAiRecommendations: Boolean
    suspend fun refresh()
}

// Firebase Remote Config implementation
class RemoteFeatureFlags @Inject constructor(
    private val remote: FirebaseRemoteConfig
) : FeatureFlags {
    override val isNewCheckoutEnabled get() = remote.getBoolean("new_checkout")
    override val isAiRecommendations get() = remote.getBoolean("ai_recos")
    override suspend fun refresh() { remote.fetchAndActivate().await() }
}

// Navigation-level gate — entire route switched
@Composable
fun AppNavHost(navController: NavHostController, flags: FeatureFlags) {
    NavHost(navController, HomeRoute) {
        homeGraph(navController)
        if (flags.isNewCheckoutEnabled) newCheckoutGraph(navController)
        else legacyCheckoutGraph(navController)
    }
}
// ONE place decides which checkout — no flag checks in screens

// Use case level — algorithm switching
class GetRecommendationsUseCase @Inject constructor(
    private val flags: FeatureFlags,
    private val aiRepo: AiRecommendationRepo,
    private val ruleRepo: RuleBasedRepo
) {
    suspend operator fun invoke(userId: String) =
        if (flags.isAiRecommendations) aiRepo.get(userId) else ruleRepo.get(userId)
}

// Test stub — full control per test
class TestFeatureFlags(
    override val isNewCheckoutEnabled: Boolean = false,
    override val isAiRecommendations: Boolean = false
) : FeatureFlags {
    override suspend fun refresh() {}
}
- Interface abstraction: FeatureFlags interface — Firebase in prod, TestFeatureFlags in tests
- Navigation-level gate: route-level flag check — entire feature switched in ONE place
- Use case branching: flag-driven algorithm selection — no UI coupling
- Never scatter flags in composables: if (flags.X) scattered across 20 files = maintenance hell
- Test stub: TestFeatureFlags with default false — enable per test with named params
The key architectural rule: flags belong at the BOUNDARY between layers, not inside them. Navigation-level = gate entire features. Use case level = switch algorithms. Never inside individual composables. This keeps flag logic centralized and testable.
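The use-case-level branching plus test stub can be exercised as plain JVM Kotlin — a minimal sketch with illustrative names, using lambdas in place of the two repositories:

```kotlin
interface FeatureFlags { val isAiRecommendations: Boolean }

// Test stub — each test picks its own flag values
class TestFlags(override val isAiRecommendations: Boolean = false) : FeatureFlags

// Flag decides the algorithm at the use case boundary — UI never checks it
class GetRecommendations(
    private val flags: FeatureFlags,
    private val ai: () -> List<String>,    // stands in for AiRecommendationRepo
    private val rules: () -> List<String>  // stands in for RuleBasedRepo
) {
    operator fun invoke(): List<String> =
        if (flags.isAiRecommendations) ai() else rules()
}
```

Because the flag is a constructor dependency, both branches are trivially reachable in tests — no Remote Config fetch, no Android runtime.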
api() exposes a dependency transitively to consumers. implementation() keeps it private. Incorrect use of api() causes unnecessary recompilation cascades — the biggest hidden build-time killer in multi-module apps.
// implementation() — private, not exposed to consumers
// :core:network/build.gradle.kts
dependencies {
    implementation("com.squareup.retrofit2:retrofit:2.11.0")
    // Retrofit NOT visible to modules that depend on :core:network
    // :feature:home cannot use Retrofit directly
}

// api() — exposed transitively to all consumers
dependencies {
    api(project(":core:common"))
    // Modules depending on THIS also see :core:common's public API
}

// Build cascade impact (10 modules depending on :core:network):
// Scenario: Retrofit version bump
// With api():
//   :core:network + :feature:home + :feature:profile + :feature:cart
//   + :feature:checkout + :feature:orders + :app = ALL 7 recompile
// With implementation():
//   Only :core:network recompiles — public API unchanged
//   All consumer modules: SKIP (Gradle cache hit)

// Rule: default to implementation()
// Only upgrade to api() when your module's PUBLIC functions
// return types FROM that dependency
interface UserDao {
    fun observeUser(): Flow<UserEntity> // returns UserEntity from Room
}
// If UserEntity is in :core:database, and UserDao is public API → api() justified
// But if you map to domain entity in the module → implementation() is fine
- implementation(): private — only this module recompiles when dependency changes
- api(): public — all consumer modules also recompile when dep changes
- Default: always start with implementation() — upgrade to api() only when necessary
- api() justified: when your module's public functions return types from the dependency
- Practical impact: implementation() vs api() can be the difference between 10s and 90s incremental build
Concrete example: "If :core:network uses api(retrofit) and Retrofit releases a patch, all 10 feature modules recompile even though their code didn't change. With implementation(), only :core:network recompiles. That's 9 unnecessary module compilations eliminated."
Parallel team development requires contract-first design — define the API (interface) before implementation. The main team uses a stub; the payment team delivers the real implementation.
// Step 1: Define contract in :feature:payment:api (public, stable)
// Both :app and other features can import this safely
interface PaymentFeature {
    fun NavGraphBuilder.paymentGraph(navController: NavController)
    suspend fun processPayment(orderId: String, method: PaymentMethod): PaymentResult
}

// Step 2: Checkout team writes a stub (unblocked from day 1)
class StubPaymentFeature : PaymentFeature {
    override fun NavGraphBuilder.paymentGraph(...) {
        composable("payment") { Text("Payment coming soon") }
    }
    override suspend fun processPayment(...) =
        PaymentResult.Success("stub-txn-${System.currentTimeMillis()}")
}

// Step 3: Payment team delivers :feature:payment:impl
class PaymentFeatureImpl @Inject constructor(
    private val paymentRepo: PaymentRepository
) : PaymentFeature {
    override fun NavGraphBuilder.paymentGraph(...) {
        composable("payment/{orderId}") { PaymentScreen() }
    }
    override suspend fun processPayment(...) = paymentRepo.process(...)
}

// Step 4: :app swaps stub → real via Hilt @Binds
@Module @InstallIn(SingletonComponent::class)
abstract class PaymentModule {
    @Binds abstract fun bindPayment(impl: PaymentFeatureImpl): PaymentFeature
    // Change this ONE line when payment team delivers:
    // StubPaymentFeature → PaymentFeatureImpl
}

// Checkout screen — zero changes needed when impl is ready
class CheckoutViewModel @Inject constructor(
    private val payment: PaymentFeature // interface — works with stub OR impl
) : ViewModel()
- Contract-first: define PaymentFeature interface before ANY implementation
- api vs impl modules: :payment:api is public, :payment:impl is the secret — consumers never see impl
- Stub pattern: StubPaymentFeature lets checkout team ship without payment team
- Hilt swap: one @Binds line change — stub to real, zero code changes in consumers
- Dynamic Feature Modules: payment impl downloaded on-demand — smaller initial install
The stub pattern is what makes parallel team development possible. Checkout team writes against PaymentFeature interface from day 1. When payment team delivers, change ONE @Binds line. Zero changes to checkout code. This is how large engineering teams ship independently.
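The stub-swapping guarantee is easy to see in a pure-Kotlin miniature — illustrative names, with the payment contract shrunk to one method:

```kotlin
// The contract both teams code against (":payment:api" in miniature)
interface PaymentFeature { fun processPayment(orderId: String): String }

// Checkout team's day-1 stub
class StubPaymentFeature : PaymentFeature {
    override fun processPayment(orderId: String) = "stub-txn-$orderId"
}

// Consumer names only the interface — it compiles and runs against
// the stub today and the real impl later, with zero changes here
class CheckoutFlow(private val payment: PaymentFeature) {
    fun placeOrder(orderId: String) = payment.processPayment(orderId)
}
```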
Now in Android (NiA) is Google's official open-source reference app. It demonstrates the architecture patterns Google recommends for production apps in 2024-25: multi-module with convention plugins, Hilt, Kotlin Serialization, Kotlin Flows, Compose, and offline-first with Room. Reading the NiA source is the fastest way to see how all these pieces fit together.
// NiA module structure -- follow this pattern for new projects
// :app, :core:data, :core:database, :core:network, :core:ui, :core:model
// :feature:foryou, :feature:bookmarks, :feature:topic

// Convention plugin -- shared build config (NiA pattern)
class AndroidFeatureConventionPlugin : Plugin<Project> {
    override fun apply(target: Project) = with(target) {
        pluginManager.apply("com.android.library")
        pluginManager.apply("org.jetbrains.kotlin.android")
        extensions.configure<LibraryExtension> {
            compileSdk = 35
            defaultConfig.minSdk = 24
        }
    }
}

// Offline-first repository pattern from NiA
class OfflineFirstNewsRepository @Inject constructor(
    private val dao: NewsDao,
    private val api: NewsApi
) : NewsRepository {
    override fun getNews() = dao.getAll() // always reads from Room
    override suspend fun sync() { dao.upsertAll(api.fetch()) }
}
- NiA demonstrates: multi-module with convention plugins, Hilt DI, offline-first Room, Kotlin Serialization, Compose-only UI
- Convention plugins: shared build config in build-logic/ -- each feature module's build.gradle.kts is ~5 lines
- Offline-first: Room is the source of truth, network syncs to Room, UI observes Room via Flow
- UI state: sealed UiState (Loading/Success/Error) exposed as StateFlow from ViewModel
- NiA source: github.com/android/nowinandroid -- read it before your next architecture interview
"I've studied the Now in Android project" signals seniority immediately. Even better: "We adopted their convention plugin approach — it reduced our build config duplication by 80% across 8 modules." Applying the pattern, not just knowing it, shows real experience.
Migrate from single-module to multi-module when one of three signals appears: incremental builds take over 2 minutes, two teams are stepping on each other's code, or you need a feature that only makes sense as a separate module (like an on-demand dynamic feature). Don't migrate because it feels architectural -- migrate because a real pain point justifies the cost.
// Phase 1: extract :core:network and :core:database (lowest risk, high build payoff)
// Phase 2: extract :core:ui (shared Compose components)
// Phase 3: extract :feature:X one screen at a time

// Strangler Fig pattern -- migrate incrementally, never freeze the codebase
include(":app") // start: everything here
// Week 2: include(":app", ":core:network") // extract networking first
// Week 4: include(":app", ":core:network", ":core:database")

// Convention plugins -- add before extracting feature modules
// Otherwise every new module needs 50 lines of duplicated build config
- Migrate when: build time > 2 min, OR two teams conflict in the same code, OR you need Dynamic Feature Modules
- Start with :core:network and :core:database -- they have clear boundaries, no UI dependencies, immediate build speedup
- Use the Strangler Fig pattern: extract one module at a time, keep the app shipping throughout the migration
- Add convention plugins before feature modules -- without them each new module requires 50 lines of duplicated Gradle config
- Never freeze the codebase for a big-bang migration -- incremental extraction keeps the team productive
"Enable Gradle caching first — that's free 30-50% improvement in 5 minutes. Modularization is the expensive, weeks-long investment. Show you measure and take the quick wins before committing to the big architectural change." This pragmatism impresses senior interviewers.
DIP is a design principle (the D in SOLID). DI is a technique to implement it. DIP says WHAT to do (depend on abstractions). DI says HOW to do it (provide concrete implementations from outside).
// Dependency Inversion Principle (DIP)
// "High-level modules should not depend on low-level modules.
//  Both should depend on abstractions."

// ❌ Violates DIP — ViewModel depends on concrete implementations
class UserViewModel {
    private val repo = UserRepositoryImpl(RetrofitApi(), RoomDao())
    // High-level (VM) depends on low-level (Retrofit, Room) — wrong!
}

// ✅ Follows DIP — depends on abstraction
class UserViewModel(private val repo: UserRepository) {
    // High-level (VM) depends on interface — Retrofit/Room details hidden
}

// Dependency Injection — the TECHNIQUE that makes DIP work
// Method 1: Manual DI (no framework)
val api = RetrofitInstance.userApi
val repo: UserRepository = UserRepositoryImpl(api) // concrete provided
val vm = UserViewModel(repo) // VM gets interface

// Method 2: Hilt (automated DI)
@HiltViewModel
class UserViewModel @Inject constructor(
    private val repo: UserRepository // Hilt resolves → UserRepositoryImpl
) : ViewModel()

// Testing payoff — BOTH principles together:
val vm = UserViewModel(FakeUserRepository())
// Possible because of DIP (interface, not concrete class)
// Easy because of DI (constructor injection, not internal creation)

// Without DIP: DI is impossible (nothing to swap)
// Without DI: DIP works in design but wiring is manual and error-prone
- DIP: design principle — high-level modules depend on interfaces, not concrete classes
- DI: technique — provide the concrete implementation from outside the class
- Without DIP: ViewModel creates its own dependencies — DI impossible, untestable
- Without DI: DIP holds in design, but object graph wired manually — error-prone at scale
- Together: DIP defines architecture, DI provides automation (Hilt) — testability unlocked
The power socket analogy: DIP says "use a socket (interface) not a hardwired connection." DI says "the electrician (Hilt) plugs your appliances in." You need both: without DIP, the electrician can't help. Without DI, you wire everything manually — same principle, more work.
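The testing payoff runs as plain JVM Kotlin — a minimal sketch with illustrative names (`GreetingPresenter`, `FakeUserRepository`):

```kotlin
// DIP: the high-level class names only the abstraction
interface UserRepository { fun getUserName(id: String): String }

class GreetingPresenter(private val repo: UserRepository) {
    fun greeting(id: String) = "Hello, ${repo.getUserName(id)}!"
}

// DI: the concrete implementation is supplied from outside —
// in a test, that's a fake; in production, Hilt would wire the real one
class FakeUserRepository : UserRepository {
    override fun getUserName(id: String) = "Asha"
}
```

No Retrofit, no Room, no Android runtime — the test constructs the object graph by hand, which is exactly what constructor injection makes possible.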
One-time events (navigation, snackbars) must not be stored in StateFlow — it deduplicates identical values and replays the latest value after rotation, so events get dropped or re-delivered. The standard patterns are Channel-exposed-as-Flow or MVI side effects.
// Problem with StateFlow for events:
// StateFlow deduplicates — addToCart("item1") twice → only ONE snackbar
// StateFlow replays — after rotation, old navigation event fires again!

// Solution 1: Channel (MVVM — recommended)
@HiltViewModel
class CheckoutViewModel @Inject constructor() : ViewModel() {
    private val _events = Channel<CheckoutEvent>(Channel.BUFFERED)
    val events = _events.receiveAsFlow() // expose as Flow

    fun onOrderPlaced(orderId: String) {
        viewModelScope.launch {
            _events.send(CheckoutEvent.NavigateToSuccess(orderId))
        }
    }
}

sealed class CheckoutEvent {
    data class NavigateToSuccess(val orderId: String) : CheckoutEvent()
    data class ShowError(val msg: String) : CheckoutEvent()
}

// Collect in Composable — LaunchedEffect for one-time collection
LaunchedEffect(Unit) {
    vm.events.collect { event ->
        when (event) {
            is CheckoutEvent.NavigateToSuccess ->
                navController.navigate(SuccessRoute(event.orderId))
            is CheckoutEvent.ShowError -> snackbar.showSnackbar(event.msg)
        }
    }
}

// Solution 2: MVI side effects — same Channel pattern, named differently
sealed class OrderEffect { // Effect = one-time side effect
    data class Navigate(val route: Any) : OrderEffect()
    data class ShowSnackbar(val msg: String) : OrderEffect()
    object ShowConfirmDialog : OrderEffect()
}
- StateFlow: wrong for events — deduplicates same value, replays on rotation
- Channel (BUFFERED): one-time delivery — each event consumed exactly once
- receiveAsFlow(): exposes Channel as Flow — reactive collection in composable
- LaunchedEffect(Unit): collects events for composable lifetime — cancelled on leave
- MVI naming: State = current UI state; Effect = one-time side effects; Intent = user action
Why Channel over SharedFlow(replay=0) for navigation? If the composable is not yet collected (e.g. initialising), SharedFlow drops the event. Channel BUFFERS it — the navigation fires when the composable starts collecting. Channel.BUFFERED is the safe default for one-time events.
Product flavors combined with Hilt modules let you swap entire feature implementations per flavor — no if/else checks scattered across code, clean separation at the DI layer.
// build.gradle.kts — define flavors
android {
    flavorDimensions += "tier"
    productFlavors {
        create("free") { dimension = "tier" }
        create("pro") { dimension = "tier" }
        create("enterprise") { dimension = "tier" }
    }
}

// Core interface — same across all flavors
interface AnalyticsService { fun track(event: String) }
interface ExportService { suspend fun exportToCsv(): Uri }

// src/free/java — free flavor implementation
class NoOpAnalytics : AnalyticsService {
    override fun track(e: String) {}
}
class LockedExportService : ExportService {
    override suspend fun exportToCsv(): Uri = throw UpgradeRequiredException()
}

// src/pro/java — pro flavor implementation
class FirebaseAnalytics : AnalyticsService {
    override fun track(e: String) { /* firebase */ }
}
class CsvExportService : ExportService {
    override suspend fun exportToCsv() = buildCsv()
}

// Hilt module — same file, different impl per flavor source set
// src/free/java/di/FlavorModule.kt
@Module @InstallIn(SingletonComponent::class)
abstract class FlavorModule {
    @Binds abstract fun bindAnalytics(impl: NoOpAnalytics): AnalyticsService
    @Binds abstract fun bindExport(impl: LockedExportService): ExportService
}

// src/pro/java/di/FlavorModule.kt — SAME class name, different impl
@Module @InstallIn(SingletonComponent::class)
abstract class FlavorModule {
    @Binds abstract fun bindAnalytics(impl: FirebaseAnalytics): AnalyticsService
    @Binds abstract fun bindExport(impl: CsvExportService): ExportService
}

// Zero if/else in business code — Hilt injects the right impl per flavor
- Flavor source sets: src/free, src/pro, src/enterprise — different implementations per flavor
- Same interface: AnalyticsService, ExportService — all flavors depend on the same contract
- Same class name, different source set: FlavorModule.kt compiled once per flavor build
- Zero if-else: no BuildConfig.FLAVOR checks scattered in business code
- Hilt injects the right implementation automatically based on the flavor being built
The key insight: same interface name, same Hilt module class name, different source sets. Gradle picks the right source set for the flavor being built. Zero conditional logic in ViewModels or use cases — they just inject ExportService and get the right one for their tier.
API migration with a compatibility period requires abstracting the API version behind the repository interface — ViewModels and use cases are completely unaffected by the backend change.
// Repository interface — unchanged throughout migration
interface UserRepository {
    suspend fun getUser(id: String): User
    fun observeUsers(): Flow<List<User>>
}

// V1 API (current)
interface UserApiV1 {
    @GET("/v1/users/{id}")
    suspend fun getUser(@Path("id") id: String): UserDtoV1
}

// V2 API (new)
interface UserApiV2 {
    @GET("/v2/users/{id}")
    suspend fun getUser(@Path("id") id: String): UserDtoV2
}

// Migration repository — uses feature flag to route calls
class UserRepositoryMigrating @Inject constructor(
    private val v1: UserApiV1,
    private val v2: UserApiV2,
    private val flags: FeatureFlags,
    private val dao: UserDao
) : UserRepository {
    override suspend fun getUser(id: String): User {
        return if (flags.isV2ApiEnabled) {
            v2.getUser(id).toDomain() // v2 mapper
        } else {
            v1.getUser(id).toDomain() // v1 mapper
        }
    }
}

// After 3 months — clean up:
// 1. Remove v1 API interface and DTO
// 2. Replace UserRepositoryMigrating with UserRepositoryV2Impl
// 3. Update @Binds to use new impl
// 4. Remove feature flag
// Zero changes to ViewModel, UseCase, or UI layer

// Why this works:
// Domain layer (UserRepository interface) is stable
// Data layer (impl) absorbs the API change
// Presentation layer unaffected — doesn't know about v1 or v2
- Repository interface as stability point: ViewModel calls getUser() — API version is an implementation detail
- Migration repository: wraps v1 and v2, routes via feature flag — gradual rollout
- Feature flag control: route 10% → 50% → 100% traffic to v2 over time
- Clean-up plan: after 3 months, delete v1 code, swap @Binds — zero UI changes
- DTOs are versioned: UserDtoV1 and UserDtoV2 mapped to the same User domain entity
This is Clean Architecture's core value demonstrated in a real scenario: the domain layer (UserRepository interface) absorbs the business requirement (use users), the data layer absorbs the technical detail (v1 vs v2 API). The presentation layer never knows any of this happened.
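The "two DTOs, one domain entity" mapping is plain Kotlin and worth seeing concretely — a sketch with illustrative field names, not the article's exact DTO shapes:

```kotlin
// Stable domain model — what ViewModels and use cases see
data class User(val id: String, val fullName: String)

// Versioned DTOs — shaped by each backend version
data class UserDtoV1(val id: String, val name: String)
data class UserDtoV2(val id: String, val firstName: String, val lastName: String)

// Mappers absorb the schema difference at the data layer
fun UserDtoV1.toDomain() = User(id, name)
fun UserDtoV2.toDomain() = User(id, "$firstName $lastName")
```

Both mappers converge on the same `User`, which is why the flag-routed repository can switch APIs without the presentation layer noticing.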
A use case encapsulates a single business operation. They're justified for complex logic, multi-repository orchestration, or shared logic across ViewModels. A simple one-liner is overkill.
// ❌ OVERKILL — wraps a single repository call with zero logic
class GetUsersUseCase @Inject constructor(private val repo: UserRepository) {
    suspend operator fun invoke() = repo.getUsers() // zero business logic
}
// Just call repo.getUsers() directly in ViewModel — no use case needed

// ✅ JUSTIFIED — multi-step business operation
class PlaceOrderUseCase @Inject constructor(
    private val cartRepo: CartRepository,
    private val inventoryRepo: InventoryRepository,
    private val paymentRepo: PaymentRepository,
    private val orderRepo: OrderRepository
) {
    suspend operator fun invoke(cartId: String, method: PaymentMethod): Order {
        val cart = cartRepo.getCart(cartId)
        require(cart.items.isNotEmpty()) { "Cart is empty" }
        inventoryRepo.reserve(cart.items)                      // Step 1
        val payment = paymentRepo.charge(cart.total(), method) // Step 2
        val order = orderRepo.create(cart, payment)            // Step 3
        cartRepo.clear(cartId)                                 // Step 4
        return order
    }
}

// ✅ JUSTIFIED — shared across multiple ViewModels
class GetUserUseCase(...) {
    // Used by: ProfileViewModel, CheckoutViewModel, SettingsViewModel
    // Validation + enrichment logic shared — no duplication
}

// Decision checklist:
// ✅ Complex multi-step business process → use case
// ✅ Multiple repositories orchestrated → use case
// ✅ Logic shared by 2+ ViewModels → use case
// ✅ Business rules needing pure-Kotlin tests → use case
// ❌ Single repository.getXxx() call → skip use case
- Use case justified: multi-step process, multi-repository, shared across ViewModels
- Use case overkill: single repo call with no logic — just call the repo directly
- Business rules: validation, requirements (cart not empty) belong in use cases
- operator fun invoke: callable as a function — getUser("id") not getUser.execute("id")
- Pure Kotlin: no Android imports → testable with JVM only, no Robolectric
Being honest about overkill wins senior interviews: "I use use cases only for complex business logic or shared operations. A GetUsersUseCase that just calls repository.getUsers() adds a class with zero value — I call the repository directly from the ViewModel." Pragmatism over dogmatism.
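The "business rules testable in pure Kotlin" claim is demonstrable in a few lines — a minimal sketch with illustrative names and a toy pricing rule:

```kotlin
// Plain Kotlin domain type — no Android imports anywhere
data class Cart(val items: List<String>) {
    fun total() = items.size * 10 // toy pricing for the sketch
}

// A use case carrying one business rule: empty carts are rejected
class ValidateCartUseCase {
    operator fun invoke(cart: Cart): Int {
        require(cart.items.isNotEmpty()) { "Cart is empty" }
        return cart.total()
    }
}
```

The `operator fun invoke` convention means callers write `validateCart(cart)` rather than `validateCart.execute(cart)`, and the rule is testable on the bare JVM in milliseconds.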
Both are domain layer violations — the domain must be pure Kotlin with zero Android or framework dependencies. Frame feedback constructively with the WHY, not just "that's wrong."
// ❌ What the junior wrote in domain layer
// domain/src/main/kotlin/User.kt
import androidx.room.Entity        // ❌ Room in domain!
import androidx.lifecycle.LiveData // ❌ Android in domain!

@Entity(tableName = "users")       // ❌ DB annotation on domain entity
data class User(val id: String, val name: String)

interface UserRepository {
    fun getUser(id: String): LiveData<User> // ❌ LiveData in domain!
}

// Why this is wrong:
// 1. Domain can't be used in pure Kotlin modules (KMP) — has Android dep
// 2. Tests require Android runtime (Robolectric) — not plain JVM
// 3. Domain layer should be testable in milliseconds — now it needs emulator
// 4. @Entity ties schema decisions (Room) into business model
// 5. LiveData lifecycle won't work correctly outside Android

// ✅ What it should be
// domain/src/main/kotlin/User.kt — pure Kotlin, no Android imports
data class User(val id: String, val name: String)

interface UserRepository {
    fun observeUser(id: String): Flow<User> // Flow from kotlinx-coroutines-core
}
// kotlinx-coroutines-core is fine in domain — it's pure Kotlin, not Android

// Domain build.gradle.kts — purity enforced by module type
// plugins { kotlin("jvm") } ← NO android plugin at all
// dependencies { kotlinx-coroutines-core only }
// Any Android import → build fails immediately — enforced!
- @Entity in domain: ties database schema to business model — can't change DB without touching domain
- LiveData in domain: Android dependency — domain can't be KMP-ready, tests need Android runtime
- Flow is OK in domain: kotlinx-coroutines-core is pure Kotlin — multiplatform compatible
- Enforcement: using plugins { kotlin("jvm") } — any Android import causes build failure
- Constructive feedback: explain the WHY (testability, KMP readiness) not just "wrong"
The purity test line: "Can I run this module's unit tests with just the JVM — no Android, no emulator?" With Room @Entity and LiveData: no. With pure Kotlin data class and Flow: yes. Make the purity enforcement automatic by using plugins { kotlin("jvm") } — any Android import is a compile error.
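To make the purity test concrete, here is a minimal sketch of a plain-JVM test against the pure domain above. FakeUserRepository and the test class are illustrative, not from the original:

```kotlin
// Plain JVM test for the pure-Kotlin domain — no Robolectric, no emulator.
// FakeUserRepository is a hypothetical in-memory fake for illustration.
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.first
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.test.runTest
import org.junit.Assert.assertEquals
import org.junit.Test

class FakeUserRepository : UserRepository {
    override fun observeUser(id: String): Flow<User> = flowOf(User(id, "Alice"))
}

class DomainPurityTest {
    @Test
    fun `observeUser emits the fake user`() = runTest {
        val user = FakeUserRepository().observeUser("1").first()
        assertEquals("Alice", user.name)  // runs in milliseconds on the JVM
    }
}
```

Because the module applies only kotlin("jvm"), this test runs with plain JUnit plus kotlinx-coroutines-test on any machine.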
An application module (:app) produces an installable APK or AAB -- it has an applicationId and is the entry point. A library module produces an AAR consumed by other modules -- it has a namespace but no applicationId. A dynamic feature module is a library that can be delivered on-demand via Play, reducing install size.
// Application module -- com.android.application plugin
plugins { alias(libs.plugins.android.application) }
android {
    defaultConfig { applicationId = "com.example.app" }  // only app modules have this
}

// Library module -- com.android.library plugin
plugins { alias(libs.plugins.android.library) }
android {
    namespace = "com.example.core.network"  // for the R class -- no applicationId
}

// Dynamic feature module -- com.android.dynamic-feature plugin
plugins { id("com.android.dynamic-feature") }
android {
    // inherits applicationId from :app -- declared there
}
// app/build.gradle.kts must list it:
// dynamicFeatures += setOf(":feature:ar")
- Application module: produces APK/AAB, has applicationId, is the entry point -- exactly one per app
- Library module: produces AAR, has namespace (for R class) but no applicationId -- consumed via implementation(project(":core:network"))
- Dynamic feature module: delivered on-demand by Play Store -- users download it only when they navigate to that feature
- Dependencies flow toward :app, never away -- :feature modules never import :app
- Only :app declares activities in its manifest for the app entry point -- feature modules declare their own activities if needed
Dynamic Feature Modules are the answer to "how do you reduce APK size for a large app?" AR navigation, HD maps, professional editing tools can be 20-50MB — downloaded only when first used. Google Play's initial download threshold matters — keeping under it increases install rates significantly.
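The on-demand delivery mentioned above is driven from code via the Play Feature Delivery API. A rough sketch, assuming the :feature:ar module from the example (split name "ar"; callbacks trimmed):

```kotlin
// Requesting a dynamic feature module at runtime via Play Feature Delivery.
import android.content.Context
import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
import com.google.android.play.core.splitinstall.SplitInstallRequest

fun installArFeature(context: Context) {
    val manager = SplitInstallManagerFactory.create(context)
    val request = SplitInstallRequest.newBuilder()
        .addModule("ar")  // split name, not the Gradle project path
        .build()
    manager.startInstall(request)
        .addOnSuccessListener { /* feature code is now loadable: navigate to it */ }
        .addOnFailureListener { /* offer retry: the download can fail offline */ }
}
```

In practice you also register a SplitInstallStateUpdatedListener to show download progress for large modules.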
The right answer isn't one or the other — it's using each where it fits. MVVM for simple screens, MVI for complex state machines. Team consistency matters but pragmatism wins over dogmatism.
// MVVM is better when:
// ✅ Simple screen: show list, handle empty, handle error
// ✅ 1-2 concurrent data sources
// ✅ Team is new to Compose — simpler mental model
// ✅ Screen state has 2-3 independent pieces
class ArticleListViewModel : ViewModel() {
    val articles = repo.getArticles().stateIn(...)  // simple
    val isRefreshing = MutableStateFlow(false)
}

// MVI is better when:
// ✅ Complex screen with 5+ state pieces that interact
// ✅ Multiple concurrent events (WebSocket + user actions + timer)
// ✅ Strict predictability required (finance, healthcare)
// ✅ Replay/undo needed — the Intent log enables this
data class TradingState(
    val price: Double = 0.0,
    val quantity: Int = 0,
    val orderType: OrderType = OrderType.LIMIT,
    val isSubmitting: Boolean = false,
    val error: String? = null,
    val confirmRequired: Boolean = false
)

// Recommendation for a team:
// ✅ MVVM as default — simpler, less ceremony
// ✅ MVI for complex feature screens (checkout, live trading, chat)
// ✅ Standardise the EVENT pattern (Channel events for both)
// ✅ Define what "complex" means for YOUR team (5+ state vars = MVI)

// The overhead of MVI for simple screens:
// sealed class UserIntent { object Load : UserIntent() } — 3 lines
// fun dispatch(intent: UserIntent) — vs fun load()
// Unnecessary for a simple list screen
- MVVM default: simpler, less boilerplate — correct for 70% of screens
- MVI for complexity: 5+ interacting state pieces, concurrent event sources, undo/replay
- Team consistency: standardize the Channel event pattern regardless of MVVM or MVI
- Define thresholds: "if screen state has more than 4 pieces, use MVI" — clear team rule
- Don't enforce MVI everywhere: Intent + dispatch() boilerplate for a simple list = over-engineering
The pragmatic senior answer: "MVVM as the default with a clear upgrade path to MVI when state complexity crosses a threshold. We define that threshold as a team — e.g. 5+ state variables or 3+ concurrent event sources. This gives consistency without dogma."
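As a sketch of what crossing that threshold buys: in MVI every mutation flows through a single dispatch function, which is what makes state predictable and replayable. TradingIntent and the reducer below are illustrative, built on the TradingState example above:

```kotlin
// Minimal MVI loop over TradingState: one entry point for all mutations.
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow
import kotlinx.coroutines.flow.update

sealed interface TradingIntent {
    data class SetQuantity(val qty: Int) : TradingIntent
    data object Submit : TradingIntent
}

class TradingViewModel : ViewModel() {
    private val _state = MutableStateFlow(TradingState())
    val state: StateFlow<TradingState> = _state.asStateFlow()

    // Every UI event goes through here: log the intents and you can replay a session
    fun dispatch(intent: TradingIntent) = when (intent) {
        is TradingIntent.SetQuantity -> _state.update { it.copy(quantity = intent.qty) }
        TradingIntent.Submit -> _state.update { it.copy(confirmRequired = true) }
    }
}
```

Compare this with the MVVM ArticleListViewModel above: for a simple list, the intent sealed class and dispatch indirection add ceremony without benefit.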
The case against 'put everything in the ViewModel' is separation of concerns: ViewModels should hold UI state and delegate to use cases -- not contain business logic, network calls, or database queries directly. A ViewModel that does everything becomes untestable and violates single responsibility.
class CheckoutViewModel @Inject constructor(
    private val placeOrderUseCase: PlaceOrderUseCase,  // domain layer
    private val getCartUseCase: GetCartUseCase
) : ViewModel() {

    private val _uiState = MutableStateFlow(CheckoutUiState())
    val uiState: StateFlow<CheckoutUiState> = _uiState.asStateFlow()

    val cart = getCartUseCase().stateIn(viewModelScope, SharingStarted.Eagerly, null)

    fun placeOrder() {
        viewModelScope.launch {
            _uiState.update { it.copy(isLoading = true) }
            placeOrderUseCase()  // business logic lives in the use case, not here
                .onSuccess { _uiState.update { it.copy(orderPlaced = true) } }
                .onFailure { e -> _uiState.update { it.copy(error = e.message) } }
        }
    }
}
- ViewModel responsibility: hold UI state (UiState), handle UI events, delegate work to use cases -- nothing more
- Use cases: single-purpose classes in the domain layer -- PlaceOrderUseCase, GetCartUseCase -- pure Kotlin, no Android
- Testability: a ViewModel with no business logic is easy to test -- mock the use cases, assert the UiState
- A fat ViewModel that calls Room DAOs and Retrofit APIs directly means the ViewModel test must mock the entire data layer
- The rule: if a ViewModel method has more than 5 lines of logic, most of those lines belong in a use case
The best architecture explanation: connect each layer to the problem it solves. "ViewModel exists because Activities crash on rotation." "Repository exists because ViewModels shouldn't know if data came from DB or network." "Multi-module exists because build times were 5 minutes." Architecture is solutions, not theory.
Qualifiers distinguish between multiple bindings of the same type. When Hilt sees two @Provides methods returning the same type, it can't decide which to inject — qualifiers tell it which to use where.
// Problem: two CoroutineDispatchers — which one gets injected?
@Provides fun provideIo(): CoroutineDispatcher = Dispatchers.IO
@Provides fun provideDefault(): CoroutineDispatcher = Dispatchers.Default
// Hilt error: multiple bindings for CoroutineDispatcher!

// Solution 1: @Named — built-in string qualifier (simpler)
@Provides @Named("IO")
fun provideIo(): CoroutineDispatcher = Dispatchers.IO

@Provides @Named("Default")
fun provideDefault(): CoroutineDispatcher = Dispatchers.Default

// Inject with @Named
class Repository @Inject constructor(
    @Named("IO") private val ioDispatcher: CoroutineDispatcher
)
// Downside: the string "IO" — typos are not caught at compile time

// Solution 2: Custom @Qualifier — type-safe (preferred)
@Qualifier @Retention(AnnotationRetention.BINARY)
annotation class IoDispatcher

@Qualifier @Retention(AnnotationRetention.BINARY)
annotation class DefaultDispatcher

@Provides @Singleton @IoDispatcher
fun provideIoDispatcher(): CoroutineDispatcher = Dispatchers.IO

class Repository @Inject constructor(
    @IoDispatcher private val dispatcher: CoroutineDispatcher
)

// In tests — inject a TestDispatcher with the same qualifier
@Provides @IoDispatcher
fun provideTestIo(): CoroutineDispatcher = StandardTestDispatcher()
- @Named: built-in string qualifier — simple but no compile-time typo protection
- Custom @Qualifier: annotation-based — refactor-safe, IDE auto-complete, preferred
- Common use cases: multiple dispatchers, multiple Retrofit instances, debug/release configs
- Injecting dispatchers: @IoDispatcher lets tests inject TestDispatcher — all coroutines become controllable
- Build-time safety: typo in @Named("I0") won't be caught; @IoDispatcher misspelling is a compile error
Injecting CoroutineDispatchers with @Qualifier is a production best practice. @IoDispatcher @Provides fun provideIo() = Dispatchers.IO — in tests, bind TestDispatcher to @IoDispatcher. Now ALL coroutines in ALL classes are controlled by the test scheduler automatically.
In a monorepo single-module setup, all teams work in one codebase with one build output. In a true multi-module monorepo, each module is independently buildable and teams own specific modules. The trade-off is build isolation and team autonomy versus the complexity of inter-module dependency management.
// Single-module monorepo -- one build output, shared codebase
my-app/
├── app/src/main/kotlin/   // everything in one module
└── build.gradle.kts

// Multi-module monorepo -- separate build outputs per module
my-app/
├── app/
├── core/network/
├── core/database/
├── feature/home/          // team A owns this
├── feature/checkout/      // team B owns this
└── build-logic/           // shared convention plugins

// Build avoidance: Gradle only rebuilds modules with changed inputs
// Change in :feature:home → only :feature:home + :app rebuild
// Change in :core:network → all dependent modules rebuild
- Single-module monorepo: simple, one build -- works well for small teams, but one change rebuilds everything
- Multi-module monorepo: each module is independently buildable -- change in :feature:home doesn't rebuild :feature:checkout
- Team ownership: each team owns specific modules -- enforced by module boundaries, no accidental cross-feature coupling
- Build cache benefit multiplies with modules: unchanged modules hit cache even on CI, cutting build time dramatically
- Core modules are the bottleneck: a change in :core:network triggers rebuilds of everything that depends on it -- keep core modules stable
"I'd start with good package-by-feature structure in a single module. When builds exceed 2 minutes and the team hits 4 developers, I'd extract :core:network and :core:database first — they change rarely and give immediate cache benefits. Feature modules come last when team ownership becomes the bottleneck."
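The first extraction step described above amounts to little more than a settings change plus moved dependencies. A sketch of the Gradle side (module paths assumed, matching the structure shown earlier):

```kotlin
// settings.gradle.kts — after extracting the first two core modules
include(":app")
include(":core:network")   // extracted first: changes rarely, high cache-hit rate
include(":core:database")

// :app/build.gradle.kts — dependencies flow toward :app, never the reverse
dependencies {
    implementation(project(":core:network"))
    implementation(project(":core:database"))
}
```

The classes themselves move unchanged; only their package and module membership change, which keeps the extraction low-risk.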
Data privacy requirements should be handled in the data layer — the domain and presentation layers should be completely unaffected. This is Clean Architecture's value in practice.
// Encryption at rest — transparent to all other layers

// Option 1: EncryptedSharedPreferences (for small data)
@Provides @Singleton
fun provideEncryptedPrefs(@ApplicationContext ctx: Context): SharedPreferences =
    EncryptedSharedPreferences.create(
        ctx,
        "secure_prefs",
        MasterKey.Builder(ctx).setKeyScheme(MasterKey.KeyScheme.AES256_GCM).build(),
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )

// Option 2: SQLCipher / Room with encryption
@Provides @Singleton
fun provideEncryptedDb(@ApplicationContext ctx: Context): AppDatabase =
    Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db")
        .openHelperFactory(SupportFactory("passphrase".toByteArray()))
        .build()
// Domain layer: ZERO changes. Data layer: swapped database builder.

// PII deletion — a use case handles it
class DeleteUserDataUseCase @Inject constructor(
    private val userRepo: UserRepository,
    private val ordersRepo: OrderRepository,
    private val prefsRepo: PreferencesRepository,
    private val analyticsRepo: AnalyticsRepository
) {
    suspend operator fun invoke(userId: String): Result<Unit> = runCatching {
        userRepo.anonymize(userId)  // replace PII with "DELETED"
        ordersRepo.removePersonalData(userId)
        prefsRepo.clearUser(userId)
        analyticsRepo.deleteUserData(userId)
        // Atomic — all or nothing
    }
}
// The ViewModel calls DeleteUserDataUseCase — unaware of encryption details
- Encryption at rest: data layer concern — swap database implementation, zero domain changes
- EncryptedSharedPreferences: Android Keystore-backed — transparent to callers
- SQLCipher/Room: encrypted database at file level — same Room API, different factory
- PII deletion use case: orchestrates anonymization across all repositories
- Domain layer unchanged: GetUser() still returns User — encryption is an implementation detail
This demonstrates Clean Architecture's real-world value: a major compliance requirement (GDPR right to erasure) is handled entirely in the data layer. The ViewModel calls DeleteUserDataUseCase — it doesn't know about SQLCipher, EncryptedPreferences, or which tables store PII. Architecture pays its debt.
2024-25 has brought significant tooling improvements that affect architecture — KSP replaces KAPT, type-safe Navigation, and Hilt improvements make the stack cleaner and faster.
// 1. KSP — replaces KAPT (30-50% faster annotation processing)
// ❌ KAPT (old)
// kapt("com.google.dagger:hilt-compiler:2.51")
// kapt("androidx.room:room-compiler:2.6.1")
// ✅ KSP (new)
// ksp("com.google.dagger:hilt-compiler:2.51")
// ksp("androidx.room:room-compiler:2.6.1")
// Both Hilt and Room fully support KSP since 2024

// 2. Navigation Compose 2.8 — type-safe routes
// ❌ Old — string-based, no compile-time safety
// navController.navigate("profile/{userId}")
// ✅ New — Kotlin Serialization, type-safe
@Serializable data class ProfileRoute(val userId: String)

navController.navigate(ProfileRoute(userId = "123"))

fun NavGraphBuilder.profileGraph() {
    composable<ProfileRoute> { backStack ->
        val route = backStack.toRoute<ProfileRoute>()
        ProfileScreen(userId = route.userId)
    }
}

// 3. SavedStateHandle.toRoute() — type-safe nav args in the ViewModel
@HiltViewModel
class ProfileViewModel @Inject constructor(savedState: SavedStateHandle) : ViewModel() {
    val userId = savedState.toRoute<ProfileRoute>().userId  // type-safe!
}

// 4. Compose BOM 2024.09.00 — SharedTransitionLayout
// Shared element transitions native in Compose (API still marked experimental)

// 5. Kotlin 2.0 + K2 compiler
// Better smart casts after suspend calls
// Faster compilation with K2
- KSP over KAPT: 30-50% faster annotation processing — Hilt, Room, Navigation all support it
- Navigation 2.8 type-safe: @Serializable routes, no string-based navigation — compile-time safety
- SavedStateHandle.toRoute(): type-safe access to navigation args in ViewModel
- SharedTransitionLayout: shipped with Compose 1.7 (API still experimental) — shared element transitions native
- Kotlin 2.0 K2: better smart casts, faster compilation — upgrade is worth it
Mentioning specific version numbers signals you track the ecosystem: "We migrated from KAPT to KSP in early 2024 — our annotation processing went from 45 seconds to 20 seconds. Navigation 2.8's type-safe routes eliminated an entire class of runtime crashes from string typos." Applied knowledge beats theoretical knowledge.
A 6-month architecture review for a legacy app is a migration, not a rewrite. The goal is to move to a maintainable, testable architecture incrementally -- without pausing feature delivery. Start with the highest-pain areas, establish patterns in a pilot feature, then roll out across the codebase.
// Month 1-2: Audit and stabilise
// Add Crashlytics if missing, fix the top 3 crashes, add CI if missing
// Map dependencies: which classes are used everywhere? Those are your core modules.

// Month 3-4: Pilot feature -- establish the target architecture
class CheckoutViewModel @Inject constructor(
    private val placeOrder: PlaceOrderUseCase
) : ViewModel() { /* clean architecture pilot */ }

// Month 5-6: Extract :core modules, migrate the top 3 features to the new pattern
// Do not rewrite everything -- use the Strangler Fig pattern
include(":app", ":core:network", ":core:database", ":feature:checkout")
- Month 1-2 -- stabilise before refactoring: fix crashes, add CI, establish code ownership -- you cannot refactor a system that's on fire
- Month 3-4 -- pilot: rewrite one feature end-to-end in the target architecture. This proves the pattern works and becomes the reference for the rest of the team
- Month 5-6 -- extract core modules and migrate: :core:network, :core:database first (highest build impact), then feature modules
- Strangler Fig: new code uses the new architecture, old code stays until naturally replaced -- never freeze delivery for a big-bang rewrite
- Success metric: build time, crash rate, and test coverage before and after -- quantify the improvement to justify the investment
The strangler fig pattern is the key: "Don't rewrite working code. When we add a new feature to Screen X, we refactor Screen X to MVVM as part of that work. In 6 months, the high-traffic screens are all migrated with zero risk from rewriting." This shows realistic senior-level execution planning.
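One way the strangler fig shows up in code is a flag-gated entry point: the rewritten screen sits behind a flag while the legacy path stays untouched until it is naturally replaced. All names below are illustrative:

```kotlin
// Flag-gated routing between legacy and rewritten checkout (hypothetical names).
import android.content.Context
import android.content.Intent

fun openCheckout(context: Context, flags: FeatureFlags) {
    val target = if (flags.isEnabled("checkout_mvvm_rewrite")) {
        CheckoutActivityV2::class.java      // new architecture
    } else {
        LegacyCheckoutActivity::class.java  // untouched until fully strangled
    }
    context.startActivity(Intent(context, target))
}
```

The flag also doubles as a rollback switch: if the rewrite misbehaves in production, flipping it restores the legacy screen without a release.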
25 questions covering DI principles, Hilt internals, scoping, testing with DI, multi-module DI, Koin comparison, and real-world scenarios for 2025-26 Android interviews.
Dependency Injection is a pattern where objects receive their dependencies from outside instead of creating them internally. It solves tight coupling, poor testability, and hidden dependencies.
// WITHOUT DI — class creates its own dependencies
class UserViewModel {
    private val api = Retrofit.Builder()
        .baseUrl("https://api.example.com")
        .build()
        .create(UserApi::class.java)
    private val db = Room.databaseBuilder(..., AppDatabase::class.java, "app.db").build()
    private val repo = UserRepositoryImpl(api, db.userDao())
    // Problems:
    // 1. Cannot test without a real network + database
    // 2. New Retrofit instance per ViewModel — memory waste
    // 3. Change the API URL → edit every class that creates Retrofit
    // 4. Hidden dependencies — hard to see what the class needs
}

// WITH DI — dependencies provided from outside
class UserViewModel @Inject constructor(
    private val repo: UserRepository  // received, not created
) : ViewModel() {
    // Benefits:
    // 1. Test with FakeUserRepository — no network needed
    // 2. One shared Retrofit instance — created by Hilt @Singleton
    // 3. Change the URL → edit one @Provides method in NetworkModule
    // 4. Explicit dependencies — the constructor reveals what's needed
}

// Three types of DI:
// Constructor injection: class Foo @Inject constructor(val bar: Bar)
// Field injection:       @Inject lateinit var bar: Bar  (avoid if possible)
// Method injection:      @Inject fun inject(bar: Bar)  (rare)
- Tight coupling: without DI, classes know HOW to build their deps — brittle and duplicated
- Testability: DI allows swapping real deps for fakes/mocks in tests — core benefit
- Constructor injection: preferred — makes deps explicit, enables compile-time safety
- Single responsibility: class should use dependencies, not create them
- Lifecycle management: DI framework controls object lifetimes and sharing
Lead with testability: "Without DI, UserViewModel creates its own Retrofit — I can't test it without a real network. With DI, I inject FakeUserRepository and tests run in milliseconds offline." This is the answer interviewers want, not a textbook definition.
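A sketch of that testability argument in code: the fake, the manual constructor injection, and the millisecond-fast assertion. UserViewModel's load/state API is assumed here for illustration:

```kotlin
// Unit test with a fake — no Hilt, no network, plain JVM.
import kotlinx.coroutines.test.runTest
import org.junit.Assert.assertEquals
import org.junit.Test

class FakeUserRepository : UserRepository {
    var stored = User("1", "Alice")  // test controls the data
    override suspend fun getUser(id: String): User = stored
}

class UserViewModelTest {
    @Test
    fun `load exposes the user from the repository`() = runTest {
        val vm = UserViewModel(FakeUserRepository())  // constructor injection
        vm.load("1")                                  // assumed API
        assertEquals("Alice", vm.state.value?.name)   // assumed state holder
    }
}
```

No DI framework appears in the test at all; the framework's job is wiring in production, while tests wire by hand.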
Hilt is Google's opinionated DI framework built on top of Dagger 2. It eliminates the massive boilerplate of Dagger while keeping compile-time code generation and type safety.
// Dagger 2 — powerful but verbose
// Must manually define:
@Component(modules = [NetworkModule::class, DatabaseModule::class])
interface AppComponent {
    fun inject(activity: MainActivity)
    fun inject(service: SyncService)
    fun userViewModelFactory(): UserViewModelFactory
    // Must declare every injection point manually
}
// Also need: Subcomponents, Component.Builder, ViewModelFactory...
// Estimate: 50+ lines of boilerplate per app just for components

// Hilt — same power, zero component boilerplate
@HiltAndroidApp class MyApp : Application()                  // Hilt generates the component
@AndroidEntryPoint class MainActivity : AppCompatActivity()  // auto-injects
@HiltViewModel class UserViewModel @Inject constructor(...)

// Hilt generates (at compile time):
// - The app-level Dagger Component
// - Activity/Fragment components with proper scopes
// - A ViewModelFactory for all @HiltViewModel classes
// None of this is written by the developer

// Under the hood: Hilt IS Dagger
// Hilt generates Dagger code from its annotations
// @HiltAndroidApp → generates Hilt_MyApp extends MyApp with a full Dagger component
// All Dagger features remain available (Subcomponents, Multibindings, etc.)

// Why Hilt won:
// ✅ Standard — one way to do DI in Android, not 5 different patterns
// ✅ Integrated — works with ViewModel, WorkManager, Navigation
// ✅ Testing — HiltAndroidRule for instrumented tests, @TestInstallIn for unit tests
- Dagger 2: manual Component definitions, injection declarations, SubComponents — 100s of boilerplate lines
- Hilt: generates Dagger components from @HiltAndroidApp and @AndroidEntryPoint
- Same compile-time safety: Hilt generates Dagger code — errors still caught at build time
- Standard scopes: SingletonComponent, ActivityComponent, ViewModelComponent — predefined
- Integrated testing: @HiltAndroidTest, TestInstallIn — first-class test support
"Hilt IS Dagger under the hood — it generates the boilerplate Dagger needs. You get all of Dagger's compile-time safety with none of the Component/SubComponent ceremony. That's why Google recommends it as the standard."
Hilt's scope annotations control how long a dependency lives. Using too broad a scope causes memory leaks; too narrow causes wasteful re-creation of expensive objects.
// SCOPE HIERARCHY (broadest → narrowest):

// @Singleton — lives for the entire Application lifetime
// (in a @Module @InstallIn(SingletonComponent::class))
@Provides @Singleton
fun provideDatabase(@ApplicationContext ctx: Context): AppDatabase = ...
// Created once. Use for: OkHttp, Retrofit, Room, shared repos

// @ActivityRetainedScoped — survives rotation, dies with the Activity
// Same lifetime as a ViewModel. Use for: per-Activity caches
// (in a @Module @InstallIn(ActivityRetainedComponent::class))
@Provides @ActivityRetainedScoped
fun provideSession(): UserSession = UserSession()

// @ViewModelScoped — lives as long as the ViewModel
// Each ViewModel gets its OWN instance of the dependency
// (in a @Module @InstallIn(ViewModelComponent::class))
@Binds @ViewModelScoped
abstract fun bindRepo(impl: UserRepositoryImpl): UserRepository

// @ActivityScoped — dies on every rotation!
// Use only for non-data Activity-level state (snackbar helpers, etc.)
// @FragmentScoped — dies when the Fragment is destroyed
// @ServiceScoped — lives as long as the Service

// WRONG SCOPE CONSEQUENCES:

// ❌ @Singleton holding an Activity Context → MEMORY LEAK
@Provides @Singleton
fun provideManager(activity: Activity): SomeManager = SomeManager(activity)
// The Singleton outlives the Activity → the Activity can't be GC'd

// ❌ No scope where @Singleton is needed → a new Retrofit per injection
@Provides  // no @Singleton!
fun provideRetrofit(): Retrofit = Retrofit.Builder().build()
// A new Retrofit instance every time it's injected — injected 10 times = 10 instances!
- @Singleton: one instance for app lifetime — Retrofit, Room, shared repositories
- @ActivityRetainedScoped: survives rotation — use for per-Activity singletons like sessions
- @ViewModelScoped: each ViewModel gets its own — correct for stateful per-screen repos
- @ActivityScoped: dies on rotation — avoid for data dependencies
- Wrong scope: too broad = memory leak; no scope = expensive re-creation every injection
The memory leak question is a favourite: "@Singleton holding an Activity Context — what's wrong?" The Singleton lives for the app's lifetime but holds a reference to the Activity, which should be destroyed. GC can't collect it → memory leak. Fix: use @ApplicationContext in @Singleton deps.
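The fix for that leak is a one-line change: depend on the application Context, which legitimately lives as long as the @Singleton itself.

```kotlin
// ✅ Fixed: @ApplicationContext is safe to hold for the app's entire lifetime
@Provides @Singleton
fun provideManager(@ApplicationContext ctx: Context): SomeManager = SomeManager(ctx)
```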
@Provides executes code to build a dependency. @Binds declares which implementation satisfies an interface — zero code, just a mapping. @Binds is more efficient and preferred for interface bindings.
// @Provides — for third-party or complex construction
@Module @InstallIn(SingletonComponent::class)
object NetworkModule {
    @Provides @Singleton
    fun provideOkHttp(): OkHttpClient = OkHttpClient.Builder()
        .addInterceptor(AuthInterceptor())
        .connectTimeout(30, TimeUnit.SECONDS)
        .build()  // needs code to construct — @Provides required

    @Provides @Singleton
    fun provideApi(client: OkHttpClient): UserApi =
        Retrofit.Builder().client(client).build().create(UserApi::class.java)
}

// @Binds — for your own classes with an @Inject constructor
// The module MUST be an abstract class, the method MUST be abstract
// No body — just declares the mapping
@Module @InstallIn(SingletonComponent::class)
abstract class RepositoryModule {
    @Binds @Singleton
    abstract fun bindUserRepo(impl: UserRepositoryImpl): UserRepository

    @Binds
    abstract fun bindAnalytics(impl: FirebaseAnalyticsTracker): AnalyticsTracker
}

// Mixing both in the same module — companion object pattern
@Module @InstallIn(SingletonComponent::class)
abstract class AppModule {
    @Binds @Singleton
    abstract fun bindRepo(impl: UserRepositoryImpl): UserRepository

    companion object {
        @Provides @Singleton  // @Provides inside the companion of an abstract class
        fun provideDb(@ApplicationContext ctx: Context): AppDatabase =
            Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db").build()
    }
}

// Efficiency: @Binds = direct delegation, @Provides = a generated factory class
// @Binds generates LESS code — prefer it whenever possible
- @Provides: runs code — use for Retrofit, OkHttp, Room, or any complex construction
- @Binds: declares mapping — use when impl has @Inject constructor, generates less code
- @Binds requires abstract class + abstract function — no implementation body allowed
- Companion object: mix @Binds and @Provides in one module cleanly
- Rule: if you own the class and can add @Inject constructor, use @Binds. Otherwise @Provides.
The reason @Binds is preferred is efficiency: Dagger generates a simple delegation for @Binds but a full Provider class for @Provides. Less generated code = faster builds and smaller APK. Use @Provides only when you actually need to execute construction code.
Hilt provides two testing strategies: TestInstallIn for replacing modules in unit/instrumented tests, and HiltAndroidTest for full instrumented test flows. Both allow swapping real deps for fakes.
// UNIT TESTS — no Hilt, constructor injection of fakes
class UserViewModelTest {
    private val fakeRepo = FakeUserRepository()
    private val vm = UserViewModel(fakeRepo)  // manual injection
    // No Hilt needed — just pass the fake directly
}

// INSTRUMENTED TESTS — @HiltAndroidTest
// testImplementation("com.google.dagger:hilt-android-testing")

// Replace a production module with a test module
@TestInstallIn(
    components = [SingletonComponent::class],
    replaces = [RepositoryModule::class]  // replaces the production module
)
@Module
abstract class FakeRepositoryModule {
    @Binds @Singleton
    abstract fun bindRepo(fake: FakeUserRepository): UserRepository
}

// Instrumented test class
@HiltAndroidTest
@RunWith(AndroidJUnit4::class)
class UserScreenTest {
    @get:Rule val hiltRule = HiltAndroidRule(this)
    @get:Rule val composeRule = createAndroidComposeRule<HiltTestActivity>()

    @Inject lateinit var fakeRepo: FakeUserRepository  // inject the fake

    @Before fun setUp() { hiltRule.inject() }

    @Test fun showsUserName() {
        fakeRepo.setUser(User("1", "Alice"))
        composeRule.onNodeWithText("Alice").assertIsDisplayed()
    }
}

// Per-test-class module replacement — @UninstallModules
@HiltAndroidTest
@UninstallModules(NetworkModule::class)  // remove the production module for this test
class SpecificTest { /* define a replacement module inside */ }
- Unit tests: no Hilt needed — just construct ViewModel with fake dependencies directly
- @TestInstallIn: replaces a production module with a test module — applies to all tests in the app
- @HiltAndroidTest: enables Hilt injection in instrumented tests — needs HiltAndroidRule
- @UninstallModules: remove a specific production module for one test class
- HiltTestActivity: use instead of real Activity in Compose instrumented tests
The key insight: for unit tests, don't use Hilt at all — just pass fakes to the constructor. Hilt in unit tests adds complexity for no benefit. Only use @HiltAndroidTest for instrumented tests that need the full Android + DI stack. This separation keeps unit tests fast and simple.
When two @Provides methods return the same type, Hilt can't decide which to inject. Custom @Qualifier annotations distinguish between bindings — type-safe unlike @Named strings.
// Problem: two CoroutineDispatchers — Hilt can't differentiate
@Provides fun provideIo(): CoroutineDispatcher = Dispatchers.IO
@Provides fun provideMain(): CoroutineDispatcher = Dispatchers.Main
// Build error: multiple bindings for CoroutineDispatcher

// Solution: custom @Qualifier annotations
@Qualifier @Retention(AnnotationRetention.BINARY)
annotation class IoDispatcher

@Qualifier @Retention(AnnotationRetention.BINARY)
annotation class MainDispatcher

@Qualifier @Retention(AnnotationRetention.BINARY)
annotation class DefaultDispatcher

@Module @InstallIn(SingletonComponent::class)
object DispatcherModule {
    @Provides @Singleton @IoDispatcher
    fun provideIoDispatcher(): CoroutineDispatcher = Dispatchers.IO

    @Provides @Singleton @MainDispatcher
    fun provideMainDispatcher(): CoroutineDispatcher = Dispatchers.Main

    @Provides @Singleton @DefaultDispatcher
    fun provideDefaultDispatcher(): CoroutineDispatcher = Dispatchers.Default
}

// Inject with the qualifier
class UserRepository @Inject constructor(
    @IoDispatcher private val ioDispatcher: CoroutineDispatcher,
    private val api: UserApi
) {
    suspend fun getUser(id: String) = withContext(ioDispatcher) { api.getUser(id) }
}

// Test module — swap the IO dispatcher for a TestDispatcher
@TestInstallIn(
    components = [SingletonComponent::class],
    replaces = [DispatcherModule::class]
)
@Module
object TestDispatcherModule {
    @Provides @IoDispatcher
    fun provideTestIo(): CoroutineDispatcher = StandardTestDispatcher()
    // Note: replacing DispatcherModule removes ALL its bindings — also
    // provide @MainDispatcher and @DefaultDispatcher here if tests inject them
}
- Custom @Qualifier: annotation-based disambiguation — compile-time safe, unlike @Named strings
- AnnotationRetention.BINARY: annotation preserved in bytecode — required for Dagger/Hilt
- Dispatcher injection: makes all coroutines in all classes testable via TestDispatcher
- @Named alternative: built-in string qualifier — simpler but typo-prone ("I0" vs "IO")
- Test swapping: TestInstallIn replaces DispatcherModule — all coroutines become test-controlled
Injecting dispatchers with @Qualifier is a production best practice that enables deterministic coroutine tests. "@IoDispatcher in production = Dispatchers.IO; in tests = StandardTestDispatcher(). Now every suspend function in every class is test-controllable without changing any production code."
Build-variant-specific DI uses Gradle source sets combined with Hilt modules — each variant gets its own module binding the right implementation. Zero if/else in production code.
// Interface in the main source set
interface AnalyticsTracker {
    fun track(event: String, params: Map<String, Any> = emptyMap())
    fun setUserId(id: String)
}

// src/release/java/AnalyticsModule.kt — production
class FirebaseAnalyticsTracker @Inject constructor() : AnalyticsTracker {
    override fun track(event: String, params: Map<String, Any>) {
        Firebase.analytics.logEvent(
            event,
            bundleOf(*params.entries.map { it.key to it.value }.toTypedArray())
        )
    }
    override fun setUserId(id: String) { Firebase.analytics.setUserId(id) }
}

@Module @InstallIn(SingletonComponent::class)
abstract class AnalyticsModule {
    @Binds @Singleton
    abstract fun bindAnalytics(impl: FirebaseAnalyticsTracker): AnalyticsTracker
}

// src/debug/java/AnalyticsModule.kt — SAME class name, different source set!
class LogAnalyticsTracker @Inject constructor() : AnalyticsTracker {
    override fun track(event: String, params: Map<String, Any>) {
        Log.d("Analytics", "Event: $event | $params")  // just logs
    }
    override fun setUserId(id: String) { Log.d("Analytics", "User: $id") }
}

@Module @InstallIn(SingletonComponent::class)
abstract class AnalyticsModule {  // same name, picked by Gradle per build variant
    @Binds @Singleton
    abstract fun bindAnalytics(impl: LogAnalyticsTracker): AnalyticsTracker
}

// ViewModel — zero knowledge of debug vs release
class HomeViewModel @Inject constructor(
    private val analytics: AnalyticsTracker  // Firebase in prod, Log in debug
) : ViewModel()
- Source set DI: same class name in src/debug and src/release — Gradle picks the right one
- Zero production code changes: ViewModel injects AnalyticsTracker — doesn't know the impl
- No BuildConfig.DEBUG checks: Hilt wires the correct impl at build time
- Works for: analytics, crash reporting, network interceptors, feature flags
- Same technique applies to product flavors: free/pro/enterprise each get their own impl
The key: same interface name, same Hilt module class name, DIFFERENT source sets. Gradle's source set resolution picks src/debug or src/release automatically. No if (BuildConfig.DEBUG) checks anywhere in business code — clean, maintainable separation.
Hilt in multi-module requires each module to declare its own @Module classes, and only :app needs @HiltAndroidApp. Cross-module dependencies are wired through standard @InstallIn modules.
// :core:network/build.gradle.kts
dependencies {
    implementation("com.google.dagger:hilt-android:2.51")
    ksp("com.google.dagger:hilt-compiler:2.51")
}

// :core:network — defines NetworkModule (installed in Singleton)
@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {
    @Provides @Singleton
    fun provideRetrofit(): Retrofit = Retrofit.Builder()
        .baseUrl("https://api.example.com/") // Retrofit requires a base URL
        .build()
}

// :feature:home — defines its own module, uses network
@Module
@InstallIn(SingletonComponent::class)
abstract class HomeModule {
    @Binds @Singleton
    abstract fun bindHomeRepo(impl: HomeRepositoryImpl): HomeRepository
}

// :app — only needs @HiltAndroidApp and @AndroidEntryPoint
@HiltAndroidApp
class MyApp : Application()

// Hilt auto-discovers ALL @Module @InstallIn classes across ALL modules
// No manual registration — just having them on the classpath is enough

// ❌ PITFALL 1: @HiltAndroidApp in a library module
//    Only :app should have @HiltAndroidApp
//    Library modules use hilt-android without the full app plugin
// ❌ PITFALL 2: Missing Hilt plugin in a feature module's build.gradle
//    plugins { id("dagger.hilt.android.plugin") } — needed in every module using @Inject
// ❌ PITFALL 3: Using KAPT instead of KSP
//    kapt("com.google.dagger:hilt-compiler") → slow
//    ksp("com.google.dagger:hilt-compiler") → 30-50% faster
// ❌ PITFALL 4: @AndroidEntryPoint on classes it doesn't support
//    Only Activity, Fragment, View, Service, BroadcastReceiver support it
//    (ContentProvider is NOT supported — use @EntryPoint, covered below)
- Module discovery: Hilt auto-discovers @Module classes from all modules on the classpath
- @HiltAndroidApp: only in :app — library modules don't need it
- Feature modules: each declares its own @Module — installed in Singleton scope
- KSP vs KAPT: use KSP in all modules — 30-50% faster annotation processing
- Hilt plugin: must be applied in every module that uses @Inject or @HiltViewModel
The auto-discovery mechanism is key: you don't register modules anywhere — Hilt finds all @Module @InstallIn classes across your entire classpath at compile time. This means :feature:home's HomeModule is automatically included in :app's Dagger component without any explicit registration.
Constructor injection is always preferred — it makes dependencies explicit, enables immutability, and works with null-safety. Field injection is a necessary workaround for classes the framework instantiates.
// Constructor injection — PREFERRED
class UserRepository @Inject constructor( // @Inject on constructor
    private val api: UserApi, // immutable — val, not var
    private val dao: UserDao  // cannot be null — type-safe
)
// Benefits:
// - Dependencies visible from the class signature
// - Class is always fully initialised
// - Easy to test: just pass fakes to the constructor
// - Immutable: val, not var

// Field injection — NECESSARY for framework classes
@AndroidEntryPoint
class MainActivity : AppCompatActivity() {
    @Inject lateinit var analytics: AnalyticsTracker
    // Android creates the Activity via reflection — no constructor control
    // Hilt injects fields after super.onCreate()
}

// ❌ Avoid field injection in non-framework classes
class BadUserRepository {
    @Inject lateinit var api: UserApi // bad! usable before injection
    // Accessing api before injection → UninitializedPropertyAccessException
    // Testing: must manually set fields instead of passing to the constructor
}

// When the framework creates the class (must use field injection):
//   Activity, Fragment, Service, BroadcastReceiver
// When YOU create the class (use constructor injection):
//   Repository, ViewModel, UseCase, Mapper, Validator — everything else

// Method injection — rarely used
class MyClass {
    @Inject fun injectDependencies(api: UserApi) { /* ... */ }
}
- Constructor injection: explicit, immutable, null-safe — always preferred for your own classes
- Field injection: required for Activity, Fragment, Service — Android creates these via reflection
- lateinit var: field injection risk — class usable before injection, potential UninitializedPropertyAccessException
- Testing: constructor injection = pass fakes; field injection = must set fields manually
- @AndroidEntryPoint: enables field injection in Android framework classes
Rule: "If Android creates it, you must use field injection (@AndroidEntryPoint). If you create it, always use constructor injection." Activity, Fragment, Service → field injection. Repository, ViewModel, UseCase → constructor injection. This distinction shows you understand Hilt, not just copy-paste it.
@EntryPoint lets you inject dependencies into classes that Hilt doesn't support directly — like custom View classes, ContentProviders, or third-party framework components.
// Problem: a custom View created by the layout inflater
class UserAvatarView(context: Context, attrs: AttributeSet?) : View(context, attrs) {
    // @AndroidEntryPoint on a View only works inside a Hilt Activity/Fragment
    // Constructor injection is impossible — View constructors are fixed by the framework
}

// Solution: @EntryPoint — define a custom entry point
@EntryPoint
@InstallIn(SingletonComponent::class)
interface ImageLoaderEntryPoint {
    fun imageLoader(): ImageLoader // declare what you need
}

class UserAvatarView(context: Context, attrs: AttributeSet?) : View(context, attrs) {
    private val imageLoader: ImageLoader by lazy {
        EntryPointAccessors
            .fromApplication(context.applicationContext, ImageLoaderEntryPoint::class.java)
            .imageLoader()
    }
}

// Another use case: ContentProvider
class UserContentProvider : ContentProvider() {
    @EntryPoint
    @InstallIn(SingletonComponent::class)
    interface UserProviderEntryPoint {
        fun userRepository(): UserRepository
    }

    private val userRepo: UserRepository by lazy {
        EntryPointAccessors
            .fromApplication(context!!.applicationContext, UserProviderEntryPoint::class.java)
            .userRepository()
    }
}

// When to use @EntryPoint:
// - Custom Views that need Hilt-managed dependencies
// - ContentProvider (created before Application.onCreate)
// - Third-party framework classes you can't annotate
// - Non-Android classes that need Hilt deps at runtime
- @EntryPoint: manual injection point for classes Hilt doesn't support with @AndroidEntryPoint
- EntryPointAccessors.fromApplication(): retrieves the entry point from the app component
- Custom Views: most common @EntryPoint use case in Android
- ContentProvider: created before Application.onCreate() — needs special handling
- Lazy access: always lazy — ensures Component is initialized before first use
@EntryPoint is the "escape hatch" for Hilt — for cases where @AndroidEntryPoint doesn't apply. The most common real-world use: a custom View that needs to load images with an ImageLoader managed by Hilt. Knowing this exists (and when to use it) signals advanced Hilt knowledge.
Koin is a lightweight DI framework using a Kotlin DSL. It's simpler to set up and multiplatform-ready, but catches DI errors at runtime instead of compile time.
// Koin — runtime DSL-based DI
val networkModule = module {
    single { OkHttpClient.Builder().build() }
    single {
        Retrofit.Builder()
            .client(get())
            .baseUrl("https://api.example.com/") // Retrofit requires a base URL
            .build()
    }
    single { get<Retrofit>().create(UserApi::class.java) }
}

val repoModule = module {
    single<UserRepository> { UserRepositoryImpl(get(), get()) }
    viewModel { UserViewModel(get()) }
}

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        startKoin {
            androidContext(this@MyApp)
            modules(networkModule, repoModule)
        }
    }
}

// Comparison:        Hilt            Koin
// Error detection    Compile-time    Runtime (crash on first injection)
// Setup time         Hours           Minutes
// Code generation    Yes (KSP)       No (service locator)
// Multiplatform      Android-only    KMP ready
// Startup cost       None            Small (graph construction)
// Learning curve     Steeper         Gentle

// Choose Hilt when:
// - Android-only project, large team, production app
// - Safety over speed of setup
// - Need fine-grained scope control
// Choose Koin when:
// - Kotlin Multiplatform project (Koin supports iOS, Desktop)
// - Small project / prototype
// - Team prefers simpler, less ceremonial DI
- Koin: runtime service locator — no code generation, DSL-based, easy to set up
- Hilt: compile-time Dagger wrapper — errors at build time, zero runtime overhead
- Koin's key advantage: Kotlin Multiplatform support — Hilt is Android-only
- Koin's main risk: missing binding crashes at runtime — discovered by users, not CI
- 2025 recommendation: Hilt for Android-only, Koin for Kotlin Multiplatform projects
"Hilt fails at build time if I forget a binding. Koin fails at runtime — in front of users. For a production app with a team and CI, build-time safety wins every time. I'd only choose Koin for a KMP project where Hilt's Android dependency doesn't work."
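The runtime-failure mode is easy to demonstrate. A sketch (module contents are illustrative) of a Koin module with a missing binding that still compiles:

```kotlin
// Sketch: this module forgets to declare UserApi — and it COMPILES fine
val incompleteModule = module {
    // get() tries to resolve UserApi from the registry at RUNTIME
    single<UserRepository> { UserRepositoryImpl(get()) }
}

// Nothing fails at build time or even at startKoin { } time.
// The crash happens only when UserRepository is first requested —
// Koin throws a no-definition error (NoDefinitionFoundException) at that moment,
// potentially in front of users. The Hilt equivalent would have been a
// "missing binding" error that fails the build in CI.
```

This is the concrete trade-off behind the quote above: the same mistake is a red CI build with Hilt and a production crash report with Koin.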
Dynamic dependencies based on runtime state shouldn't be injected at construction time — inject a factory or a repository that encapsulates the state logic. DI is for static wiring, not runtime decisions.
// ❌ WRONG — trying to inject an auth-state-dependent dep at construction
@HiltViewModel
class ProfileViewModel @Inject constructor(
    private val repo: UserRepository // which repo? depends on login state!
) : ViewModel()

// ✅ SOLUTION 1: inject a factory that creates the right impl
interface UserRepositoryFactory {
    fun create(isAuthenticated: Boolean): UserRepository
}

class UserRepositoryFactoryImpl @Inject constructor(
    private val authRepo: AuthenticatedUserRepository,
    private val guestRepo: GuestUserRepository
) : UserRepositoryFactory {
    override fun create(isAuthenticated: Boolean) =
        if (isAuthenticated) authRepo else guestRepo
}

@HiltViewModel
class ProfileViewModel @Inject constructor(
    private val session: SessionManager,
    private val repoFactory: UserRepositoryFactory
) : ViewModel() {
    private val repo: UserRepository by lazy {
        repoFactory.create(session.isLoggedIn())
    }
}

// ✅ SOLUTION 2: a single repository that handles both states internally
class SmartUserRepository @Inject constructor(
    private val session: SessionManager,
    private val apiService: UserApiService
) : UserRepository {
    override suspend fun getProfile(): UserProfile =
        if (session.isLoggedIn())
            apiService.getAuthenticatedProfile(session.getToken()!!)
        else
            UserProfile.guest() // default guest profile
}
- DI wires static structure: runtime decisions belong inside classes, not in DI config
- Factory pattern: inject a factory, call it with runtime state to get the right impl
- Smart repository: single impl handles both states internally — cleaner for simple cases
- SessionManager injection: inject the session state observer, not the auth-dependent result
- Avoid: @Provides with if/else based on runtime state — DI graph built at compile time
The key insight: "DI is for wiring static structure, not runtime decisions." The DI graph is built at compile time — you can't have a Hilt binding that says 'give me AuthRepo if logged in, GuestRepo if not.' Instead, inject the SessionManager and make the decision inside the class.
Manual DI is wiring dependencies yourself using a container or factory — no annotation processing or framework. It's appropriate for small projects, libraries, or KMP modules where Hilt's Android dependency is a problem.
// Manual DI — App Container pattern
class AppContainer(private val context: Context) {
    // Build the dependency graph by hand
    private val okHttp: OkHttpClient by lazy { OkHttpClient.Builder().build() }
    private val retrofit: Retrofit by lazy { Retrofit.Builder().client(okHttp).build() }
    private val userApi: UserApi by lazy { retrofit.create(UserApi::class.java) }
    private val db: AppDatabase by lazy {
        Room.databaseBuilder(context, AppDatabase::class.java, "app.db").build()
    }
    val userRepository: UserRepository by lazy { UserRepositoryImpl(userApi, db.userDao()) }
}

class MyApp : Application() {
    val container by lazy { AppContainer(this) }
}

// Access in an Activity
class MainActivity : AppCompatActivity() {
    private val repo by lazy { (application as MyApp).container.userRepository }
    private val vm: UserViewModel by viewModels { UserViewModelFactory(repo) }
}

// When manual DI makes sense:
// ✅ Small app (1-2 devs, < 20 classes) — Hilt setup overhead not worth it
// ✅ Pure Kotlin library module — no Android dependency
// ✅ KMP shared module — Hilt is Android-only
// ✅ Learning DI concepts before frameworks
// ❌ Large production app, team > 3 — Hilt pays back quickly
- Manual DI: build object graph yourself — full control, zero framework overhead
- AppContainer: centralise creation — single place to manage singletons
- lazy: deferred creation — expensive objects (Room, Retrofit) created only when first needed
- When appropriate: small apps, KMP shared modules, pure Kotlin libraries
- Limitation: no compile-time validation, manual scope management, scales poorly
Google's official Android documentation teaches manual DI first, then Hilt. Understanding manual DI shows you understand the CONCEPTS (object graphs, scope, lifetime) not just the annotations. "I understand what Hilt generates under the hood because I've done it manually."
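The `UserViewModelFactory` referenced in the manual-DI example above is never defined there. A minimal sketch of what such a factory typically looks like (the class and its constructor shape are assumptions for illustration):

```kotlin
// Hypothetical factory matching the `viewModels { UserViewModelFactory(repo) }` call —
// a standard ViewModelProvider.Factory that threads the manually built repo in
class UserViewModelFactory(
    private val repo: UserRepository
) : ViewModelProvider.Factory {
    @Suppress("UNCHECKED_CAST")
    override fun <T : ViewModel> create(modelClass: Class<T>): T {
        // Only knows how to build UserViewModel — fail fast for anything else
        require(modelClass.isAssignableFrom(UserViewModel::class.java)) {
            "Unknown ViewModel class: ${modelClass.name}"
        }
        return UserViewModel(repo) as T
    }
}
```

This boilerplate, multiplied across every ViewModel, is precisely what @HiltViewModel generates for you — a useful point to make when comparing manual DI to Hilt.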
Multibindings let multiple modules contribute to a single collection — a Set or Map — without any module knowing about the others. Perfect for plugin architectures and interceptor chains.
// @IntoSet — contribute to a Set<T>
// Use case: OkHttp interceptors contributed by separate modules

// :core:auth — contributes the auth interceptor
@Module
@InstallIn(SingletonComponent::class)
object AuthModule {
    @Provides @IntoSet
    fun provideAuthInterceptor(session: SessionManager): Interceptor =
        AuthInterceptor(session)
}

// :core:logging — contributes the logging interceptor
@Module
@InstallIn(SingletonComponent::class)
object LoggingModule {
    @Provides @IntoSet
    fun provideLoggingInterceptor(): Interceptor = HttpLoggingInterceptor()
}

// :core:network — receives the complete Set
@Provides @Singleton
fun provideOkHttp(interceptors: Set<@JvmSuppressWildcards Interceptor>): OkHttpClient {
    val builder = OkHttpClient.Builder()
    interceptors.forEach { builder.addInterceptor(it) }
    return builder.build()
}
// Adding a new interceptor: add @IntoSet in its module — zero other changes!

// @IntoMap — contribute to a Map<Key, T>
// Use case: a ViewModel factory keyed per route
@Module
@InstallIn(ViewModelComponent::class)
object ViewModelModule {
    @Provides @IntoMap @StringKey("UserViewModel")
    fun provideUserVm(vm: UserViewModel): ViewModel = vm

    @Provides @IntoMap @StringKey("HomeViewModel")
    fun provideHomeVm(vm: HomeViewModel): ViewModel = vm
}
- @IntoSet: multiple modules contribute to a Set without knowing each other
- @IntoMap: contribute to a Map with a key — enables keyed plugin architectures
- @JvmSuppressWildcards: required on injected Set/Map to avoid Kotlin generics issues
- Interceptor chain pattern: each feature module adds its interceptor — network module receives all
- Open/Closed: add new interceptor/plugin without modifying existing code — textbook O in SOLID
Multibindings demonstrate the Open/Closed Principle with DI. "Adding a new OkHttp interceptor means adding @IntoSet in one new module. NetworkModule doesn't change. No other module knows about the new interceptor. This is exactly the plugin architecture that scales to large teams."
Circular dependencies are a build-time error in Hilt/Dagger — the graph literally can't be constructed. The fix is to break the cycle by introducing an abstraction, a lazy reference, or restructuring responsibilities.
// The circular dependency — Hilt/Dagger BUILD ERROR
class AuthRepository @Inject constructor(
    private val userRepo: UserRepository // needs UserRepository
)
class UserRepository @Inject constructor(
    private val authRepo: AuthRepository // needs AuthRepository → CYCLE!
)
// Build fails: "Found a dependency cycle"

// Fix 1: extract the shared dependency — break the cycle with a third class
class TokenStorage @Inject constructor(...) // shared, no deps on A or B
class AuthRepository @Inject constructor(private val storage: TokenStorage)
class UserRepository @Inject constructor(private val storage: TokenStorage)
// Both depend on TokenStorage — no cycle

// Fix 2: lazy injection — defer resolution until first use
class AuthRepository @Inject constructor(
    private val userRepo: dagger.Lazy<UserRepository> // dagger.Lazy breaks the cycle
) {
    fun doSomething() = userRepo.get().someMethod() // resolved at call time
}

// Fix 3: redesign — question whether the dependency is actually needed
// A circular dep often signals an SRP violation:
// "Why does UserRepository need AuthRepository?"
// "Why does AuthRepository need UserRepository?"
// Likely UserRepository should take a token directly, not AuthRepository
class UserRepository @Inject constructor(
    private val tokenProvider: TokenProvider // interface, not AuthRepository
)
- Hilt/Dagger detects cycles at compile time — build fails with "Found a dependency cycle"
- Fix 1: extract shared state — both classes depend on a third shared dependency
- Fix 2: dagger.Lazy<T> — defers instantiation until first use, breaks the cycle
- Fix 3: redesign — circular dep often signals SRP violation or wrong abstraction level
- Root cause: usually one class knows too much — split responsibilities differently
A circular dependency is almost always a design smell, not just a DI problem. "If A needs B and B needs A, what does that tell us? One of them probably needs to be split — extract the shared concept into a third class that both depend on." Fix the design, not just the DI wiring.
Service locator is a global registry where classes PULL their dependencies. DI PUSHES dependencies into classes. Service locator hides dependencies and makes testing harder — it's considered an anti-pattern.
// SERVICE LOCATOR — class pulls dependencies from a registry
object ServiceLocator {
    var userRepository: UserRepository = UserRepositoryImpl()
    var analytics: Analytics = FirebaseAnalytics()
}

class UserViewModel : ViewModel() {
    private val repo = ServiceLocator.userRepository // PULLS from a global
    // Problems:
    // 1. Hidden dependency — the signature doesn't say what the ViewModel needs
    // 2. Testing: must mutate the global ServiceLocator before each test
    // 3. Thread-safety: global mutable state
    // 4. Compile-time: a wrong type → runtime crash, not a build error
}

// DEPENDENCY INJECTION — dependencies pushed in
class UserViewModel @Inject constructor(
    private val repo: UserRepository // PUSHED from outside
) : ViewModel() {
    // Explicit: anyone reading this knows it needs UserRepository
    // Testing: val vm = UserViewModel(FakeUserRepository())
    // Compile-time: Hilt catches missing bindings at build time
}

// Koin — technically a service locator (with a DI-like DSL)
// get() pulls from Koin's registry at runtime
val vm: UserViewModel by viewModel() // resolves from the Koin registry

// Why service locator is called an anti-pattern:
// - Violates the "tell, don't ask" principle
// - Hides coupling — the class silently depends on global state
// - Testing requires global state mutation — fragile, order-dependent
- Service locator: class asks a registry for dependencies — PULL model
- DI: dependencies provided to the class — PUSH model
- Hidden deps: service locator hides what a class needs — DI makes it explicit via constructor
- Testing: service locator requires global state mutation before each test — fragile
- Koin is technically a service locator: but with structured DSL, it's a pragmatic trade-off
The key difference: "DI makes dependencies part of the class interface — you can see what it needs. Service locator hides dependencies inside the class body — you have to read the implementation to know what it depends on." Explicit over implicit is the principle.
Migrating from manual DI to Hilt is an incremental process — never a big-bang rewrite. Convert the dependency graph leaf-first, validate as you go, and replace the AppContainer incrementally.
// Current state — manual AppContainer
class AppContainer(private val context: Context) {
    val okHttp by lazy { OkHttpClient() }
    val retrofit by lazy { Retrofit.Builder().client(okHttp).build() }
    val userApi by lazy { retrofit.create(UserApi::class.java) }
    val userRepo by lazy { UserRepositoryImpl(userApi) }
}

// Migration steps:

// Step 1: add Hilt dependencies + @HiltAndroidApp
@HiltAndroidApp
class MyApp : Application() {
    val container by lazy { AppContainer(this) } // keep during migration
}

// Step 2: convert leaf dependencies first (no deps of their own)
@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {
    @Provides @Singleton
    fun provideOkHttp(): OkHttpClient = OkHttpClient()

    @Provides @Singleton
    fun provideRetrofit(client: OkHttpClient): Retrofit =
        Retrofit.Builder().client(client).build()

    @Provides @Singleton
    fun provideUserApi(retrofit: Retrofit): UserApi =
        retrofit.create(UserApi::class.java)
}

// Step 3: add @Inject constructor to UserRepositoryImpl
class UserRepositoryImpl @Inject constructor(
    private val api: UserApi
) : UserRepository

// Step 4: migrate ViewModels to @HiltViewModel one by one
// Step 5: migrate Activities to @AndroidEntryPoint
// Step 6: remove AppContainer once all deps are in Hilt
// Validate after each step: run all tests, check CI passes
- Incremental: leaf classes first — no deps of their own, safest to convert
- Keep AppContainer during migration: hybrid state is fine, remove when fully migrated
- @Inject constructor: add to your own classes — zero framework lock-in
- @HiltViewModel one by one: validate each ViewModel works before moving to the next
- Validate continuously: run tests after every class migrated — catch issues early
Leaf-first is the key strategy: start with classes that have no dependencies (OkHttpClient, Room builder) — they're easiest to wrap in @Provides. Then work up the dependency chain. Never try to migrate an Activity and its entire dep graph in one PR.
Dagger's Lazy<T> defers dependency creation until first call to get() — the dependency is still injected at construction, but the instance is created lazily. Different from Kotlin's lazy which controls property initialization.
// dagger.Lazy<T> — defer dependency CREATION until first use
class UserViewModel @Inject constructor(
    private val heavyService: dagger.Lazy<HeavyImageProcessingService>
) : ViewModel() {
    fun processImage(uri: Uri) {
        // HeavyImageProcessingService is not created until here
        heavyService.get().process(uri)
    }
    // If processImage() is never called → the service is never instantiated
}

// Kotlin lazy — defers property initialization, not DI injection
class UserViewModel @Inject constructor(
    private val repo: UserRepository // injected at construction
) : ViewModel() {
    private val formattedDate by lazy { // Kotlin lazy — property init
        SimpleDateFormat("dd/MM/yyyy").format(Date())
    }
}

// Key differences:
// dagger.Lazy: controls when the INJECTED OBJECT is created
// kotlin lazy: controls when a PROPERTY VALUE is computed

// Use cases for dagger.Lazy:
// 1. Expensive objects that are only sometimes needed
// 2. Breaking circular dependencies (lazy breaks the cycle)
// 3. Optional features — inject, but only create if a feature flag is enabled

// Provider<T> — related concept, creates a new instance on each get()
class SessionManager @Inject constructor(
    private val requestBuilder: Provider<AuthRequest> // new instance each time
) {
    fun makeRequest() = requestBuilder.get() // fresh AuthRequest every call
}
- dagger.Lazy<T>: defers object CREATION — injected at construction, instance created on .get()
- Provider<T>: creates a NEW instance on every .get() (for unscoped bindings) — for transient/prototype objects
- Kotlin lazy: defers property COMPUTATION — no DI involvement
- Use case: expensive, rarely-needed services; breaking circular deps
- dagger.Lazy is thread-safe: first call creates, subsequent calls return same instance
dagger.Lazy is the answer to "how do you break a circular dependency in Dagger?" Wrap one side in Lazy<T> — Dagger defers the instantiation, allowing the graph to be constructed. The circular reference is resolved at runtime on first .get() call.
WorkManager Workers are created by the system, not by your code — you need a special Hilt worker factory to inject dependencies. This is one of Hilt's built-in integrations.
// Dependencies
// implementation("androidx.hilt:hilt-work:1.2.0")
// ksp("androidx.hilt:hilt-compiler:1.2.0")

// Step 1: use @HiltWorker and @AssistedInject on the Worker
@HiltWorker
class SyncWorker @AssistedInject constructor(
    @Assisted appContext: Context,
    @Assisted workerParams: WorkerParameters,
    private val syncRepository: SyncRepository, // ✅ injected!
    private val userRepo: UserRepository        // ✅ injected!
) : CoroutineWorker(appContext, workerParams) {
    override suspend fun doWork(): Result {
        return try {
            val users = userRepo.getAll()
            syncRepository.sync(users)
            Result.success()
        } catch (e: Exception) {
            Result.retry()
        }
    }
}

// Step 2: register HiltWorkerFactory in the Application
@HiltAndroidApp
class MyApp : Application(), Configuration.Provider {
    @Inject lateinit var workerFactory: HiltWorkerFactory

    override val workManagerConfiguration: Configuration
        get() = Configuration.Builder()
            .setWorkerFactory(workerFactory)
            .build()
}

// Step 3: remove WorkManager's default initializer from the Manifest
// AndroidManifest.xml:
// <provider android:name="androidx.startup.InitializationProvider">
//     <meta-data
//         android:name="androidx.work.WorkManagerInitializer"
//         tools:node="remove" />  <!-- remove default init -->
// </provider>

// Enqueue as normal
WorkManager.getInstance(context)
    .enqueue(OneTimeWorkRequestBuilder<SyncWorker>().build())
- @HiltWorker + @AssistedInject: Worker constructor has both assisted (system) and injected params
- @Assisted: marks params provided by WorkManager — Context and WorkerParameters
- HiltWorkerFactory: must be set as the WorkManager factory — replaces default creation
- Configuration.Provider: custom initialization with HiltWorkerFactory on Application
- Remove default initializer: prevent WorkManager from self-initializing before Hilt is ready
Forgetting to remove the default WorkManager initializer from the Manifest is the most common mistake. WorkManager initializes itself via Jetpack Startup before your Application.onCreate() runs — so HiltWorkerFactory isn't available yet and injection fails at runtime.
Assisted injection mixes Hilt-provided dependencies with runtime parameters — for objects that need both static DI-managed deps AND dynamic values known only at creation time.
// Problem: ProductDetailViewModel needs both a repo (DI) AND a productId (runtime)
// Can't inject productId via Hilt — it's known only when the user taps a product

// Solution: @AssistedInject + @AssistedFactory
// (Hilt 2.49+ requires naming the factory in the @HiltViewModel annotation)
@HiltViewModel(assistedFactory = ProductDetailViewModel.Factory::class)
class ProductDetailViewModel @AssistedInject constructor(
    @Assisted val productId: String,            // runtime — not from DI
    private val productRepo: ProductRepository, // from DI
    private val cartRepo: CartRepository        // from DI
) : ViewModel() {
    @AssistedFactory
    interface Factory {
        fun create(productId: String): ProductDetailViewModel
    }
}

// In the Composable — use the factory
@Composable
fun ProductDetailScreen(
    productId: String,
    viewModel: ProductDetailViewModel = hiltViewModel(
        creationCallback = { factory: ProductDetailViewModel.Factory ->
            factory.create(productId)
        }
    )
) {
    // viewModel.productId == productId
}

// Pre-Hilt-2.49 — manual factory in a Fragment
@AndroidEntryPoint
class ProductDetailFragment : Fragment() {
    @Inject lateinit var factory: ProductDetailViewModel.Factory
    private val productId by navArgs<ProductDetailArgs>()
    private val viewModel by viewModels<ProductDetailViewModel> {
        viewModelFactory {
            initializer { factory.create(productId.id) }
        }
    }
}
- @AssistedInject: mark constructor that mixes DI and runtime params
- @Assisted: mark the runtime parameters — these come from the caller, not Hilt
- @AssistedFactory: interface with a create() method — Hilt generates the implementation
- hiltViewModel(creationCallback): Compose-first API for assisted injection in Hilt 2.49+
- Use cases: ViewModel needing nav args, Worker needing job-specific data, scoped caches
Assisted injection solves the "ViewModel needs productId" problem. Before it existed, you'd pass productId via SavedStateHandle (navigation args). Both work — but @AssistedInject makes the dependency explicit in the constructor, which is cleaner and more testable.
The objection to Hilt is usually about build time overhead from annotation processing. The answer: migrate annotation processors from KAPT to KSP (2x faster), use Hilt's incremental processing, and benchmark before vs after. Hilt's compile-time safety -- catching missing bindings before the app runs -- is worth the setup cost.
// KAPT (slow) → KSP (fast) migration for Hilt
// build.gradle.kts: replace kapt with ksp
plugins {
    alias(libs.plugins.ksp) // add the KSP plugin
}
dependencies {
    implementation(libs.hilt.android)
    ksp(libs.hilt.compiler)            // was: kapt(libs.hilt.compiler)
    kspAndroidTest(libs.hilt.compiler) // processor for instrumented tests
}

// gradle.properties — KSP incremental processing (on by default; shown explicitly)
ksp.incremental=true

// Component hierarchy — compile-time validated
@HiltAndroidApp
class MyApp : Application() // generates SingletonComponent

@AndroidEntryPoint
class MainActivity : ComponentActivity() // ActivityComponent
- Build time concern: migrate KAPT → KSP for Hilt -- the single biggest build improvement, often 30-60s on clean builds
- Compile-time safety: Hilt catches missing bindings at build time -- Koin finds them at runtime as a crash
- ksp.incremental=true: only reprocesses files that changed -- incremental builds are dramatically faster
- The real cost of manual DI: as the graph grows, manual factories become hundreds of lines of boilerplate that Hilt generates automatically
- Testing: @UninstallModules and @BindValue make swapping test fakes trivial -- manual DI requires constructor parameter threading
The right answer acknowledges the senior dev is correct: "The principles are sound — Hilt adds automated enforcement and ecosystem integration on top. The question isn't DI vs no-DI, it's manual-DI vs automated-DI. At 50+ classes with a team, the compile-time safety and scope management automation justify Hilt."
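The @UninstallModules/@BindValue test-swapping mentioned in the bullets above can be sketched like this — a minimal illustration, where `AnalyticsModule`, `AnalyticsTracker`, and `FakeAnalyticsTracker` are assumed names, not code from earlier in this document:

```kotlin
// Sketch of Hilt's test-override APIs (instrumented test)
@HiltAndroidTest
@UninstallModules(AnalyticsModule::class) // remove the production binding for this test
class HomeScreenTest {

    @get:Rule
    val hiltRule = HiltAndroidRule(this)

    // Re-bind AnalyticsTracker to a fake, for this test class only
    @BindValue
    @JvmField
    val analytics: AnalyticsTracker = FakeAnalyticsTracker()

    @Before
    fun setUp() {
        hiltRule.inject() // populate @Inject fields from the test component
    }

    @Test
    fun trackedEventsGoToTheFake() {
        // exercise the screen — events land in FakeAnalyticsTracker, not Firebase
    }
}
```

Contrast with manual DI, where the same swap means threading the fake through every constructor between the test and the class under test.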
Hilt doesn't natively support nullable injection — a missing binding is always a build error. Optional dependencies are handled by providing a default or no-op implementation, or by wrapping the dependency in Java's Optional<T>.
// Hilt/Dagger: optional injection with Optional<T>
@Module
@InstallIn(SingletonComponent::class)
object OptionalModule {
    // Provide presence or absence for the optional feature
    @Provides
    fun provideCrashlytics(): Optional<FirebaseCrashlytics> =
        if (BuildConfig.ENABLE_CRASHLYTICS)
            Optional.of(FirebaseCrashlytics.getInstance())
        else
            Optional.empty()
}

class CrashReporter @Inject constructor(
    private val crashlytics: Optional<FirebaseCrashlytics>
) {
    fun report(e: Throwable) {
        crashlytics.ifPresent { it.recordException(e) }
    }
}

// Cleaner Kotlin approach: provide a no-op implementation
interface CrashTracker {
    fun record(e: Throwable)
}

class NoOpCrashTracker @Inject constructor() : CrashTracker {
    override fun record(e: Throwable) { } // does nothing
}

class FirebaseCrashTracker @Inject constructor() : CrashTracker {
    override fun record(e: Throwable) {
        FirebaseCrashlytics.getInstance().recordException(e)
    }
}

// Bind NoOp in debug, Firebase in release — via source-set modules
// src/debug/java/CrashModule.kt   → @Binds NoOpCrashTracker
// src/release/java/CrashModule.kt → @Binds FirebaseCrashTracker

// Consumers inject CrashTracker — never null, always has an impl
class UserViewModel @Inject constructor(
    private val crash: CrashTracker // always safe to call
)
- Avoid nullable injection: Hilt requires a binding — missing binding is always a build error
- Optional&lt;T&gt;: Java's Optional wrapped in a @Provides method — present or absent
- No-op pattern: preferred — provide a do-nothing implementation instead of null
- Source set modules: different impl per build variant — clean, no null checks
- Null object pattern: NoOpCrashTracker — callers never check for null, design is always valid
The no-op pattern is cleaner than Optional: "Instead of Optional<CrashTracker> with null checks everywhere, provide NoOpCrashTracker in debug and FirebaseCrashTracker in release. Callers always call crash.record(e) — no null checks, no Optional.ifPresent — just works."
DI code review questions test whether you can spot scope mistakes, context leaks, incorrect annotation usage, and structural issues simultaneously.
// ❌ BUGGY CODE — find all 5 issues
@HiltAndroidApp
class MyApp : Application()

@Module
@InstallIn(SingletonComponent::class)
object AppModule {
    @Provides // Bug 1: missing @Singleton!
    fun provideDatabase(context: Context): AppDatabase = // Bug 2: unqualified Context!
        Room.databaseBuilder(context, AppDatabase::class.java, "app.db").build()

    @Binds // Bug 3: @Binds in object (non-abstract)!
    fun bindRepo(impl: UserRepositoryImpl): UserRepository = impl
}

@HiltViewModel
class UserViewModel(private val repo: UserRepository) : ViewModel() // Bug 4: missing @Inject!

class UserActivity : AppCompatActivity() { // Bug 5: missing @AndroidEntryPoint!
    @Inject lateinit var analytics: Analytics
}

// ✅ FIXED VERSION
@Module
@InstallIn(SingletonComponent::class)
abstract class AppModule { // Fix 3: abstract class for @Binds
    @Binds @Singleton
    abstract fun bindRepo(impl: UserRepositoryImpl): UserRepository

    companion object {
        @Provides @Singleton // Fix 1: @Singleton
        fun provideDatabase(
            @ApplicationContext ctx: Context // Fix 2: @ApplicationContext
        ): AppDatabase =
            Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db").build()
    }
}

@HiltViewModel
class UserViewModel @Inject constructor( // Fix 4: @Inject
    private val repo: UserRepository
) : ViewModel()

@AndroidEntryPoint // Fix 5
class UserActivity : AppCompatActivity()
- Bug 1: missing @Singleton on Room — new database instance per injection (multiple databases!)
- Bug 2: bare Context — should be @ApplicationContext to prevent Activity context leaking in Singleton
- Bug 3: @Binds in object — @Binds requires abstract class and abstract function
- Bug 4: @HiltViewModel without @Inject constructor — Hilt can't wire the ViewModel
- Bug 5: field injection without @AndroidEntryPoint — @Inject fields never populated
Spotting all 5: "I see missing @Singleton on database, unqualified Context in Singleton (memory leak risk), @Binds in object instead of abstract class, @HiltViewModel without @Inject constructor, and missing @AndroidEntryPoint for field injection." Systematic enumeration shows real-world Hilt experience.
Hilt provides two qualifier annotations to inject Android Context correctly — preventing the most common memory leak of using Activity context where Application context is appropriate.
// Context types in Android:
// Application Context — lives as long as the app, no UI
// Activity Context — tied to Activity lifecycle, has UI theming

// @ApplicationContext — safe for @Singleton dependencies
@Module
@InstallIn(SingletonComponent::class)
object AppModule {
    @Provides @Singleton
    fun provideDatabase(
        @ApplicationContext context: Context // ✅ app-scoped, no leak
    ): AppDatabase =
        Room.databaseBuilder(context, AppDatabase::class.java, "app.db").build()

    @Provides @Singleton
    fun provideNotificationManager(
        @ApplicationContext context: Context
    ): NotificationManager =
        context.getSystemService(NotificationManager::class.java)!!
}

// @ActivityContext — only valid in ActivityComponent or narrower
@Module
@InstallIn(ActivityComponent::class)
object ActivityModule {
    @Provides @ActivityScoped
    fun provideLayoutInflater(
        @ActivityContext context: Context // ✅ Activity context — for UI inflation
    ): LayoutInflater = LayoutInflater.from(context)
}

// ❌ LEAK RISK: Activity context in @Singleton
@Provides @Singleton
fun provideThemeHelper(context: Context): ThemeHelper = // bare Context — no binding, build error
    ThemeHelper(context) // if this were an Activity context: leak!

// Without qualifier: Hilt build error — no binding for bare Context
// With @ApplicationContext: safe
// With @ActivityContext in SingletonComponent: build error — scope mismatch
- @ApplicationContext: provides Application-scoped context — safe for @Singleton dependencies
- @ActivityContext: provides Activity-scoped context — only valid in ActivityComponent or narrower
- Unqualified Context: Hilt build error — there is no binding for bare Context, so you must qualify it
- Scope mismatch: @ActivityContext in SingletonComponent → build error (activity dies before singleton)
- When to use Activity context: layout inflation, themed dialogs, activity-aware resources
"When in doubt, use @ApplicationContext. You ONLY need @ActivityContext when you specifically need Activity-scoped features: themed resources, layout inflation with activity theme, or Activity-specific system services. Room, Retrofit, SharedPreferences — all take Application context."
DI is a pattern with real costs — annotation processing overhead, boilerplate, and learning curve. Knowing when NOT to use it shows architectural maturity over dogmatic application.
// Cases where DI adds more complexity than value:

// 1. Utility classes / pure functions — no state, no dependencies
object DateFormatter {
    fun format(ts: Long): String = SimpleDateFormat("dd/MM").format(Date(ts))
}
// No DI needed — just call DateFormatter.format(ts). It's a function.

// 2. Data classes / value objects
data class Money(val amount: Double, val currency: String) {
    operator fun plus(other: Money) = Money(amount + other.amount, currency)
}
// Data has no external dependencies — DI would be wrong here

// 3. Simple scripts / one-off tools
// A Gradle task, a migration script, a CLI tool
// DI setup overhead exceeds the benefit for 50-line programs

// 4. Sealed class hierarchies / algebraic types
sealed class UiState&lt;out T&gt; {
    object Loading : UiState&lt;Nothing&gt;()
    data class Success&lt;T&gt;(val data: T) : UiState&lt;T&gt;()
}
// These are type definitions, not services — DI doesn't apply

// 5. Inline functions — compile away entirely
inline fun &lt;reified T&gt; fromJson(json: String): T = Gson().fromJson(json, T::class.java)

// The DI rule — use DI when:
// ✅ Object has state that needs to be shared
// ✅ Object needs to be swapped for a fake in tests
// ✅ Object has expensive construction (DB, network)
// ✅ Object has a lifecycle (scope, creation, destruction)
// ❌ Skip DI: stateless utilities, data classes, type definitions
- Utility functions: stateless pure functions — just call them, DI adds zero value
- Data classes / value objects: represent data, not services — no external dependencies
- Simple scripts: DI overhead exceeds benefit for short-lived, small programs
- Type definitions: sealed classes, enums — these are types, not services
- DI signal: does the class have state that needs sharing, lifecycle management, or test-swapping?
Knowing when NOT to use a pattern demonstrates mastery. "DI is for services — objects with state, lifecycle, and external dependencies. DateFormatter with a static format() method? Just call it. UserRepository with network and database deps that need mocking? DI it. The question is always: do I need to control this object's creation and lifetime?"
@HiltViewModel triggers compile-time code generation — Hilt creates a ViewModelFactory that Dagger wires automatically. Understanding what's generated explains why certain patterns work and others don't.
// What you write:
@HiltViewModel
class UserViewModel @Inject constructor(
    private val repo: UserRepository,
    private val saved: SavedStateHandle
) : ViewModel()

// What Hilt generates (conceptually):
// 1. UserViewModel_HiltModules — @Module that binds the VM
// 2. UserViewModel_Factory — implements ViewModelProvider.Factory
class UserViewModel_Factory(
    private val repoProvider: Provider&lt;UserRepository&gt;,
    private val savedProvider: Provider&lt;SavedStateHandle&gt;
) : ViewModelProvider.Factory {
    override fun &lt;T : ViewModel&gt; create(modelClass: Class&lt;T&gt;): T {
        @Suppress("UNCHECKED_CAST")
        return UserViewModel(repoProvider.get(), savedProvider.get()) as T
    }
}
// 3. Hilt injects this factory into @AndroidEntryPoint components
// 4. hiltViewModel() or by viewModels() picks it up automatically

// Why SavedStateHandle is auto-injected:
// Hilt's ViewModelComponent provides SavedStateHandle from ViewModelStoreOwner
// This is why you DON'T need @Provides for SavedStateHandle
@HiltViewModel
class SearchViewModel @Inject constructor(
    private val saved: SavedStateHandle // just declare it — Hilt provides it
) : ViewModel() {
    val query = saved.getStateFlow("q", "")
}

// Common mistake: @HiltViewModel without @Inject constructor
@HiltViewModel
class BrokenViewModel(private val repo: UserRepository) : ViewModel()
// Build error: @HiltViewModel must have @Inject constructor
// Hilt can't generate a factory without knowing how to create it
- @HiltViewModel generates a ViewModelProvider.Factory at compile time — no reflection
- Generated factory uses Provider<T> for each constructor parameter
- SavedStateHandle: automatically provided by Hilt's ViewModelComponent — just declare it
- @Inject constructor is mandatory — Hilt needs it to know the constructor signature
- hiltViewModel() in Compose simply retrieves the Hilt-generated factory from the component
"@HiltViewModel generates a ViewModelProvider.Factory at compile time using Provider<T> for each dependency. This is why it's type-safe and fast — no reflection, no runtime graph traversal. The factory is wired into the Activity/Fragment's ViewModelStore automatically by @AndroidEntryPoint."
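On the Compose side, the retrieval mentioned above is a one-liner. A minimal sketch, assuming the hilt-navigation-compose artifact and the SearchViewModel from the example; SearchBar is a hypothetical composable introduced only for illustration:

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.hilt.navigation.compose.hiltViewModel

// hiltViewModel() locates the Hilt-generated factory via the nearest
// @AndroidEntryPoint ViewModelStoreOwner — no factory boilerplate at the call site
@Composable
fun SearchScreen(viewModel: SearchViewModel = hiltViewModel()) {
    val query by viewModel.query.collectAsState()
    SearchBar(query = query) // hypothetical composable
}
```

The default-parameter idiom keeps the screen testable: tests can pass a fake ViewModel directly instead of relying on Hilt.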
Large Hilt graphs slow down because KSP/Dagger must process and validate the entire component tree. Strategic module restructuring and build configuration tuning are the main levers.
// Step 1: Measure annotation processing time
// ./gradlew :app:kspDebugKotlin --profile
// Check build/reports/profile — how long does KSP take?

// Step 2: KAPT → KSP (if not done yet)
// kapt("com.google.dagger:hilt-compiler:2.51") → slow
// ksp("com.google.dagger:hilt-compiler:2.51") → 30-50% faster

// Step 3: Reduce @Singleton overuse
// Every @Singleton binding is part of the app-level component
// Dagger validates the ENTIRE Singleton graph on every build
// Move to narrower scopes where possible:
@ViewModelScoped // only in ViewModelComponent
@ActivityScoped  // only in ActivityComponent
// Fewer @Singleton bindings = smaller graph to validate

// Step 4: Split large @InstallIn(SingletonComponent) modules
// Instead of one giant AppModule with 30 @Provides:
@Module @InstallIn(SingletonComponent::class)
object NetworkModule { /* ... */ }

@Module @InstallIn(SingletonComponent::class)
object DatabaseModule { /* ... */ }

@Module @InstallIn(SingletonComponent::class)
abstract class RepoModule { /* ... */ }
// Dagger processes modules in parallel — split enables more parallelism

// Step 5: Gradle optimisations — gradle.properties
ksp.incremental=true      // KSP incremental processing
kapt.incremental.apt=true // incremental processing for any remaining KAPT
org.gradle.caching=true   // cache KSP outputs

// Step 6: Avoid @InstallIn on test modules in production source
// Test modules compiled into production = larger validation graph
- KAPT→KSP: first thing to try — 30-50% reduction in annotation processing time
- KSP incremental: only reprocesses changed files — huge win on incremental builds
- Reduce @Singleton scope: fewer singleton bindings = smaller validation graph
- Split large modules: Dagger can process multiple smaller modules in parallel
- Gradle caching: KSP outputs cached — unchanged modules never reprocessed
The biggest win after KSP migration is incremental KSP: ksp.incremental=true means only changed files trigger reprocessing. Without it, any module change forces full KSP re-run on that module. Combined with Gradle caching, unchanged modules are never touched.
Hilt's components form a strict hierarchy — child components can access all bindings from parent components, but parents can't access child bindings. This enforces proper scoping and lifetime management.
// Hilt Component Hierarchy:
// SingletonComponent (Application)
// ├── ServiceComponent
// └── ActivityRetainedComponent
//     ├── ViewModelComponent
//     └── ActivityComponent
//         └── FragmentComponent
//             └── ViewWithFragmentComponent

// Inheritance: child can use parent's bindings
// OkHttpClient is @Singleton → accessible in ViewModelComponent
@HiltViewModel
class UserViewModel @Inject constructor(
    private val client: OkHttpClient // @Singleton — accessible from child
) : ViewModel()

// Parent CAN'T use child's bindings
// @ActivityScoped in a @Singleton-installed module → BUILD ERROR
@Module
@InstallIn(SingletonComponent::class) // ❌
object WrongModule {
    @Provides @ActivityScoped // SCOPE MISMATCH — Activity dies before Singleton
    fun provideHelper(): ActivityHelper = ActivityHelper()
}

// ViewModelComponent — child of ActivityRetainedComponent, sibling of ActivityComponent
// @ViewModelScoped deps are NOT shared between ViewModels
@Module
@InstallIn(ViewModelComponent::class)
abstract class ViewModelModule {
    @Binds @ViewModelScoped
    abstract fun bindRepo(impl: UserRepositoryImpl): UserRepository
}
// UserViewModel gets its OWN UserRepositoryImpl
// ProductViewModel gets its OWN UserRepositoryImpl
// They are NOT shared — correct for stateful repos

// ActivityRetainedComponent — survives rotation
// Parent of both ViewModelComponent AND ActivityComponent
// @ActivityRetainedScoped = shared across ViewModel + Activity of same instance
- Hierarchy: Singleton → ActivityRetained → (ViewModel | Activity → Fragment)
- Child inherits parent: ViewModel can inject @Singleton OkHttpClient
- Parent can't use child: Singleton can't access @ActivityScoped — Activity may be dead
- @ViewModelScoped: each ViewModel gets its own instance — not shared between ViewModels
- Scope mismatch = build error: Hilt catches this at compile time
The interview test: "Can a @Singleton dependency inject an @ActivityScoped dependency?" No — Singleton lives longer than Activity, so this would be a scope mismatch. Hilt catches it at build time. The rule: a dependency can only be injected into components of equal or narrower lifetime.
Hilt's DI graph is synchronous — @Provides methods run on whatever thread requests injection (often the main thread), and injection blocks until every dependency is constructed. Async-init SDKs need a wrapper pattern that bridges async init with synchronous DI.
// Problem: SDK initialises asynchronously
class SomeSDK {
    companion object {
        fun initialise(ctx: Context, callback: (SomeSDK) -&gt; Unit) { /* async */ }
    }
}

// Solution 1: don't block Application.onCreate — initialise lazily
@HiltAndroidApp
class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // Don't block onCreate — initialise lazily
    }
}

// Solution 2: Deferred / lazy holder pattern
@Singleton // must be scoped — every consumer awaits the SAME Deferred
class SdkHolder @Inject constructor(
    @ApplicationContext private val ctx: Context
) {
    private val _sdk = CompletableDeferred&lt;SomeSDK&gt;()
    val sdk: Deferred&lt;SomeSDK&gt; = _sdk

    fun initialise() {
        SomeSDK.initialise(ctx) { sdk -&gt; _sdk.complete(sdk) }
    }
}

// Inject SdkHolder — it's available immediately (it's a wrapper)
@Singleton
class AnalyticsRepository @Inject constructor(
    private val sdkHolder: SdkHolder
) {
    suspend fun track(event: String) {
        sdkHolder.sdk.await().track(event) // suspends until SDK ready
    }
}

// Kick off init in Application.onCreate:
@HiltAndroidApp
class MyApp : Application() {
    @Inject lateinit var sdkHolder: SdkHolder

    override fun onCreate() {
        super.onCreate()
        sdkHolder.initialise() // starts async init
    }
}
- @Provides is synchronous: runs on the injecting thread (usually main) — never block it with async init
- CompletableDeferred<T>: bridges async callback → coroutine suspend — SDK wrapper
- Holder pattern: inject the holder immediately, await the SDK on first use
- Application.onCreate: kick off async init early — SDK likely ready before first use
- Alternative: App Startup library — schedule initialiser in background thread
The core insight: "DI provides the wrapper immediately. The wrapper suspends internally until the async SDK is ready. Callers just call sdkHolder.sdk.await() — they don't know or care that the SDK was async." This separates the async concern from the DI concern cleanly.
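The App Startup alternative from the bullets can be sketched as below. This is a hedged outline: Initializer.create runs synchronously during app start, so it should only kick off the async work, never wait for it, and SdkInitializer is a name introduced here for illustration (it also assumes the SomeSDK and holder shapes from the example above):

```kotlin
import android.content.Context
import androidx.startup.Initializer

// Hypothetical androidx.startup initializer — fires SomeSDK's async init
// as early as possible, before any Activity exists
class SdkInitializer : Initializer<Unit> {
    override fun create(context: Context) {
        // Start the async init and return immediately; completing the holder's
        // CompletableDeferred in the callback is left as in the wrapper above
        SomeSDK.initialise(context.applicationContext) { /* SDK ready */ }
    }

    // No other initializers need to run first
    override fun dependencies(): List<Class<out Initializer<*>>> = emptyList()
}
```

The initializer is registered in the manifest under InitializationProvider; it complements rather than replaces the holder pattern, since consumers still need a suspendable handle to await readiness.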
Hilt's predefined components cover 95% of Android use cases. Custom components (via @DefineComponent) are needed for unusual lifecycle objects — like a logged-in user session scope.
// Standard Hilt components cover most cases:
// SingletonComponent, ActivityRetainedComponent, ViewModelComponent,
// ActivityComponent, FragmentComponent, ServiceComponent

// Custom component with @DefineComponent — for unusual lifecycles
// Example: UserScope — exists only when user is logged in

// Step 1: Define the scope annotation
@Scope
@Retention(AnnotationRetention.RUNTIME)
annotation class UserScope

// Step 2: Define the component (child of SingletonComponent)
@UserScope
@DefineComponent(parent = SingletonComponent::class)
interface UserComponent {
    @DefineComponent.Builder
    interface Builder {
        fun bindUser(@BindsInstance user: User): Builder
        fun build(): UserComponent
    }
}

// Step 3: Install modules in UserComponent
@Module
@InstallIn(UserComponent::class)
object UserModule {
    @Provides @UserScope
    fun provideUserPrefs(user: User): UserPreferences = UserPreferences(user.id)
}

// Step 4: Manage the component lifecycle
@Singleton
class UserComponentManager @Inject constructor(
    private val builder: UserComponent.Builder
) {
    var userComponent: UserComponent? = null
        private set

    fun login(user: User) {
        userComponent = builder.bindUser(user).build()
    }

    fun logout() {
        userComponent = null // destroys all @UserScope instances
    }
}

// When to create custom components:
// ✅ User session scope — exists only while logged in
// ✅ Flow scope — exists only for a multi-step wizard
// ❌ Most cases — predefined Hilt components are sufficient
- @DefineComponent: creates a custom Hilt component with its own scope and lifetime
- @BindsInstance: inject runtime values (like User object) at component creation time
- Custom scope: all @UserScope objects are created fresh per UserComponent instance
- Logout = null: destroying the UserComponent destroys all scoped instances
- Rarely needed: only for lifecycles not covered by Hilt's predefined components
The user-session scope is the canonical custom component example. "When the user logs in, we create a UserComponent that holds user-specific singletons (preferences, cache). When they log out, we null it out — all scoped objects are garbage collected. No manual cleanup needed."
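One piece the example leaves implicit is how consumers actually read @UserScope bindings out of the custom component. A minimal sketch using Hilt's EntryPoints API, where UserEntryPoint and currentUserPrefs are names introduced here for illustration:

```kotlin
import dagger.hilt.EntryPoint
import dagger.hilt.EntryPoints
import dagger.hilt.InstallIn

// Accessor interface installed in the custom component —
// Hilt generates the implementation that exposes @UserScope bindings
@EntryPoint
@InstallIn(UserComponent::class)
interface UserEntryPoint {
    fun userPreferences(): UserPreferences
}

// Hypothetical call site — only valid while a user is logged in
fun currentUserPrefs(manager: UserComponentManager): UserPreferences {
    val component = checkNotNull(manager.userComponent) { "No user logged in" }
    return EntryPoints.get(component, UserEntryPoint::class.java).userPreferences()
}
```

The null check is the important design point: accessing user-scoped dependencies after logout is a programming error, and failing fast here beats a silent stale-session bug.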
Auth interceptors are a classic DI design problem — the interceptor needs the session token, the session needs the API, and the API needs OkHttp. Circular dependency risk and proper scoping are the challenges.
// AuthInterceptor — gets token from SessionManager
class AuthInterceptor @Inject constructor(
    private val session: dagger.Lazy&lt;SessionManager&gt; // Lazy breaks circular dep!
) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val token = session.get().getToken() // Lazy.get() on first use
        val request = if (token != null) {
            chain.request().newBuilder()
                .header("Authorization", "Bearer $token")
                .build()
        } else chain.request()
        return chain.proceed(request)
    }
}

// Network module — wires it all together
@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {
    @Provides @Singleton
    fun provideOkHttp(authInterceptor: AuthInterceptor): OkHttpClient =
        OkHttpClient.Builder()
            .addInterceptor(authInterceptor)
            .build()
    // AuthInterceptor uses Lazy&lt;SessionManager&gt;
    // SessionManager may depend on OkHttpClient
    // Lazy breaks the cycle: OkHttp built first, SessionManager created later on demand
}

// TokenRefresh interceptor — retry 401 with fresh token
class TokenRefreshInterceptor @Inject constructor(
    private val session: dagger.Lazy&lt;SessionManager&gt;
) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val response = chain.proceed(chain.request())
        if (response.code == 401) {
            response.close()
            val newToken = session.get().refreshToken() // blocking call here is intentional
            val retried = chain.request().newBuilder()
                .header("Authorization", "Bearer $newToken") // re-attach the FRESH token —
                .build()                                      // retrying unchanged would resend the stale header
            return chain.proceed(retried)
        }
        return response
    }
}
- dagger.Lazy<SessionManager>: breaks circular dependency — OkHttp built before SessionManager
- AuthInterceptor: created once when the @Singleton OkHttpClient is built — effectively a singleton, and stateless (reads the token per request)
- 401 refresh interceptor: retry pattern — closes old response before retrying
- @IntoSet pattern: add multiple interceptors via multibindings for cleaner module composition
- Separate auth vs logging interceptors: single responsibility, each testable independently
The Lazy trick for circular dependencies: "OkHttpClient needs AuthInterceptor. AuthInterceptor needs SessionManager. SessionManager needs OkHttpClient." Without Lazy, this is a Dagger build error. dagger.Lazy<SessionManager> breaks the cycle — OkHttp builds first, SessionManager is created lazily on first token read.
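The @IntoSet multibinding pattern from the bullets can be sketched like this. A hedged outline: the module names are illustrative, and it assumes the AuthInterceptor above plus OkHttp's stock HttpLoggingInterceptor:

```kotlin
import dagger.Module
import dagger.Provides
import dagger.hilt.InstallIn
import dagger.hilt.components.SingletonComponent
import dagger.multibindings.IntoSet
import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.logging.HttpLoggingInterceptor
import javax.inject.Singleton

@Module
@InstallIn(SingletonComponent::class)
object InterceptorModule {
    // Each @IntoSet contribution is collected into one Set<Interceptor>
    @Provides @IntoSet
    fun provideAuth(auth: AuthInterceptor): Interceptor = auth

    @Provides @IntoSet
    fun provideLogging(): Interceptor = HttpLoggingInterceptor()
}

@Module
@InstallIn(SingletonComponent::class)
object OkHttpModule {
    @Provides @Singleton
    fun provideOkHttp(
        interceptors: Set<@JvmSuppressWildcards Interceptor> // the assembled set
    ): OkHttpClient = OkHttpClient.Builder()
        .apply { interceptors.forEach(::addInterceptor) }
        .build()
}
```

One caveat worth mentioning in an interview: a Set has no guaranteed iteration order, so if interceptor ordering matters (auth before logging, say), explicit @Provides wiring or an ordered list is safer than multibindings.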
@BindsInstance injects runtime values into the Hilt component graph at construction time — values known only at runtime (like the Application or a User object) that can't be created by Hilt itself.
// Hilt automatically uses @BindsInstance for Application and Context
// That's how @ApplicationContext works — bound via @BindsInstance at app start

// In practice, @BindsInstance is used in custom components (see Q30)
// But you can also inject it via Hilt's Application component setup

// Example: Configuration value known at app start
data class AppConfig(val apiBaseUrl: String, val enableLogging: Boolean)

// Option 1: @Provides it from Application
@Module
@InstallIn(SingletonComponent::class)
object ConfigModule {
    @Provides @Singleton
    fun provideConfig(@ApplicationContext ctx: Context): AppConfig = AppConfig(
        apiBaseUrl = ctx.getString(R.string.api_base_url),
        enableLogging = BuildConfig.DEBUG
    )
}

// Option 2: Custom component with @BindsInstance for user session
@UserScope
@DefineComponent(parent = SingletonComponent::class)
interface UserComponent {
    @DefineComponent.Builder
    interface Builder {
        fun bindUser(@BindsInstance user: LoggedInUser): Builder
        fun build(): UserComponent
    }
}
// LoggedInUser is now available for injection in @UserScope deps

// Option 3: For testing — bind test values
@HiltAndroidTest
@UninstallModules(ConfigModule::class)
class ApiTest {
    @Module
    @InstallIn(SingletonComponent::class)
    object TestConfigModule {
        @Provides
        fun provideTestConfig(): AppConfig = AppConfig(
            apiBaseUrl = "http://localhost:8080", // local test server
            enableLogging = true
        )
    }
}
- @BindsInstance: inject runtime values into the component at creation — not via @Provides
- @ApplicationContext: Hilt uses @BindsInstance internally for Context and Application
- Custom components: primary use case for @BindsInstance (user session, flow-scoped data)
- AppConfig from resources: @Provides from ApplicationContext — clean way to inject config
- Testing: @UninstallModules + replacement module to inject test values
"@BindsInstance is how you inject things that exist at runtime but can't be constructed by Dagger — like the currently logged-in User object. Dagger can't create a User; you give it to the component builder at login time. This is the canonical use case."
Multiple Retrofit instances require qualifiers to distinguish them. The pattern is qualifier annotations + separate @Provides methods — each qualifier represents a different API client configuration.
// Step 1: Define qualifiers for each API
@Qualifier
@Retention(AnnotationRetention.BINARY)
annotation class MainApi

@Qualifier
@Retention(AnnotationRetention.BINARY)
annotation class ThirdPartyApi

// Step 2: Provide each OkHttp + Retrofit combo separately
@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {
    // Main API OkHttp — with auth interceptor
    @Provides @Singleton @MainApi
    fun provideMainOkHttp(authInterceptor: AuthInterceptor): OkHttpClient =
        OkHttpClient.Builder()
            .addInterceptor(authInterceptor)
            .connectTimeout(30, TimeUnit.SECONDS)
            .build()

    // Third-party API OkHttp — different timeout, API key header
    @Provides @Singleton @ThirdPartyApi
    fun provideThirdPartyOkHttp(): OkHttpClient =
        OkHttpClient.Builder()
            .addInterceptor { chain -&gt;
                chain.proceed(
                    chain.request().newBuilder()
                        .header("X-API-Key", BuildConfig.THIRD_PARTY_KEY)
                        .build()
                )
            }
            .connectTimeout(10, TimeUnit.SECONDS)
            .build()

    @Provides @Singleton @MainApi
    fun provideMainRetrofit(@MainApi client: OkHttpClient): Retrofit =
        Retrofit.Builder()
            .client(client)
            .baseUrl("https://api.myapp.com/") // baseUrl must end in '/'
            .addConverterFactory(GsonConverterFactory.create()) // needed for JSON bodies
            .build()

    @Provides @Singleton @ThirdPartyApi
    fun provideThirdPartyRetrofit(@ThirdPartyApi client: OkHttpClient): Retrofit =
        Retrofit.Builder()
            .client(client)
            .baseUrl("https://api.thirdparty.com/")
            .addConverterFactory(GsonConverterFactory.create())
            .build()

    @Provides @Singleton
    fun provideUserApi(@MainApi retrofit: Retrofit): UserApi =
        retrofit.create(UserApi::class.java)

    @Provides @Singleton
    fun provideWeatherApi(@ThirdPartyApi retrofit: Retrofit): WeatherApi =
        retrofit.create(WeatherApi::class.java)
}
- Qualifier per API: @MainApi, @ThirdPartyApi — each wraps a distinct OkHttp+Retrofit pair
- Qualifier propagation: @MainApi OkHttp → @MainApi Retrofit → MainApi endpoints
- API services are unqualified: UserApi is unambiguous since there's only one binding
- Consumers stay clean: repositories inject UserApi or WeatherApi directly — qualifiers never leak out of the network module
- Separate concerns: each API has its own interceptor chain, timeout, and base URL
The qualifier must propagate through the chain: @MainApi OkHttp → @MainApi Retrofit → UserApi (no qualifier needed since there's only one UserApi). The qualifier resolves ambiguity at the OkHttp and Retrofit levels; the concrete API interfaces are unambiguous.
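On the consumer side this looks as follows — a brief sketch where DashboardRepository, getUser, and getForecast are hypothetical names, assuming the UserApi/WeatherApi bindings from the module above:

```kotlin
import javax.inject.Inject

// No qualifiers needed here: each API interface has exactly one binding,
// so the @MainApi / @ThirdPartyApi distinction stays inside the network module
class DashboardRepository @Inject constructor(
    private val userApi: UserApi,       // backed by the @MainApi Retrofit
    private val weatherApi: WeatherApi  // backed by the @ThirdPartyApi Retrofit
) {
    suspend fun load(userId: String): Pair<User, Forecast> =
        userApi.getUser(userId) to weatherApi.getForecast()
}
```

This is the payoff of qualifier propagation: the repository has no idea two separate HTTP stacks exist behind its two dependencies.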
When Android kills the process and restores it, the entire Hilt graph is rebuilt from scratch. @Singleton instances are NEW objects — any state stored in them is lost. Only Bundle-backed state survives.
// Process death: OS kills the process
// Process restoration: OS recreates Activity from back stack

// What HAPPENS to Hilt on process death + restore:
// 1. Application.onCreate() runs again
// 2. @HiltAndroidApp triggers → NEW SingletonComponent created
// 3. ALL @Singleton instances are NEW objects — state LOST
// 4. Activity.onCreate() runs with saved Bundle
// 5. @AndroidEntryPoint → new Activity + Fragment components
// 6. ViewModel IS recreated (ViewModelStore wiped by process death)
// 7. SavedStateHandle IS restored from Bundle — survives!

// WRONG: storing state in a @Singleton
@Singleton
class CartManager @Inject constructor() {
    private val items = mutableListOf&lt;CartItem&gt;() // ❌ LOST on process death!
    fun addItem(item: CartItem) { items.add(item) }
}

// RIGHT: state in Room or DataStore (persisted to disk)
@Singleton
class CartRepository @Inject constructor(
    private val dao: CartDao // Room persists to SQLite — survives process death
) {
    fun observeCart() = dao.observeAll()
    suspend fun addItem(item: CartItem) = dao.insert(item)
}

// SavedStateHandle — survives process death via Bundle
@HiltViewModel
class SearchViewModel @Inject constructor(
    private val saved: SavedStateHandle
) : ViewModel() {
    var query by saved.saveable { mutableStateOf("") } // survives process death
}

// Test process death: Developer Options → Don't keep activities
- Process death: entire Hilt graph rebuilt from scratch — all @Singleton instances are new
- In-memory state: any data stored in @Singleton fields is LOST — use Room or DataStore
- SavedStateHandle: Bundle-backed, survives process death — safe for user-typed state
- ViewModel: also recreated on process death — not just rotation
- Test: "Don't keep activities" in Developer Options simulates process death aggressively
"@Singleton lives as long as the Application process — which is NOT as long as users expect. Android kills the process freely when backgrounded. Any state in a @Singleton field disappears. The rule: if data must survive, it goes in Room, DataStore, or SavedStateHandle — not in a Hilt-managed field."
This is a nuanced technical debate. @Inject is from javax.inject (JSR-330) — a standard annotation not tied to Hilt. But there are legitimate concerns about Hilt-specific annotations leaking into domain code.
// @Inject — NOT Hilt-specific
import javax.inject.Inject // Standard JSR-330 — supported by Dagger, Hilt, Guice, Spring

class UserRepository @Inject constructor(
    private val api: UserApi,
    private val dao: UserDao
)
// This class can be used with Hilt, Dagger, Guice, or manual DI
// Not Hilt-locked — @Inject is a universal DI standard

// These ARE Hilt-specific and should stay out of domain:
// @HiltViewModel — presentation layer only
// @AndroidEntryPoint — Android framework classes only
// @InstallIn — module configuration, not business classes
// @HiltAndroidApp — Application class only

// Domain layer — zero Hilt annotations (correct)
// domain/src/main/kotlin/PlaceOrderUseCase.kt
class PlaceOrderUseCase @Inject constructor( // javax.inject.Inject — standard
    private val orderRepo: OrderRepository,
    private val inventoryRepo: InventoryRepository
) {
    // No Hilt imports — only @Inject from javax
    // Testable without Hilt: PlaceOrderUseCase(FakeOrderRepo(), FakeInventoryRepo())
}

// The colleague's concern applies to:
class DomainUseCase @Inject constructor() {
    @Inject lateinit var repo: UserRepository // ❌ field injection in domain — wrong!
}
// Field injection IS problematic in domain — it's framework-triggered, not standard
// Constructor @Inject = fine everywhere
// Field @Inject = only in Android framework classes
- @Inject is the JSR-330 standard — not Hilt-specific; Dagger, Hilt, Guice, and Spring all understand it
- Hilt-specific annotations: @HiltViewModel, @AndroidEntryPoint, @InstallIn — these should stay in presentation/framework layers
- Domain classes: @Inject constructor is fine — zero Hilt lock-in
- Field injection in domain: wrong — that's framework-dependent and the real concern
- Testability unchanged: class with @Inject constructor is instantiated manually in tests — no Hilt needed
"Your colleague is partially right — Hilt-specific annotations like @HiltViewModel shouldn't leak into domain code. But @Inject is JSR-330 standard — it's been around since 2009. PlaceOrderUseCase with @Inject constructor is no more 'Hilt-coupled' than a Kotlin data class is 'JVM-coupled'."
DI testing has three levels matching the testing pyramid — each validates a different aspect of the dependency graph, from unit tests that don't use Hilt at all to integration tests that validate the full graph.
// LEVEL 1: Unit tests — no Hilt, constructor injection with fakes
class GetUserUseCaseTest {
    private val fakeRepo = FakeUserRepository()
    private val useCase = GetUserUseCase(fakeRepo) // direct construction

    @Test
    fun returnsUser() = runTest {
        fakeRepo.setUser(User("1", "Alice"))
        assertEquals("Alice", useCase("1").name)
    }
}

// LEVEL 2: Integration tests — partial Hilt, replace specific modules
@HiltAndroidTest
@RunWith(AndroidJUnit4::class)
class UserRepositoryIntegrationTest {
    @get:Rule val hiltRule = HiltAndroidRule(this)
    @Inject lateinit var repo: UserRepository

    @Before fun setUp() { hiltRule.inject() }

    @Test
    fun repoReturnsUser() = runTest {
        // Uses real Room (in-memory) + fake API via @TestInstallIn
        val user = repo.getUser("1")
        assertNotNull(user)
    }
}

// LEVEL 3: Graph validation test — verify the full DI graph compiles correctly
@HiltAndroidTest
@RunWith(AndroidJUnit4::class)
class HiltGraphValidationTest {
    @get:Rule val hiltRule = HiltAndroidRule(this)
    @Inject lateinit var mainRepo: UserRepository
    @Inject lateinit var analytics: AnalyticsTracker
    @Inject lateinit var session: SessionManager

    @Before fun setUp() { hiltRule.inject() }

    @Test
    fun allDependenciesProvided() {
        assertNotNull(mainRepo)
        assertNotNull(analytics)
        assertNotNull(session)
        // Just verifying injection succeeded — graph is valid
    }
}
// This test catches: missing bindings, scope mismatches, circular deps
// that only appear at runtime (e.g. when a module is conditionally included)
- Level 1 (unit): no Hilt — construct classes directly, fast, no Android needed
- Level 2 (integration): partial Hilt with @TestInstallIn — test real interactions with controlled data
- Level 3 (graph validation): inject all major dependencies — verify the full graph is wired correctly
- Graph validation test: catches missing bindings in optional/conditional modules
- Hilt catches most errors at build time — runtime graph tests for dynamic/conditional modules
Most Hilt errors are build-time — Dagger validates the graph at compile time. But conditional modules (enabled per flavor or environment) may have a valid graph in one configuration and missing bindings in another. A graph validation test in each test variant catches this early in CI.
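To see why a runtime graph test adds value on top of compile-time validation, here is a toy DI container (all names hypothetical, nothing to do with Hilt's API): the container only discovers a missing binding when that binding is actually requested, which mirrors how conditionally-included wiring can look fine in one configuration and fail in another.

```kotlin
// Toy container: bindings registered by name, resolved lazily.
class Container {
    private val bindings = mutableMapOf<String, () -> Any>()
    fun provide(name: String, factory: () -> Any) { bindings[name] = factory }
    fun resolve(name: String): Any =
        bindings[name]?.invoke() ?: error("Missing binding: $name")
}

fun main() {
    val release = Container().apply {
        provide("api") { "RealApi" }
        // "analytics" wiring conditionally excluded in this config
    }
    release.resolve("api") // fine — this binding exists
    // Only an actual resolution attempt surfaces the gap:
    val failure = runCatching { release.resolve("analytics") }
    println(failure.exceptionOrNull()?.message) // Missing binding: analytics
}
```

A graph validation test plays the role of that final `resolve` call: it forces every major binding to be requested in each build variant, so a gap fails in CI instead of in production.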
Full Hilt graph initialisation in every UI test is expensive. Strategic use of test isolation, fake modules, and test scoping significantly reduces setup time.
// Problem: full graph built per test class — slow!
// Each @HiltAndroidTest class creates a new Application with the full graph

// Strategy 1: Shared @TestInstallIn modules — apply globally, not per class
// Put in src/androidTest/java/ — applies to ALL test classes automatically
@TestInstallIn(
    components = [SingletonComponent::class],
    replaces = [NetworkModule::class]
)
@Module
object FakeNetworkModule {
    @Provides @Singleton
    fun provideApi(): UserApi = FakeUserApi() // instant, no HTTP
}
// Eliminates real network setup time for ALL tests

// Strategy 2: Use a test dispatcher — avoids real delays
@TestInstallIn(components = [SingletonComponent::class], replaces = [DispatcherModule::class])
@Module
object TestDispatcherModule {
    @Provides @Singleton @IoDispatcher
    fun provideIo(): CoroutineDispatcher = UnconfinedTestDispatcher()
}

// Strategy 3: Split test classes — avoid one giant test class
// Hilt rebuilds the graph per test CLASS, not per test METHOD
// Group tests that share the same module overrides

// Strategy 4: In-memory Room — faster than real SQLite setup
@TestInstallIn(components = [SingletonComponent::class], replaces = [DatabaseModule::class])
@Module
object TestDatabaseModule {
    @Provides @Singleton
    fun provideDb(@ApplicationContext ctx: Context): AppDatabase =
        Room.inMemoryDatabaseBuilder(ctx, AppDatabase::class.java)
            .allowMainThreadQueries() // allowed in tests
            .build()
}

// Strategy 5: Don't use @HiltAndroidTest for unit-level logic
// Only use it for tests that need the full Activity + Compose stack
- @TestInstallIn globally: replace expensive modules (real network, real DB) for ALL tests at once
- FakeApi: instant responses — eliminates HTTP round-trip overhead per test
- TestDispatcher: eliminates real delay() waits — tests run instantly
- In-memory Room: faster than file-backed SQLite — no disk I/O setup
- Split test classes: Hilt rebuilds per class — grouping related tests reduces graph rebuilds
The biggest win: put FakeNetworkModule + TestDatabaseModule + TestDispatcherModule in src/androidTest/ with @TestInstallIn. They apply to ALL @HiltAndroidTest classes automatically. Eliminating real network and disk I/O typically cuts test run time by 50-70%.
Hilt + Compose Navigation integration lets you scope ViewModels to a NavGraph — shared across multiple screens within a flow. This is the correct pattern for multi-step flows like onboarding or checkout.
// Single screen ViewModel — default, scoped to the composable destination
@Composable
fun HomeScreen(vm: HomeViewModel = hiltViewModel()) {
    // vm is scoped to the HomeScreen destination
    // Destroyed when navigating away
}

// NavGraph-scoped ViewModel — shared across multiple destinations in a nested graph
// Step 1: Define the nested NavGraph
fun NavGraphBuilder.checkoutGraph(navController: NavController) {
    navigation(startDestination = "cart", route = "checkout") {
        composable("cart") { entry ->
            val parentEntry = remember(entry) {
                navController.getBackStackEntry("checkout") // NavGraph entry
            }
            val checkoutVm: CheckoutViewModel = hiltViewModel(parentEntry)
            CartScreen(checkoutVm)
        }
        composable("payment") { entry ->
            val parentEntry = remember(entry) {
                navController.getBackStackEntry("checkout")
            }
            val checkoutVm: CheckoutViewModel = hiltViewModel(parentEntry)
            // SAME CheckoutViewModel as CartScreen — shared state!
            PaymentScreen(checkoutVm)
        }
    }
}

// In multi-module: CheckoutViewModel in :feature:checkout
@HiltViewModel
class CheckoutViewModel @Inject constructor(
    private val cartRepo: CartRepository,
    private val paymentRepo: PaymentRepository,
    private val saved: SavedStateHandle
) : ViewModel() {
    var selectedAddress by mutableStateOf<Address?>(null)
    var selectedPayment by mutableStateOf<PaymentMethod?>(null)
}
// CheckoutViewModel lives for the entire "checkout" NavGraph lifetime
- hiltViewModel(): scopes ViewModel to the current destination — destroyed on navigation away
- hiltViewModel(parentEntry): scopes to the NavGraph — shared across all destinations in the graph
- navController.getBackStackEntry("route"): retrieves the NavGraph back stack entry
- remember(entry): prevents re-lookup on recomposition — always wrap getBackStackEntry calls in remember
- Multi-module: CheckoutViewModel in :feature:checkout, accessed from any destination in that graph
hiltViewModel(parentEntry) is the answer to "how do I share state between onboarding/checkout steps?" The ViewModel lives as long as the NavGraph, not the individual screen. When the user completes or backs out of the flow, the ViewModel is destroyed and its resources are released.
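The mechanism can be shown without any Android dependency. In this toy sketch (hypothetical names, not the Hilt or Navigation API), both steps receive the same flow-scoped state holder, so a choice made in the cart step is visible in the payment step, and dropping the holder at the end of the flow releases all step state at once:

```kotlin
// Flow-scoped state holder — one instance per checkout flow,
// like a NavGraph-scoped ViewModel.
class CheckoutState {
    var address: String? = null
    var paymentMethod: String? = null
}

class CartStep(private val state: CheckoutState) {
    fun chooseAddress(a: String) { state.address = a }
}

class PaymentStep(private val state: CheckoutState) {
    fun summary(): String = "${state.address} / ${state.paymentMethod ?: "unpaid"}"
}

fun main() {
    val flowState = CheckoutState()            // created when the flow starts
    CartStep(flowState).chooseAddress("Home")  // step 1 writes
    println(PaymentStep(flowState).summary())  // step 2 reads: Home / unpaid
    // flowState goes out of scope when the flow ends — all state released
}
```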
Unbounded @Singleton caches are a classic memory problem. The DI solution involves scope narrowing, eviction policies, and WeakReference strategies — each with different trade-offs.
// Problem: @Singleton cache that grows forever
@Singleton
class ImageCache @Inject constructor() {
    private val cache = mutableMapOf<String, Bitmap>() // ❌ grows forever!
    fun put(key: String, bitmap: Bitmap) { cache[key] = bitmap }
}

// Fix 1: LRU cache with an eviction policy
@Singleton
class LruImageCache @Inject constructor() {
    private val maxBytes = (Runtime.getRuntime().maxMemory() / 8).toInt()
    private val cache = object : LruCache<String, Bitmap>(maxBytes) {
        override fun sizeOf(key: String, value: Bitmap) = value.byteCount
    }
}

// Fix 2: Narrow scope — ActivityRetainedScoped instead of Singleton
// Cache per Activity instance — cleared on Activity finish
@Module
@InstallIn(ActivityRetainedComponent::class)
object CacheModule {
    @Provides @ActivityRetainedScoped
    fun provideImageCache(): ImageCache = ImageCache()
}
// Cache cleared when the user navigates away from the Activity

// Fix 3: Delegate to Coil/Glide — use a battle-tested library cache
@Provides @Singleton
fun provideImageLoader(@ApplicationContext ctx: Context): ImageLoader =
    ImageLoader.Builder(ctx)
        .memoryCache { MemoryCache.Builder(ctx).maxSizePercent(0.25).build() }
        .build()
// Coil manages its own cache with proper eviction

// Fix 4: WeakReference values — GC can evict entries under memory pressure
private val cache = mutableMapOf<String, WeakReference<Bitmap>>()
- LruCache: size-bounded cache with automatic eviction — the right tool for image caches
- Scope narrowing: @ActivityRetainedScoped — cache cleared when Activity is finished
- Library delegation: Coil/Glide handle cache management correctly — don't reinvent it
- WeakReference: GC-friendly — entries evicted under memory pressure automatically
- Diagnosis: Android Studio Memory Profiler → Heap Dump → look for unexpectedly large objects
The DI insight: changing from @Singleton to @ActivityRetainedScoped in ONE @Provides method fixes the memory leak — no other code changes needed. This is DI's power: scope management is centralised. In a non-DI codebase you'd have to hunt through 20 files to find all usages.
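The LRU eviction behaviour that `LruCache` provides can be sketched in plain JVM Kotlin: `LinkedHashMap` with `accessOrder = true` keeps entries in least-recently-used order, and overriding `removeEldestEntry` bounds the size. This is an illustrative toy, not Android's `LruCache` implementation:

```kotlin
// Bounded cache: evicts the least-recently-used entry once maxEntries is exceeded.
class BoundedCache<K, V>(private val maxEntries: Int) {
    // accessOrder = true → iteration order is least-recently-accessed first
    private val map = object : LinkedHashMap<K, V>(16, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<K, V>) =
            size > maxEntries
    }
    fun put(k: K, v: V) { map[k] = v }
    fun get(k: K): V? = map[k]
    val size get() = map.size
}

fun main() {
    val cache = BoundedCache<String, Int>(2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")        // touch "a" — now "b" is least recently used
    cache.put("c", 3)     // exceeds capacity → "b" is evicted
    check(cache.get("b") == null)
    check(cache.get("a") == 1)
    println(cache.size)   // 2 — bounded, unlike the unbounded mutableMapOf cache
}
```

The same idea, plus byte-level sizing (`sizeOf`), is what Android's `LruCache` gives you out of the box.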
App Startup runs initializers before Hilt's component is built — meaning Hilt-injected dependencies aren't available in Startup Initializer classes. Understanding this interaction prevents subtle initialisation bugs.
// Lifecycle order:
// 1. ContentProvider.onCreate() — App Startup runs here
// 2. Application.onCreate()     — Hilt builds the graph here (@HiltAndroidApp)
// 3. Activity.onCreate()        — Hilt injects fields here (@AndroidEntryPoint)

// ❌ WRONG: using Hilt injection in an Initializer
class AnalyticsInitializer : Initializer<Analytics> {
    @Inject lateinit var config: AppConfig // ❌ Hilt not ready yet!
    override fun create(ctx: Context): Analytics {
        // config not injected — NPE at runtime
        return Analytics(config.apiKey)
    }
}

// ✅ CORRECT: Initializer creates its own dependencies from the Context
class AnalyticsInitializer : Initializer<FirebaseAnalytics> {
    override fun create(ctx: Context): FirebaseAnalytics {
        FirebaseApp.initializeApp(ctx) // initialise from Context — no DI
        return Firebase.analytics.apply { setAnalyticsCollectionEnabled(true) }
    }
    override fun dependencies() = emptyList<Class<out Initializer<*>>>()
}

// ✅ Then provide from a Hilt module (after Hilt is ready)
@Module
@InstallIn(SingletonComponent::class)
object AnalyticsModule {
    @Provides @Singleton
    fun provideFirebaseAnalytics(): FirebaseAnalytics =
        Firebase.analytics // already initialised by App Startup
}
// App Startup initialises Firebase → Hilt provides it to the graph
- ContentProvider runs before Application.onCreate: App Startup happens before Hilt is ready
- No @Inject in Initializer: Hilt graph doesn't exist yet — use Context only
- Two-phase init: App Startup initialises the SDK, Hilt provides it to the graph afterward
- Dependency ordering: Initializer.dependencies() controls Startup order, independent of Hilt
- WorkManager conflict: disable auto-init if using HiltWorkerFactory (see Q19)
"App Startup runs in a ContentProvider, which executes before Application.onCreate(). Hilt builds its graph in Application.onCreate(). So any Initializer that tries to use @Inject will crash with NPE — the graph doesn't exist yet. Initializers must be self-contained; Hilt picks up their products afterward."
Database configuration is a perfect use case for Hilt's source-set module pattern combined with build type configuration — test gets in-memory, release gets WAL mode with encryption.
// Interface for database configuration
interface DatabaseConfig {
    val useInMemory: Boolean
    val enableWalMode: Boolean
    val encryptionKey: String?
}

// src/main/java/DatabaseModule.kt — shared builder logic
@Module
@InstallIn(SingletonComponent::class)
object DatabaseModule {
    @Provides @Singleton
    fun provideDatabase(
        @ApplicationContext ctx: Context,
        config: DatabaseConfig // injected — bound per build type
    ): AppDatabase {
        val builder =
            if (config.useInMemory) Room.inMemoryDatabaseBuilder(ctx, AppDatabase::class.java)
            else Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db")
        if (config.enableWalMode) builder.setJournalMode(JournalMode.WRITE_AHEAD_LOGGING)
        config.encryptionKey?.let { builder.openHelperFactory(SupportFactory(it.toByteArray())) }
        return builder.build()
    }
}

// src/release/java/DatabaseConfigModule.kt
@Module
@InstallIn(SingletonComponent::class)
object DatabaseConfigModule {
    @Provides @Singleton
    fun provideConfig(): DatabaseConfig = object : DatabaseConfig {
        override val useInMemory = false
        override val enableWalMode = true
        override val encryptionKey = BuildConfig.DB_ENCRYPTION_KEY
    }
}

// src/androidTest/java/TestDatabaseConfigModule.kt
@TestInstallIn(components = [SingletonComponent::class], replaces = [DatabaseConfigModule::class])
@Module
object TestDatabaseConfigModule {
    @Provides
    fun provideConfig(): DatabaseConfig = object : DatabaseConfig {
        override val useInMemory = true
        override val enableWalMode = false
        override val encryptionKey = null
    }
}
- DatabaseConfig interface: separates configuration from construction — testable and swappable
- Shared builder logic: DatabaseModule uses the injected config — same code for all variants
- Source set modules: release gets WAL + encryption, tests get in-memory via @TestInstallIn
- WAL mode: Write-Ahead Logging improves concurrent read performance in production
- Encryption key: from BuildConfig — never hardcoded in the source file
The elegant part: DatabaseModule is identical in all variants. Only DatabaseConfig changes. This keeps the complex Room builder code in one place while allowing full environment-specific configuration. Adding a new configuration option = add to the interface + update the two concrete implementations.
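The core of the pattern — one builder, many configs — can be demonstrated in plain Kotlin. This toy (hypothetical names, no Room involved) shows how the shared builder logic reads only the injected config, so variants differ in data, not code:

```kotlin
// Config object — the ONLY thing that differs between build variants.
data class DbConfig(val inMemory: Boolean, val wal: Boolean)

// Shared "builder" logic — identical for every variant, driven by the config.
fun buildDbDescription(cfg: DbConfig): String = buildString {
    append(if (cfg.inMemory) "in-memory" else "file-backed")
    if (cfg.wal) append(" +WAL")
}

fun main() {
    val release = DbConfig(inMemory = false, wal = true)  // src/release config
    val test = DbConfig(inMemory = true, wal = false)     // src/androidTest config
    println(buildDbDescription(release)) // file-backed +WAL
    println(buildDbDescription(test))    // in-memory
}
```

Adding a new option (say, an encryption flag) means extending `DbConfig` and the builder once; every variant's config object then states its own value explicitly.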
Hilt's @Singleton and the classic Singleton pattern both ensure one instance, but differ fundamentally in how the instance is created, shared, and replaced. Hilt's approach is superior for testability and maintainability.
// Classic Singleton pattern — global static instance
object UserRepositoryInstance {
    val instance = UserRepositoryImpl(RetrofitApi(), RoomDatabase())
    // Problems:
    // 1. Hard to test — can't replace with a fake
    // 2. Hidden dependencies — creates its own Retrofit and Room
    // 3. Lives forever — never garbage collected
    // 4. Thread safety — lazy-init variants must handle it manually
    // 5. No lifecycle — no way to "reset" between tests
}

// Hilt @Singleton — managed single instance
@Module
@InstallIn(SingletonComponent::class)
abstract class RepoModule {
    @Binds @Singleton
    abstract fun bindRepo(impl: UserRepositoryImpl): UserRepository
}
// Hilt @Singleton advantages:
// ✅ Testable: @TestInstallIn replaces it with a fake
// ✅ Explicit deps: the constructor shows all dependencies
// ✅ Lifecycle: destroyed when the Application process is killed
// ✅ Thread safe: Dagger uses double-checked locking under the hood
// ✅ Interface: binds to an interface, not a concrete class

// Common pitfall: @Singleton when you want @ActivityRetainedScoped
// Result: state leaks between user sessions if the app isn't fully restarted
@Singleton // ❌ should be @ActivityRetainedScoped for per-user state
class UserPreferencesCache @Inject constructor() {
    var cachedPrefs: UserPrefs? = null // previous user's prefs still cached!
}
// Rule: @Singleton only for truly stateless services or app-level resources
// Anything with user state → @ActivityRetainedScoped or narrower
- Classic Singleton: static object — untestable, hidden deps, lives forever, no lifecycle
- Hilt @Singleton: Dagger-managed — testable, explicit deps, tied to Application lifecycle
- Thread safety: Hilt generates double-checked locking — classic singletons must handle manually
- Interface binding: Hilt @Singleton binds to interface — classic singletons expose concrete class
- State leak pitfall: @Singleton with user state — previous user's data visible to next user
The state leak is a real production bug: user A logs out, user B logs in — @Singleton UserPreferencesCache still holds user A's prefs. Fix: move user-specific state to @ActivityRetainedScoped and clear it on logout. DI makes this a one-line scope change.
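The leak mechanism is pure object lifetime, so it can be reproduced in plain Kotlin. In this toy sketch (hypothetical names; an app-lifetime `val` stands in for `@Singleton`, a fresh instance per session stands in for `@ActivityRetainedScoped`):

```kotlin
class PrefsCache {
    var cachedUser: String? = null
}

// App-lifetime instance — the analogue of a @Singleton binding.
val singletonCache = PrefsCache()

fun main() {
    // Session 1: user A logs in
    singletonCache.cachedUser = "alice"

    // User A logs out, user B logs in — but the app-lifetime instance survives:
    check(singletonCache.cachedUser == "alice") // ❌ user A's data leaks into session 2

    // Session-scoped instance — the analogue of @ActivityRetainedScoped:
    // a new object is created for the new session, so state starts clean.
    val sessionCache = PrefsCache()
    check(sessionCache.cachedUser == null) // ✅ fresh state for user B
    println("leak reproduced, scoped fix verified")
}
```

The DI fix is exactly this: narrowing the scope makes the framework hand out a fresh instance per session instead of reusing the app-lifetime one.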
Library modules should not force DI frameworks on consumers. The correct pattern is to expose factory functions or constructor injection while optionally providing a Hilt module as a separate artifact.
// Library design — DI-framework agnostic

// :library:analytics — the actual library
// Only uses javax.inject.Inject (standard, not Hilt-specific)
class AnalyticsTracker @Inject constructor(
    private val apiKey: String,
    private val dispatcher: CoroutineDispatcher = Dispatchers.IO
) {
    fun track(event: String) { /* ... */ }
}

// Library also provides a factory for non-DI consumers
object AnalyticsTrackerFactory {
    fun create(apiKey: String): AnalyticsTracker = AnalyticsTracker(apiKey)
}

// :library:analytics-hilt — separate optional artifact
// Consumers who use Hilt can include this:
// implementation("com.mylib:analytics-hilt:1.0")
@Module
@InstallIn(SingletonComponent::class)
object AnalyticsHiltModule {
    @Provides @Singleton
    fun provideAnalytics(@ApplicationContext ctx: Context): AnalyticsTracker =
        AnalyticsTrackerFactory.create(ctx.getString(R.string.analytics_key))
}

// Consumer without Hilt — uses the factory directly
val analytics = AnalyticsTrackerFactory.create(apiKey = "my-key")

// Consumer with Hilt — includes the analytics-hilt artifact
@HiltViewModel
class HomeViewModel @Inject constructor(
    private val analytics: AnalyticsTracker // auto-provided by analytics-hilt
) : ViewModel()
- Library core: uses only javax.inject.Inject — works with any DI framework or no framework
- Factory pattern: provides factory functions for non-DI consumers
- Optional Hilt artifact: separate artifact with @Module — consumers opt-in
- Two artifacts: :analytics (DI-agnostic) and :analytics-hilt (Hilt bindings) — consumer chooses
- This is the pattern used by Jetpack libraries like WorkManager: a DI-agnostic core plus an optional Hilt integration artifact
This is exactly how WorkManager works: work-runtime is DI-agnostic, and androidx.hilt:hilt-work is the optional Hilt integration. Your library core shouldn't know about Hilt. Put the @Module in a separate artifact named -hilt. Consumers who use Hilt get the integration; others use the factory. No one is forced into Hilt.
@DisableInstallInCheck suppresses Hilt's requirement that every @Module must have @InstallIn. It's needed when sharing module classes between Hilt and non-Hilt DI contexts — like in library modules or Kotlin Multiplatform code.
// Normal Hilt requirement: every @Module must have @InstallIn
@Module // ❌ Build error: @Module must have @InstallIn or use @DisableInstallInCheck
object SomeModule { /* ... */ }

// @DisableInstallInCheck — tells Hilt to skip the check for this module
@DisableInstallInCheck
@Module
object SharedModule {
    @Provides fun provideParser(): JsonParser = MoshiJsonParser()
}
// This module can be included by BOTH Hilt and standard Dagger components

// Use case 1: Library with Dagger + optional Hilt support
// The library uses vanilla Dagger internally
// App consumers can include it in their Hilt component with @InstallIn

// Use case 2: Shared module between test and production components
@DisableInstallInCheck
@Module
object JsonModule {
    @Provides fun provideMoshi(): Moshi = Moshi.Builder().build()
}

// Used in production via @InstallIn:
@Module(includes = [JsonModule::class])
@InstallIn(SingletonComponent::class)
object ProductionModule

// Used in tests via @TestInstallIn:
@Module(includes = [JsonModule::class])
@TestInstallIn(components = [SingletonComponent::class], replaces = [ProductionModule::class])
object TestModule

// When to use @DisableInstallInCheck:
// ✅ Library modules shared between Hilt and non-Hilt consumers
// ✅ Modules included by other modules (not installed directly)
// ✅ Migration from Dagger to Hilt — intermediate state
// ❌ Production app modules — always use @InstallIn properly
- @DisableInstallInCheck: opts a @Module out of Hilt's @InstallIn requirement
- Primary use: library modules that need to work with both Hilt and vanilla Dagger
- Module inclusion: a @DisableInstallInCheck module can be included by a @InstallIn module
- Migration aid: during Dagger → Hilt migration, legacy modules can use this temporarily
- Rare in app code: production app modules should always have @InstallIn
"@DisableInstallInCheck is a library author's tool, not a production app tool. If you find yourself using it in an app module, it's a signal that something in the architecture needs rethinking. The typical correct use: a utility module (JSON parsing, date formatting) that can be included in either Hilt or non-Hilt components."
A DI design review tests whether you can systematically identify anti-patterns, explain the consequences, and propose clean alternatives — the senior developer lens.
// Pattern A: @Singleton holding a coroutine scope
@Singleton
class DataSyncService @Inject constructor() {
    private val scope = CoroutineScope(Dispatchers.IO) // ❌ never cancelled!
}
// REJECT: the scope's coroutines can outlive any meaningful lifecycle
// Fix: inject an application-level CoroutineScope(SupervisorJob() + Dispatchers.IO)
// provided by a DI module — explicit, centrally owned, replaceable in tests

// Pattern B: ViewModel directly constructing UseCases
@HiltViewModel
class UserViewModel @Inject constructor(private val repo: UserRepository) : ViewModel() {
    private val getUser = GetUserUseCase(repo) // ❌ VM creating its own deps
}
// REJECT: use cases should be injected — this prevents mocking/testing the use case independently
// Fix: @HiltViewModel constructor(private val getUser: GetUserUseCase)

// Pattern C: @Provides that catches and swallows exceptions
@Provides @Singleton
fun provideAnalytics(): Analytics? =
    try { Firebase.analytics } catch (e: Exception) { null } // ❌ silent failure!
// REJECT: injection failure should crash loudly — never silently return null from @Provides
// Fix: use a no-op pattern (NoOpAnalytics) or let it throw to surface the error early

// Pattern D: constructor injection everywhere in domain and data
class PlaceOrderUseCase @Inject constructor(private val repo: OrderRepository)
class OrderRepositoryImpl @Inject constructor(private val api: OrderApi) : OrderRepository
// ✅ APPROVE: clean, testable, standard pattern

// Pattern E: complex initialisation inside a @Provides method
@Module
@InstallIn(SingletonComponent::class)
object SecurityModule {
    @Provides @Singleton
    fun provideKeyStore(): KeyStore =
        KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
}
// ✅ APPROVE: complex initialisation belongs in @Provides, not in field initialisers
- Pattern A: REJECT — coroutine scope in @Singleton never cancelled, potential memory leak
- Pattern B: REJECT — ViewModel creating use cases bypasses DI, untestable, breaks DI contract
- Pattern C: REJECT — silent null injection hides init failures, propagates NPE later
- Pattern D: APPROVE — correct constructor injection, clean, testable
- Pattern E: APPROVE — complex KeyStore init in @Provides is exactly right
Design reviews test pattern recognition. Walk through each systematically: "A — scope leak. B — VM creating its own deps breaks DI. C — silent failure hides bugs. D — correct. E — correct." Explaining the consequence (not just 'wrong') shows why the pattern matters in production.
BroadcastReceiver, ContentProvider, and Service each have unique lifecycle quirks — particularly ContentProvider which initialises before the Hilt graph. Each requires a different approach.
// Service — @AndroidEntryPoint works normally
@AndroidEntryPoint
class SyncService : Service() {
    @Inject lateinit var syncRepo: SyncRepository // ✅ injected in onCreate
    override fun onCreate() { super.onCreate() }  // injection happens here
}

// BroadcastReceiver — @AndroidEntryPoint works, but the receiver is STATELESS
// CRITICAL: a BroadcastReceiver is NOT guaranteed to live long enough for async work
@AndroidEntryPoint
class ConnectivityReceiver : BroadcastReceiver() {
    @Inject lateinit var syncManager: SyncManager // ✅ but must use goAsync()
    override fun onReceive(ctx: Context, intent: Intent) {
        val pending = goAsync() // extend lifecycle for async work
        CoroutineScope(Dispatchers.IO).launch {
            try {
                syncManager.scheduleSync()
            } finally {
                pending.finish() // must call finish() or the receiver is killed
            }
        }
    }
}

// ContentProvider — CANNOT use @AndroidEntryPoint!
// ContentProvider.onCreate() runs BEFORE Application.onCreate()
// → Hilt graph doesn't exist yet → injection fails
class UserContentProvider : ContentProvider() {
    // ❌ @AndroidEntryPoint here = NPE on injection
    // ✅ Use @EntryPoint instead (lazy access after the graph is ready)
    @EntryPoint
    @InstallIn(SingletonComponent::class)
    interface ProviderEntryPoint {
        fun userRepository(): UserRepository
    }

    private val repo by lazy {
        EntryPointAccessors.fromApplication(
            context!!.applicationContext,
            ProviderEntryPoint::class.java
        ).userRepository()
    }
}
- Service: @AndroidEntryPoint works — injection in onCreate() like Activity/Fragment
- BroadcastReceiver: @AndroidEntryPoint works but use goAsync() for any async work
- ContentProvider: CANNOT use @AndroidEntryPoint — initialises before Hilt graph
- ContentProvider fix: @EntryPoint with lazy access — graph is ready by first query() call
- BroadcastReceiver is stateless: new instance per broadcast — only inject, don't store state
ContentProvider is the most important edge case: "ContentProvider.onCreate() runs before Application.onCreate() — Hilt doesn't exist yet. @AndroidEntryPoint will crash. Use @EntryPoint with lazy access instead — by the time first query() is called, the Application has initialised Hilt."
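Why the `by lazy` trick is safe can be shown in plain Kotlin. In this toy sketch (hypothetical names; a nullable global stands in for the Hilt graph, a plain class for the ContentProvider), the lookup runs at first access rather than at construction time, so it succeeds even though the provider object is created before the graph exists:

```kotlin
class Graph {
    fun userRepository() = "UserRepository"
}

object App {
    var graph: Graph? = null // null until "Application.onCreate" runs
}

// Constructed BEFORE the graph exists — like a ContentProvider.
class Provider {
    // Deferred lookup: only evaluated on first access, after init is done.
    val repo: String by lazy { App.graph!!.userRepository() }
}

fun main() {
    val provider = Provider()  // graph doesn't exist yet — no crash, nothing resolved
    App.graph = Graph()        // "Application.onCreate" builds the graph
    println(provider.repo)     // first access happens after init — safe
}
```

Accessing `repo` inside the `Provider` constructor would crash, which is exactly the `@AndroidEntryPoint`-in-a-ContentProvider failure mode.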
KMP and Hilt coexist when you keep Hilt strictly in the Android platform layer. Shared business logic uses constructor injection with standard interfaces — no Hilt annotations in commonMain.
// KMP project structure
// :shared
// ├── commonMain   — shared business logic
// ├── androidMain  — Android-specific implementations
// └── iosMain      — iOS-specific implementations
// :androidApp — Android entry point (uses Hilt)

// commonMain — NO Hilt, NO android.* imports
// Uses constructor injection only (javax.inject or custom)
interface UserRepository { // interface in commonMain
    suspend fun getUser(id: String): User
}

class GetUserUseCase( // no @Inject — just a constructor
    private val repo: UserRepository
) {
    suspend operator fun invoke(id: String) = repo.getUser(id)
}

// androidMain — Android implementation
class AndroidUserRepository @Inject constructor( // @Inject OK in androidMain
    private val api: UserApi,
    private val dao: UserDao
) : UserRepository {
    override suspend fun getUser(id: String) = api.getUser(id)
}

// :androidApp — Hilt wiring
@Module
@InstallIn(SingletonComponent::class)
abstract class SharedModule {
    @Binds @Singleton
    abstract fun bindRepo(impl: AndroidUserRepository): UserRepository

    companion object {
        @Provides @Singleton
        fun provideUseCase(repo: UserRepository): GetUserUseCase = GetUserUseCase(repo)
    }
}

// iOS: Koin (supports KMP) or manual DI
// val repo = IosUserRepository()
// val getUser = GetUserUseCase(repo)
- commonMain: zero Hilt annotations — pure constructor injection for maximum portability
- androidMain: @Inject on Android implementations — Hilt wires these in :androidApp
- :androidApp: Hilt @Module binds Android impls to shared interfaces
- @Provides for shared classes: GetUserUseCase has no @Inject — provide it manually
- iOS: Koin (KMP-ready) or manual DI — each platform handles its own wiring
The architecture rule: "Hilt is an Android-only framework — it lives in :androidApp and androidMain only. commonMain knows nothing about Hilt, just constructor injection. The shared business logic is portable; Android wiring is platform-specific." This is the separation that makes KMP work.
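The commonMain pattern runs on any Kotlin target because it is just interfaces plus plain constructors. A minimal runnable sketch (hypothetical names), wired by hand exactly the way an iOS entry point or a unit test would do it:

```kotlin
// commonMain-style code: an interface and a use case with no annotations.
interface Clock {
    fun now(): Long
}

class GreetingUseCase(private val clock: Clock) {
    fun greet(name: String) = "Hello $name @ ${clock.now()}"
}

// Platform (or test) implementation — what androidMain/iosMain would provide.
class FixedClock(private val t: Long) : Clock {
    override fun now() = t
}

fun main() {
    // Manual wiring — the job Hilt does on Android, done by hand elsewhere.
    val useCase = GreetingUseCase(FixedClock(42))
    println(useCase.greet("KMP")) // Hello KMP @ 42
}
```

On Android, a Hilt `@Module` performs the `GreetingUseCase(FixedClock(...))` step; on iOS or in tests, three lines of manual wiring do the same thing, which is exactly why keeping commonMain annotation-free costs nothing.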
2024-25 brought significant Hilt improvements — KSP stability, assisted injection improvements, Compose integration, and new testing APIs that simplify test setup considerably.
// 1. KSP full support (stable since 2024)
// All Hilt annotations now work with KSP — KAPT deprecated
// build.gradle.kts:
plugins { id("com.google.devtools.ksp") }
dependencies { ksp("com.google.dagger:hilt-compiler:2.51") } // not kapt!

// 2. hiltViewModel with creationCallback (Hilt 2.49+)
// Replaces the manual factory plumbing for assisted injection
@Composable
fun ProductScreen(productId: String) {
    val vm: ProductViewModel = hiltViewModel(
        creationCallback = { factory: ProductViewModel.Factory ->
            factory.create(productId) // cleaner than a NavBackStackEntry lookup
        }
    )
}

// 3. @HiltViewModel with @AssistedInject — fully supported
@HiltViewModel(assistedFactory = ProductViewModel.Factory::class)
class ProductViewModel @AssistedInject constructor(
    @Assisted val productId: String,
    private val repo: ProductRepository
) : ViewModel() {
    @AssistedFactory
    interface Factory {
        fun create(productId: String): ProductViewModel
    }
}

// 4. Improved test APIs — testImplementation("com.google.dagger:hilt-android-testing:2.51")
// @BindValue — inject test doubles inline without @TestInstallIn
@HiltAndroidTest
class UserTest {
    @get:Rule val hiltRule = HiltAndroidRule(this)

    @BindValue @JvmField
    val fakeRepo: UserRepository = FakeUserRepository()
    // No separate @TestInstallIn module needed!
}
- KSP stable: KAPT deprecated — migrate all hilt-compiler to ksp() for 30-50% build improvement
- hiltViewModel(creationCallback): cleaner assisted injection in Compose — no NavBackStackEntry
- @HiltViewModel(assistedFactory): official Hilt support for assisted injection in ViewModels
- @BindValue: inject test doubles inline in @HiltAndroidTest — replaces many @TestInstallIn usages
- Hilt 2.51: current stable as of 2024 — all features above available
"The biggest practical changes: KSP is now fully stable (migrate from KAPT immediately — it's faster), and @BindValue simplifies test setup dramatically. Instead of a separate @TestInstallIn module for each test, you can declare fake bindings inline in the test class." Applied knowledge of recent updates signals you're current.
A DI health check is a systematic audit of common anti-patterns. Each red flag maps to a specific risk — memory leaks, testability gaps, or architectural violations.
// RED FLAGS TO SEARCH FOR:

// 1. GlobalScope in @Singleton classes
// grep -r "GlobalScope" app/src/main
@Singleton
class X @Inject constructor() {
    val scope = GlobalScope // ❌ leaks beyond the Application lifecycle
}

// 2. Activity/Context stored in @Singleton
// grep -r "private val context: Activity" — when injected into a @Singleton
@Singleton
class Y(private val ctx: Activity) // ❌ memory leak

// 3. Missing @Singleton on expensive objects
// Look for: @Provides fun provideRetrofit() — without @Singleton
@Provides fun provideRetrofit(): Retrofit // ❌ new instance per injection

// 4. @Inject field in non-framework classes
// grep -r "@Inject" --include="*Repository.kt"
class Repo { @Inject lateinit var api: Api } // ❌ field injection in a non-Android class

// 5. Scope mismatch between module and component
@Provides @ActivityScoped // ❌ in a module with @InstallIn(SingletonComponent::class)
fun provideX(): X         //    build error — but catch the pattern in review

// 6. Circular references via non-Lazy injection
// detekt rule: forbid circular patterns

// TOOLS:
// gradle :app:kspDebugKotlin — runs Hilt validation, shows binding errors
// detekt — custom rules for DI anti-patterns
// LeakCanary — detects Context/Activity memory leaks at runtime
// Android Memory Profiler — heap dumps for unbounded @Singleton caches
// grep / IDE Find Usages — search for known anti-patterns manually
- GlobalScope in @Singleton: coroutines that outlive the app — LeakCanary + grep
- Activity context in @Singleton: memory leak after rotation — LeakCanary
- Missing @Singleton on Retrofit/Room: multiple instances — code review + profiler
- Field injection in non-Android classes: hidden Hilt coupling — grep for @Inject on non-Activity/Fragment
- Scope mismatches: Hilt catches at build time, but review catches the pattern before it compiles
Start with automated tooling: "I run LeakCanary in debug builds permanently — it catches Context leaks immediately. Then I do a targeted grep for GlobalScope, Activity context in @Singleton, and missing @Singleton on Retrofit/OkHttp. Finally a detekt rule for field injection outside Android components."
A complete DI architecture walkthrough demonstrates the full application of every concept — from build setup (KSP) to scope decisions to testing strategy. This is the senior system-design answer.
// Complete DI architecture for a 10-module app

// STEP 1: Build setup — every module
// Convention plugin: myapp.android.feature
plugins { id("com.google.devtools.ksp") }       // KSP, not KAPT
plugins { id("com.google.dagger.hilt.android") }
dependencies {
    implementation("com.google.dagger:hilt-android:2.51")
    ksp("com.google.dagger:hilt-compiler:2.51")
}

// STEP 2: Module structure for @Modules
// :core:network  → NetworkModule  (@Singleton: OkHttp, Retrofit, APIs)
// :core:database → DatabaseModule (@Singleton: Room, DAOs)
// :core:session  → SessionModule  (@Singleton: SessionManager interface)
// :feature:*     → each has its own module binding its repositories

// STEP 3: Scope decisions
// @Singleton: OkHttp, Retrofit, Room, shared repositories
// @ActivityRetainedScoped: per-Activity caches, user session objects
// @ViewModelScoped: stateful per-VM repositories
// Unscoped (default): stateless use cases, formatters, validators

// STEP 4: Qualifier strategy
@Qualifier @Retention(AnnotationRetention.BINARY) annotation class IoDispatcher
@Qualifier @Retention(AnnotationRetention.BINARY) annotation class MainDispatcher
@Qualifier @Retention(AnnotationRetention.BINARY) annotation class MainApi // if multiple APIs

// STEP 5: Testing infrastructure
// src/androidTest/ global replacements:
//   @TestInstallIn: FakeNetworkModule, TestDatabaseModule, TestDispatcherModule
// src/test/kotlin: FakeUserRepository, FakeOrderRepository in :core:testing

// STEP 6: Source set modules for build variants
// src/debug/  : DebugModule (LogAnalytics, no-op crash reporter)
// src/release/: ProductionModule (Firebase analytics, Crashlytics)

// STEP 7: Hilt version catalog entries
// [versions]  hilt = "2.51"
// [libraries] hilt-android = { module = "com.google.dagger:hilt-android", version.ref = "hilt" }
// [plugins]   hilt = { id = "com.google.dagger.hilt.android", version.ref = "hilt" }

// Decision: KAPT or KSP?
// → KSP always. KAPT is deprecated for Hilt, KSP is roughly 2x faster, fully supported in 2024
- Convention plugin: hilt setup in one place — consistent across all 10 feature modules
- Module boundaries: each :core module owns its @Module, :app owns nothing — discovered automatically
- Scope strategy: think about lifetime first, then pick scope — default unscoped unless sharing is needed
- Global test modules: FakeNetworkModule in androidTest applies to all @HiltAndroidTest classes
- Source sets: debug/release variants handled cleanly via separate module class files
Walk through decisions in this order: "1) KSP not KAPT. 2) Convention plugin for consistent setup. 3) Module per :core module, auto-discovered by Hilt. 4) Scope decisions driven by lifetime, not by convenience. 5) Global fake modules in androidTest. 6) Source sets for variants." This structured walkthrough shows system-design maturity.
Account-switching invalidates stateful dependencies. The correct pattern is to scope dependencies to the account session and rebuild them when the account changes — not to mutate existing injected instances.
// Problem: a PagingSource tied to an account ID must change when the user switches account

// ❌ WRONG: mutating an injected @Singleton with new account data
@Singleton
class FeedRepository @Inject constructor() {
    var accountId: String = "" // ❌ mutable state in a singleton
    fun getPager() = Pager(PagingConfig(20)) { FeedPagingSource(accountId) }
}

// ✅ CORRECT: reactive account-aware pager
@HiltViewModel
class FeedViewModel @Inject constructor(
    private val session: SessionManager,
    private val api: FeedApi
) : ViewModel() {
    val feedPager: Flow<PagingData<FeedItem>> = session.currentAccount
        .flatMapLatest { account ->
            Pager(PagingConfig(pageSize = 20)) {
                FeedPagingSource(api, account.id) // new source per account
            }.flow
        }
        .cachedIn(viewModelScope) // cache across recompositions
}

// PagingSource — account-scoped, created fresh per account switch
class FeedPagingSource(
    private val api: FeedApi,
    private val accountId: String
) : PagingSource<Int, FeedItem>() {

    override suspend fun load(params: LoadParams<Int>): LoadResult<Int, FeedItem> {
        return try {
            val page = params.key ?: 1
            val items = api.getFeed(accountId, page)
            LoadResult.Page(
                data = items,
                prevKey = if (page == 1) null else page - 1,
                nextKey = if (items.isEmpty()) null else page + 1
            )
        } catch (e: Exception) {
            LoadResult.Error(e)
        }
    }

    // Standard implementation: derive the page key from the anchor position —
    // anchorPosition itself is an item index, not a page key
    override fun getRefreshKey(state: PagingState<Int, FeedItem>): Int? =
        state.anchorPosition?.let { anchor ->
            state.closestPageToPosition(anchor)?.prevKey?.plus(1)
                ?: state.closestPageToPosition(anchor)?.nextKey?.minus(1)
        }
}
- flatMapLatest: cancels current pager and creates a new one whenever account changes
- Never mutate injected singletons: account-specific state belongs in the reactive stream
- cachedIn(viewModelScope): prevents page re-fetch on every recomposition
- PagingSource created inline: Hilt injects api, accountId comes from session stream
- Session as StateFlow: reactive source of truth for current account — emit on switch
The key insight: "PagingSource is not injectable — it's a factory product with runtime parameters. flatMapLatest on the account stream creates a fresh Pager whenever the account changes. Hilt provides FeedApi, the runtime accountId comes from SessionManager. Clean separation."
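The SessionManager assumed above can be sketched in a few lines. This is an illustrative stand-in, not a library class — the names (SessionManager, Account, switchAccount) are hypothetical; the point is that the current account is a StateFlow, so one emission restarts every flatMapLatest downstream.

```kotlin
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import javax.inject.Inject
import javax.inject.Singleton

data class Account(val id: String, val name: String)

@Singleton
class SessionManager @Inject constructor() {
    private val _currentAccount = MutableStateFlow(Account("acc-1", "Default"))
    val currentAccount: StateFlow<Account> = _currentAccount

    // Called by the account-switcher UI — flatMapLatest collectors downstream
    // cancel their current Pager and build a fresh one
    fun switchAccount(account: Account) {
        _currentAccount.value = account
    }
}
```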
Understanding the generated code reveals why Hilt is fast and type-safe. Dagger generates a factory class per injectable type, provider wrappers, and a component that wires everything together — with zero reflection at runtime.
// What you write:
@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {
    @Provides @Singleton
    fun provideRetrofit(): Retrofit = Retrofit.Builder().build()
}

class UserRepository @Inject constructor(private val retrofit: Retrofit)

// What Dagger generates (simplified, shown as Kotlin-like pseudocode):

// 1. A factory for UserRepository
class UserRepository_Factory(
    private val retrofitProvider: Provider<Retrofit>
) : Factory<UserRepository> {
    override fun get(): UserRepository = UserRepository(retrofitProvider.get())
}

// 2. @Singleton wraps the factory in DoubleCheck (thread-safe lazy init)
private val retrofitProvider = DoubleCheck.provider(
    NetworkModule_ProvideRetrofitFactory.create()
)

// 3. The generated component
class DaggerSingletonComponent : SingletonComponent {
    // All providers stored as fields
    private val retrofitProvider = DoubleCheck.provider(/* ... */)
    private val userRepositoryProvider = UserRepository_Factory(retrofitProvider)

    fun getUserRepository(): UserRepository = userRepositoryProvider.get()
}

// DoubleCheck: implements standard double-checked locking — a thread-safe lazy singleton
// Zero reflection — all wiring is plain method calls
// Provider.get() is called at injection time, not at graph construction time
- Factory per class: Dagger generates a _Factory class for every @Inject constructor
- DoubleCheck: wraps @Singleton providers — thread-safe lazy initialisation
- DaggerSingletonComponent: the generated component holds all providers as fields
- Zero reflection: all wiring is plain Java/Kotlin method calls — fast, AOT-optimisable
- Provider.get(): called at injection time — everything is lazy, even @Singleton instances are created on first request (via DoubleCheck)
"Dagger generates a factory class for every injectable type — no reflection at runtime. The component is a plain Kotlin class with provider fields. This is why Dagger/Hilt performs better than Koin at runtime: every dependency lookup is a direct method call, not a registry search."
'Cannot be provided without an @Provides-annotated method' means Hilt can't find a way to create the type you're requesting. The three root causes: missing @Inject constructor, missing @Provides method in a module, or the type is provided in a different component scope than where you're injecting it.
// Cause 1: missing @Inject on the class constructor
class UserRepository /* missing @Inject */ constructor(
    private val dao: UserDao
)
// Fix:
class UserRepository @Inject constructor(private val dao: UserDao)

// Cause 2: interface — Hilt does not know which implementation to use
@Module
@InstallIn(SingletonComponent::class)
abstract class RepositoryModule {
    @Binds
    abstract fun bindUserRepo(impl: UserRepositoryImpl): UserRepository
}

// Cause 3: wrong component scope — ViewModelComponent cannot access ActivityComponent bindings
// Fix: move the binding to SingletonComponent or the correct scope
- Missing @Inject: Hilt can't see a class that doesn't opt in -- every class in the graph needs @Inject constructor or a @Provides method
- Interface binding: Hilt can't pick an implementation automatically -- use @Binds in an abstract @Module to map interface → implementation
- Wrong component: @ActivityScoped binding requested from @ViewModelComponent -- scopes don't overlap, move to SingletonComponent
- The error message always names the missing type -- search your codebase for that type and check for @Inject or @Provides
- Hilt component hierarchy: SingletonComponent → ActivityRetainedComponent, which branches into ViewModelComponent and ActivityComponent (→ FragmentComponent) — ViewModelComponent and ActivityComponent are siblings, neither can see the other's bindings
Read the error bottom-up: "Component → class that needs it → dependency chain → what's missing at the top." Then check: (1) @Inject constructor, (2) @Provides method, (3) @Binds for interfaces, (4) @InstallIn in correct component, (5) correct classpath in multi-module. Five checks, in that order, solves 95% of MissingBinding errors.
Provider, Lazy, and direct injection differ in when the dependency is created and whether it can be created multiple times. Choosing wrong leads to unexpected object creation or memory issues.
// Direct injection — instance created at injection time
class UserViewModel @Inject constructor(
    private val repo: UserRepository // created immediately when the ViewModel is created
)
// Use when: you always need this dependency and creation is cheap

// dagger.Lazy<T> — created on first .get(), then cached
class ProfileViewModel @Inject constructor(
    private val heavyService: dagger.Lazy<HeavyImageProcessor>
) : ViewModel() {
    fun processImage(uri: Uri) = heavyService.get().process(uri)
    // Created on the first processImage() call; the same instance on later calls
}
// Use when: expensive to create, not always needed, circular dep resolution

// javax.inject.Provider<T> — new instance on every .get() call (if unscoped)
class RequestManager @Inject constructor(
    private val requestFactory: Provider<NetworkRequest>
) {
    fun send(data: ByteArray) {
        val request = requestFactory.get() // NEW instance every call
        request.execute(data)
    }
}
// Use when: unscoped objects that must be fresh per use
// e.g. HttpRequest, Connection, Ticket — stateful, single-use objects

// Summary:
//                  Created when?        Cached?   Use case
// Direct T         Injection time       Yes       Always needed
// dagger.Lazy<T>   First .get() call    Yes       Sometimes needed / circular
// Provider<T>      Every .get() call    No        Fresh instance per use

// @Singleton + Lazy     = still one instance (DoubleCheck is inside Lazy)
// @Singleton + Provider = still one instance (Provider.get() returns the same one)
// Unscoped  + Provider  = truly new instance every .get()
- Direct: created at injection time — use for always-needed, cheap-to-create deps
- Lazy: deferred creation, cached after first call — expensive or optional deps
- Provider: fresh instance on every get() — stateful single-use objects like requests
- Scoping interacts: @Singleton with Provider still returns the same (scoped) instance
- Circular dep: dagger.Lazy breaks cycles; Provider doesn't help with cycles
The Provider pitfall: "unscoped + Provider gives you a new instance every get(). @Singleton + Provider still gives you the same instance — the scope wins. This surprises many developers. Provider only gives truly new instances for unscoped bindings."
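Breaking a cycle with dagger.Lazy, mentioned above, can be sketched as follows. The classes (AuthManager, ApiClient) are hypothetical: a direct A ↔ B constructor cycle is a Dagger compile error, but deferring one side behind Lazy lets the graph be constructed.

```kotlin
import javax.inject.Inject
import javax.inject.Singleton

@Singleton
class AuthManager @Inject constructor(
    // dagger.Lazy defers ApiClient creation until .get() is first called,
    // so Dagger can build AuthManager without resolving the cycle eagerly
    private val apiClient: dagger.Lazy<ApiClient>
) {
    fun refreshToken(): String = apiClient.get().fetchToken()
}

@Singleton
class ApiClient @Inject constructor(
    private val authManager: AuthManager // direct — no cycle at construction time
) {
    fun fetchToken(): String = "token"
}
```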
Conditional features use product flavors, build types, or optional module inclusion — not runtime conditionals inside @Provides. The condition is resolved at build time, keeping the DI graph static and type-safe.
// Scenario: AR feature — only in the "pro" flavor, not in "free"

// Interface in main source — always present
interface ArFeature {
    fun isAvailable(): Boolean
    suspend fun launchAr(sceneUri: String)
}

// src/free/java/ArModule.kt — no-op implementation
class UnavailableArFeature @Inject constructor() : ArFeature {
    override fun isAvailable() = false
    override suspend fun launchAr(sceneUri: String) { } // no-op
}

@Module
@InstallIn(SingletonComponent::class)
abstract class ArModule {
    @Binds abstract fun bindAr(impl: UnavailableArFeature): ArFeature
}

// src/pro/java/ArModule.kt — real implementation
class RealArFeature @Inject constructor(
    private val arCore: ArCore
) : ArFeature {
    override fun isAvailable() =
        ArCoreApk.getInstance().checkAvailability(/* ctx */) ==
            Availability.SUPPORTED_INSTALLED
    override suspend fun launchAr(sceneUri: String) { arCore.launch(sceneUri) }
}

@Module
@InstallIn(SingletonComponent::class)
abstract class ArModule {
    @Binds @Singleton abstract fun bindAr(impl: RealArFeature): ArFeature
}

// Consumer — zero knowledge of the flavor
@HiltViewModel
class ProductViewModel @Inject constructor(
    private val ar: ArFeature
) : ViewModel() {
    fun onArButtonClick(uri: String) {
        if (ar.isAvailable()) viewModelScope.launch { ar.launchAr(uri) }
    }
}
- Same module class name, different source sets: Gradle picks the right one per flavor
- No-op for missing features: UnavailableArFeature returns false / does nothing — no null checks
- Consumer agnostic: ProductViewModel calls ar.isAvailable() — doesn't know flavor
- No runtime BuildConfig checks: condition resolved at build time via source sets
- Dynamic Feature alternative: ArFeature in a separate DFM — downloaded on demand
"Build-time condition beats runtime condition every time for DI. Same interface, different source-set implementations. Free flavor: isAvailable() returns false — button hidden. Pro flavor: real implementation. The ViewModel never knows which flavor it's in."
@BindValue lets you inject test doubles directly as fields in your test class — no separate module class needed. It's lighter-weight than @TestInstallIn for per-test customisation.
// Global approach — @TestInstallIn (best for stable, shared replacements)
@TestInstallIn(
    components = [SingletonComponent::class],
    replaces = [RepositoryModule::class]
)
@Module
abstract class FakeRepositoryModule {
    @Binds abstract fun bind(fake: FakeUserRepository): UserRepository
}
// Applies to ALL tests — can't vary per test class

// Per-class approach — @BindValue (inline, varies per test class)
@HiltAndroidTest
@UninstallModules(RepositoryModule::class)
class UserScreenTest {

    @get:Rule val hiltRule = HiltAndroidRule(this)
    @get:Rule val composeRule = createAndroidComposeRule<HiltTestActivity>()

    // @BindValue declares a test binding inline — no separate module!
    @BindValue val userRepo: UserRepository = FakeUserRepository()

    @Before fun setUp() { hiltRule.inject() }

    @Test fun showsUserName() {
        (userRepo as FakeUserRepository).setUser(User("1", "Alice"))
        composeRule.onNodeWithText("Alice").assertIsDisplayed()
    }
}

// @BindValue with a @Named qualifier
@BindValue @Named("test_url") val baseUrl: String = "http://localhost:8080"

// Key differences between @BindValue and @TestInstallIn:
// @BindValue:     per test CLASS — can customise between test classes
// @TestInstallIn: applies to ALL tests in the module — global replacement
// @BindValue requires @UninstallModules to remove the production binding
// @BindValue: the field IS the binding — Hilt uses the field value directly
- @BindValue: inject a test field directly as a Hilt binding — no module class needed
- @UninstallModules: required when using @BindValue to remove the production binding first
- Per-class flexibility: different test classes can have different @BindValue values
- Works with qualifiers: @BindValue @Named("url") val baseUrl: String = "..."
- @TestInstallIn for global: stable fakes applied to ALL tests; @BindValue for per-class customisation
"@BindValue is Hilt's answer to verbose test setup. Instead of a separate @TestInstallIn module for every fake, declare it as a field in the test class. The trade-off: you need @UninstallModules to remove the production binding, which @TestInstallIn handles automatically."
Rate limiting is a cross-cutting concern that should be invisible to callers. The Decorator pattern combined with DI is the cleanest approach — wrap the real repository without changing its interface.
// Interface — unchanged
interface WeatherRepository {
    suspend fun getCurrentWeather(city: String): Weather
}

// Real implementation
class WeatherRepositoryImpl @Inject constructor(
    private val api: WeatherApi
) : WeatherRepository {
    override suspend fun getCurrentWeather(city: String) = api.getWeather(city)
}

// Decorator — adds rate limiting transparently
class RateLimitedWeatherRepository @Inject constructor(
    @RealImpl private val delegate: WeatherRepository, // qualifier needed here
    @IoDispatcher private val dispatcher: CoroutineDispatcher
) : WeatherRepository {

    private val mutex = Mutex()
    private var lastCallTime = 0L
    private val rateLimitMs = 5_000L

    override suspend fun getCurrentWeather(city: String): Weather =
        withContext(dispatcher) {
            mutex.withLock {
                val now = System.currentTimeMillis()
                val elapsed = now - lastCallTime
                if (elapsed < rateLimitMs) delay(rateLimitMs - elapsed)
                lastCallTime = System.currentTimeMillis()
            }
            delegate.getCurrentWeather(city)
        }
}

// Qualifier for the real (unwrapped) implementation
@Qualifier @Retention(AnnotationRetention.BINARY) annotation class RealImpl

@Module
@InstallIn(SingletonComponent::class)
abstract class WeatherModule {
    @Binds @Singleton @RealImpl
    abstract fun bindReal(impl: WeatherRepositoryImpl): WeatherRepository

    @Binds @Singleton // unqualified = what callers inject
    abstract fun bindRateLimited(impl: RateLimitedWeatherRepository): WeatherRepository
}
- Decorator + DI: rate limiting added transparently — callers inject WeatherRepository unchanged
- @RealImpl qualifier: distinguishes the decorated from the decorating binding
- Mutex: serialises the rate-gate check and delay — concurrent callers queue at the lock, so timestamps update thread-safely
- Two @Binds: @RealImpl for the inner; unqualified for the outer (what callers get)
- Test: inject @RealImpl directly to test rate-limiting behaviour independently
The qualifier pattern for decorators: the real impl gets @RealImpl, the decorator gets no qualifier. Callers inject unqualified WeatherRepository — they get the rate-limited version. The decorator itself injects @RealImpl — it gets the unwrapped version. Two @Binds, zero code changes to callers.
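A test for the decorator can be sketched with kotlinx-coroutines-test. The fake and the Weather stand-in are hypothetical (the document doesn't define Weather's shape); runTest's virtual clock makes the 5-second gate instant instead of real wall time.

```kotlin
import kotlinx.coroutines.test.UnconfinedTestDispatcher
import kotlinx.coroutines.test.runTest
import org.junit.Assert.assertEquals
import org.junit.Test

class RateLimitTest {

    // Hypothetical fake — assumes Weather can be constructed with no args
    class FakeWeatherRepository : WeatherRepository {
        var callCount = 0
        override suspend fun getCurrentWeather(city: String): Weather {
            callCount++
            return Weather()
        }
    }

    @Test
    fun secondCallPassesThroughRateGate() = runTest {
        val fake = FakeWeatherRepository()
        val repo = RateLimitedWeatherRepository(fake, UnconfinedTestDispatcher(testScheduler))

        repo.getCurrentWeather("Berlin")
        repo.getCurrentWeather("Berlin") // delay() is skipped on the virtual clock
        assertEquals(2, fake.callCount)  // both calls reach the delegate
    }
}
```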
Hilt manages object lifetime but not coroutine lifetime — a @Singleton class that creates its own CoroutineScope has a scope that's never cancelled. Proper coroutine scope management in DI requires deliberate design.
// ❌ Problem: a @Singleton with a CoroutineScope that's never cancelled
@Singleton
class NotificationService @Inject constructor() {
    private val scope = CoroutineScope(Dispatchers.IO + SupervisorJob())
    // scope.cancel() is never called — coroutines leak!
}

// ✅ Solution 1: inject a pre-built application scope
@Module
@InstallIn(SingletonComponent::class)
object CoroutineScopeModule {
    @Provides @Singleton
    fun provideApplicationScope(): CoroutineScope =
        CoroutineScope(SupervisorJob() + Dispatchers.Default)
    // Deliberately process-lifetime — never cancelled, but shared and explicit
}

@Singleton
class NotificationService @Inject constructor(
    private val appScope: CoroutineScope // injected — shared, managed
) {
    fun startPolling() { appScope.launch { /* polling work */ } }
}

// ✅ Solution 2: structured concurrency — let the caller own the scope
@Singleton
class DataSyncService @Inject constructor(
    private val repo: SyncRepository
) {
    // No scope stored — the caller provides the lifecycle
    suspend fun sync() { repo.sync() } // caller suspends in their own scope
}

// ✅ Solution 3: a qualifier for testability
@Qualifier @Retention(AnnotationRetention.BINARY) annotation class ApplicationScope

@Provides @Singleton @ApplicationScope
fun provideAppScope(): CoroutineScope =
    CoroutineScope(SupervisorJob() + Dispatchers.Default)
// In tests: @TestInstallIn replaces this with a TestScope
- @Singleton with its own scope: never cancelled — all launched coroutines are orphaned
- Inject CoroutineScope: one app-level scope injected everywhere — consistent lifetime
- @ApplicationScope qualifier: testable — replace with TestScope in tests
- Structured concurrency: suspend functions let callers provide scope — no stored scope
- SupervisorJob: app-level scope should have SupervisorJob — failures don't cancel sibling coroutines
"A @Singleton that creates CoroutineScope(IO) internally is a coroutine leak — the scope lives forever but is never cancelled. Inject a shared @ApplicationScope instead, or use suspend functions so callers provide the scope. Prefer suspend functions — that's structured concurrency by design."
Use cases should return results, not trigger UI. The ViewModel translates use-case outcomes to UI events. This maintains clean layer separation while still triggering one-time UI actions.
// ❌ WRONG: a UseCase directly triggering UI
class PlaceOrderUseCase @Inject constructor(
    private val uiEvents: UiEventBus // ❌ domain knows about UI!
) {
    suspend operator fun invoke() {
        uiEvents.emit(ShowDialog("Order placed!")) // domain → UI coupling
    }
}

// ✅ CORRECT: the UseCase returns a rich result; the ViewModel maps it to UI events
sealed class OrderResult {
    data class Success(val orderId: String, val requiresConfirmation: Boolean) : OrderResult()
    data class InsufficientStock(val items: List<String>) : OrderResult()
    data class PaymentDeclined(val reason: String) : OrderResult()
}

class PlaceOrderUseCase @Inject constructor(
    private val orderRepo: OrderRepository,
    private val paymentRepo: PaymentRepository
) {
    suspend operator fun invoke(cart: Cart, method: PaymentMethod): OrderResult {
        if (!orderRepo.checkStock(cart)) return OrderResult.InsufficientStock(cart.items())
        val payment = paymentRepo.charge(cart.total(), method)
            ?: return OrderResult.PaymentDeclined("Card declined")
        return OrderResult.Success(payment.orderId, payment.requiresConfirmation)
    }
}

// The ViewModel translates the domain result → one-time UI event
@HiltViewModel
class CheckoutViewModel @Inject constructor(
    private val placeOrder: PlaceOrderUseCase
) : ViewModel() {

    private val _events = Channel<CheckoutEvent>(Channel.BUFFERED)
    val events = _events.receiveAsFlow()

    fun checkout(cart: Cart, method: PaymentMethod) = viewModelScope.launch {
        when (val result = placeOrder(cart, method)) {
            is OrderResult.Success ->
                _events.send(CheckoutEvent.NavigateToSuccess(result.orderId))
            is OrderResult.InsufficientStock ->
                _events.send(CheckoutEvent.ShowStockError(result.items))
            is OrderResult.PaymentDeclined ->
                _events.send(CheckoutEvent.ShowPaymentError(result.reason))
        }
    }
}
- UseCase returns sealed result: domain stays pure — no UI knowledge
- ViewModel as translator: maps domain outcomes to Channel events for the UI
- Sealed OrderResult: exhaustive when forces ViewModel to handle all outcomes
- Channel.BUFFERED: one-time events delivered exactly once — no replay on rotation
- Domain testable independently: PlaceOrderUseCase tested with fakes, no UI mock needed
"UseCase returns a sealed class — every outcome is explicit. ViewModel exhaustively handles each case and maps to Channel events. Domain stays clean: PlaceOrderUseCase has zero imports from the UI layer. This is the correct direction of the dependency arrow."
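The consuming side of the event Channel can be sketched as a Compose screen. This is an illustrative sketch: CheckoutScreen and its callbacks are hypothetical names, assuming the CheckoutEvent variants from the code above; a LaunchedEffect collects each event exactly once.

```kotlin
@Composable
fun CheckoutScreen(
    vm: CheckoutViewModel = hiltViewModel(),
    onOrderPlaced: (String) -> Unit,     // navigation callback, hypothetical
    showSnackbar: (String) -> Unit       // snackbar host callback, hypothetical
) {
    LaunchedEffect(Unit) {
        // receiveAsFlow() delivers each buffered event to exactly one collector
        vm.events.collect { event ->
            when (event) {
                is CheckoutEvent.NavigateToSuccess -> onOrderPlaced(event.orderId)
                is CheckoutEvent.ShowStockError ->
                    showSnackbar("Out of stock: ${event.items.joinToString()}")
                is CheckoutEvent.ShowPaymentError -> showSnackbar(event.reason)
            }
        }
    }
    // ... checkout form UI
}
```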
Shared DI modules cause test contamination when not handled carefully. The :core:testing module pattern — providing fakes alongside the real module — is the clean solution used in Now in Android.
// Module structure
// :core:data    — production repositories
// :core:testing — fake implementations (shared test fixtures)

// :core:testing/FakeUserRepository.kt
class FakeUserRepository @Inject constructor() : UserRepository {
    private var users = mutableListOf<User>()
    fun setUsers(list: List<User>) { users = list.toMutableList() }

    override suspend fun getUser(id: String) =
        users.firstOrNull { it.id == id } ?: throw NoSuchElementException(id)

    override fun observeUsers(): Flow<List<User>> = flowOf(users)
}

// :core:testing/TestDataModule.kt — Hilt module for tests
@TestInstallIn(
    components = [SingletonComponent::class],
    replaces = [DataModule::class]
)
@Module
abstract class TestDataModule {
    @Binds @Singleton abstract fun bindUserRepo(fake: FakeUserRepository): UserRepository
    @Binds @Singleton abstract fun bindOrderRepo(fake: FakeOrderRepository): OrderRepository
}

// Any test module adds :core:testing as a dependency:
// :feature:home/build.gradle.kts
//   androidTestImplementation(project(":core:testing"))
// TestDataModule is picked up automatically by Hilt via @TestInstallIn

// Feature tests — no module setup needed, just inject and use
@HiltAndroidTest
class HomeScreenTest {
    @get:Rule val hiltRule = HiltAndroidRule(this)

    @Inject lateinit var fakeUserRepo: FakeUserRepository // inject the fake directly

    @Before fun setUp() {
        hiltRule.inject()
        fakeUserRepo.setUsers(listOf(User("1", "Alice"))) // configure per test
    }
}
- :core:testing module: fakes live here, shared across ALL feature test modules
- TestDataModule with @TestInstallIn: automatically replaces production module for all tests
- androidTestImplementation(":core:testing"): adds fakes and TestDataModule to test classpath
- Inject the fake directly: @Inject lateinit var fakeRepo — configure per test via setUsers()
- Zero duplication: fakes written once, used in 10 feature test modules without copy-paste
"This is exactly the pattern in Google's Now in Android. :core:testing provides FakeUserRepository and TestDataModule. Any feature that adds androidTestImplementation(':core:testing') gets the fakes automatically — no per-feature fake setup. Write the fake once, use it everywhere."
hiltViewModel() uses Hilt's ViewModelFactory, while viewModel() uses the default factory. Understanding when to use each affects both functionality and testability of Compose screens.
// hiltViewModel() — uses Hilt's generated factory
// Requires: @HiltViewModel + @Inject constructor on the ViewModel
// Requires: @AndroidEntryPoint on the Activity hosting the NavHost
@Composable
fun HomeScreen(
    vm: HomeViewModel = hiltViewModel() // Hilt injects all constructor deps
) {
    // HomeViewModel is correctly scoped to this NavBackStackEntry
    // Destroyed when navigating away from HomeScreen
}

// viewModel() — uses the default or a custom factory, no Hilt
@Composable
fun SimpleScreen(
    vm: SimpleViewModel = viewModel() // only works for zero-arg or factory ViewModels
)
// SimpleViewModel cannot have injected dependencies without a custom factory
// Use for: test-only, no-dep ViewModels, or custom factory provision

// NavGraph-scoped ViewModel — shared between destinations
@Composable
fun CartScreen(navController: NavController) {
    val parentEntry = remember(navController) {
        navController.getBackStackEntry("checkout_graph")
    }
    val checkoutVm: CheckoutViewModel = hiltViewModel(parentEntry)
    // shared with PaymentScreen in the same graph
}

// Assisted injection via hiltViewModel's creationCallback
@Composable
fun ProductDetail(productId: String) {
    val vm: ProductDetailViewModel = hiltViewModel(
        creationCallback = { factory: ProductDetailViewModel.Factory ->
            factory.create(productId)
        }
    )
}

// Testing: provide a fake ViewModel in tests
@Composable
fun HomeScreen(vm: HomeViewModel = hiltViewModel()) {
    // In tests: @TestInstallIn replaces the ViewModel's repo with a fake
    // OR use viewModel { HomeViewModel(FakeRepo()) } for pure Compose tests
}
- hiltViewModel(): requires @HiltViewModel + @AndroidEntryPoint on Activity — full DI wiring
- viewModel(): no Hilt — only for zero-dep or custom-factory ViewModels
- NavGraph scope: hiltViewModel(parentEntry) shares VM across the graph lifetime
- creationCallback: hiltViewModel() supports assisted injection natively since Hilt 2.49
- Testing: @TestInstallIn replaces deps; or pass ViewModel in the function signature for pure Compose tests
"hiltViewModel() is the standard for production Compose screens — it hooks into Hilt's factory, respects Navigation back stack scoping, and supports assisted injection via creationCallback. viewModel() is for simple cases or tests where you want full control over the ViewModel instance."
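The ViewModel that the creationCallback above expects can be sketched as follows. ProductRepository is an assumed dependency for illustration; the annotations are Hilt's assisted-injection pattern for ViewModels (@HiltViewModel with an assistedFactory, @AssistedInject, @Assisted).

```kotlin
@HiltViewModel(assistedFactory = ProductDetailViewModel.Factory::class)
class ProductDetailViewModel @AssistedInject constructor(
    private val repo: ProductRepository,     // provided by Hilt (assumed dependency)
    @Assisted private val productId: String  // provided at the call site
) : ViewModel() {

    // The factory Hilt generates an implementation for —
    // creationCallback receives this and supplies the runtime productId
    @AssistedFactory
    interface Factory {
        fun create(productId: String): ProductDetailViewModel
    }
}
```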
Expiring cache is a data concern that belongs in the Repository layer. DI wires the time source as an injectable dependency — making the cache testable without real time passage.
// Injectable time source — a testable clock
interface Clock { fun nowMs(): Long }

class SystemClock @Inject constructor() : Clock {
    override fun nowMs() = System.currentTimeMillis()
}

class FakeClock(var time: Long = 0L) : Clock {
    override fun nowMs() = time
    fun advance(ms: Long) { time += ms }
}

// Caching repository
class CachedProductRepository @Inject constructor(
    private val api: ProductApi,
    private val clock: Clock,
    @IoDispatcher private val dispatcher: CoroutineDispatcher
) : ProductRepository {

    private val cacheTtl = 10 * 60 * 1000L // 10 minutes
    private var cachedProducts: List<Product> = emptyList()
    private var lastFetchTime = 0L

    override suspend fun getProducts(): List<Product> = withContext(dispatcher) {
        if (clock.nowMs() - lastFetchTime < cacheTtl && cachedProducts.isNotEmpty()) {
            cachedProducts // cache hit
        } else {
            api.getProducts().also {
                cachedProducts = it
                lastFetchTime = clock.nowMs()
            }
        }
    }
}

// Test — advance the fake clock to test expiry
@Test
fun cacheExpires_afterTenMinutes() = runTest {
    val fakeClock = FakeClock()
    val fakeApi = FakeProductApi()
    val repo = CachedProductRepository(fakeApi, fakeClock, UnconfinedTestDispatcher())

    repo.getProducts()                 // first call — fetches
    fakeClock.advance(11 * 60 * 1000L) // advance 11 minutes
    repo.getProducts()                 // cache expired — fetches again
    assertEquals(2, fakeApi.callCount)
}
- Injectable Clock: makes time-dependent logic testable — FakeClock advances deterministically
- @IoDispatcher: injectable dispatcher — TestDispatcher in tests makes timing predictable
- Cache TTL: 10-minute window checked against injected Clock — no real time.sleep in tests
- FakeClock.advance(): control time in tests without Thread.sleep() — instant, deterministic
- Repository concern: caching policy lives entirely in the data layer — ViewModel unchanged
"The key to testing time-based caches: inject a Clock interface. Production uses SystemClock. Tests use FakeClock where fakeClock.advance(11 * 60 * 1000) advances 11 minutes instantly. Without the injectable clock, testing cache expiry requires Thread.sleep(600_000) — 10 actual minutes."
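The FakeProductApi used in the test above can be sketched in a few lines. It is hypothetical — the Product constructor shape is assumed — but the call counter is what lets the test distinguish a cache hit from a real fetch.

```kotlin
class FakeProductApi : ProductApi {
    var callCount = 0

    // Counts invocations so the test can assert how many real fetches happened
    override suspend fun getProducts(): List<Product> {
        callCount++
        return listOf(Product(id = "1", name = "Widget")) // assumed constructor shape
    }
}
```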
Passing a ViewModel as a parameter through multiple composable layers (ViewModel drilling) tightly couples composables to a specific ViewModel — breaking reusability and testability. State hoisting with callbacks is the correct pattern.
// ❌ ANTI-PATTERN: ViewModel drilling
@Composable
fun ProfileScreen(vm: ProfileViewModel = hiltViewModel()) {
    ProfileContent(vm = vm) // passing the VM down
}

@Composable
fun ProfileContent(vm: ProfileViewModel) { // coupled to the ViewModel
    ProfileHeader(vm = vm) // drilling deeper
    ProfileStats(vm = vm)
}
// ProfileContent can't be previewed and can't be tested standalone —
// it's tied to ProfileViewModel forever

// ✅ CORRECT: state + callbacks, ViewModel at the top
@Composable
fun ProfileScreen(vm: ProfileViewModel = hiltViewModel()) {
    val state by vm.uiState.collectAsStateWithLifecycle()
    ProfileContent(
        state = state, // pass data, not the ViewModel
        onEditClick = vm::onEditClick,
        onFollowClick = vm::onFollowClick
    )
}

@Composable
fun ProfileContent( // stateless — no ViewModel reference
    state: ProfileState,
    onEditClick: () -> Unit,
    onFollowClick: () -> Unit
) {
    ProfileHeader(name = state.name, onEditClick = onEditClick)
    ProfileStats(followers = state.followers, onFollowClick = onFollowClick)
}
// ProfileContent is now fully previewable and standalone-testable

// @Preview — works because no ViewModel is needed
@Preview
@Composable
fun ProfileContentPreview() {
    ProfileContent(
        state = ProfileState(name = "Alice", followers = 1234),
        onEditClick = {},
        onFollowClick = {}
    )
}
- ViewModel drilling: passing VM through N composable layers — each becomes coupled and untestable
- State hoisting: extract data and callbacks, pass them instead of the ViewModel
- Stateless composables: ProfileContent with state + callbacks — reusable, previewable
- Single ViewModel entry point: only ProfileScreen needs hiltViewModel() — rest are pure
- Testability: stateless composables tested with fake data, no Hilt needed
"The rule: ViewModel is injected once at the screen level. Every child composable below it receives plain data types and lambdas — not the ViewModel. If a child composable has ViewModel in its parameter list, it's a sign of drilling. Hoist state up, pass data down."
DI errors from new developers cluster into three categories: wrong scope, missing bindings, and test setup. Document the component hierarchy with a diagram, write a team DI guide covering the three most common mistakes, and enforce patterns with a custom Lint rule that flags raw 'new' instantiations of injected classes.
// Documented component scope guide for new devs
// @Singleton              → lives for the app lifetime (database, network client)
// @ActivityRetainedScoped → survives rotation (ViewModel dependencies)
// @ViewModelScoped        → lives for the ViewModel lifetime
// @ActivityScoped         → lives for the Activity lifetime

// Failing fast: verify all DI bindings in a single integration test
@HiltAndroidTest
class HiltBindingsTest {
    @get:Rule val hiltRule = HiltAndroidRule(this)

    @Inject lateinit var repo: UserRepository // verifies the binding exists
    @Inject lateinit var api: ApiClient

    @Test
    fun allBindingsResolvable() {
        hiltRule.inject() // fails here if any binding is missing
    }
}
- Scope confusion is the #1 new-dev DI mistake: document the hierarchy (Singleton → ActivityRetained → ViewModel → Activity → Fragment) with examples
- HiltBindingsTest: inject every root-level dependency in one test -- a missing binding fails the test, not a production crash
- Missing @AndroidEntryPoint on Fragment/Activity: the second most common mistake -- Hilt silently will not inject if the annotation is absent
- Custom Lint rule: flag 'UserRepository()' or 'new UserRepository()' in code -- injected classes should never be instantiated manually
- Pair programming during onboarding: one DI session where a new dev traces a full injection chain from @HiltAndroidApp to the ViewModel
"The best guardrail is making the right pattern easier than the wrong one. Convention plugin: new module has correct setup automatically. Detekt: direct Dispatchers.IO flagged. CI kspDebugKotlin: graph errors caught before code review. The PR checklist is the last line of defence, not the first."
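The convention plugin mentioned above can be sketched roughly as follows. This is a minimal illustration, not a full build setup: the plugin class name and the dependency versions are placeholders, and the plugin IDs assume the standard KSP and Hilt Gradle plugins are on the build classpath.

```kotlin
import org.gradle.api.Plugin
import org.gradle.api.Project

// Hypothetical convention plugin: any module that applies it gets the
// same Hilt + KSP wiring automatically, so new modules can't get it wrong.
class HiltConventionPlugin : Plugin<Project> {
    override fun apply(target: Project) {
        with(target) {
            pluginManager.apply("com.google.devtools.ksp")
            pluginManager.apply("com.google.dagger.hilt.android")
            // Versions are illustrative; real builds would read them from a version catalog
            dependencies.add("implementation", "com.google.dagger:hilt-android:2.51")
            dependencies.add("ksp", "com.google.dagger:hilt-compiler:2.51")
        }
    }
}
```

Registered as `id("myapp.android.hilt")` (name assumed) in each module, this makes the right pattern the default rather than something reviewers must police.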
Security-sensitive DI requires careful scope management — encryption keys should never be stored in @Singleton fields that outlive their secure context. DI helps enforce security boundaries through scoping.
// ❌ DANGEROUS: encryption key in @Singleton field @Singleton class EncryptionService @Inject constructor() { private var key: ByteArray? = null // key cached in memory forever! fun setKey(k: ByteArray) { key = k } // attacker can read via heap dump } // ✅ CORRECT: AndroidKeyStore — key never leaves secure hardware class SecureEncryptionService @Inject constructor( @ApplicationContext private val ctx: Context ) { private val keyAlias = "com.app.encryption_key" // Key generated in KeyStore — never extracted to memory private val secretKey: SecretKey by lazy { val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) } if (!keyStore.containsAlias(keyAlias)) generateKey() keyStore.getKey(keyAlias, null) as SecretKey } fun encrypt(data: ByteArray): ByteArray { val cipher = Cipher.getInstance("AES/GCM/NoPadding") cipher.init(Cipher.ENCRYPT_MODE, secretKey) return cipher.iv + cipher.doFinal(data) // key never leaves keystore } } // Scope: @ActivityRetainedScoped for user-context operations // Key reference loaded per Activity session, not app-wide @Module @InstallIn(ActivityRetainedComponent::class) abstract class SecurityModule { @Binds @ActivityRetainedScoped abstract fun bindEncryption(impl: SecureEncryptionService): EncryptionService } // BiometricPrompt — gate key usage behind auth // Key with setUserAuthenticationRequired(true) — only usable after biometric
- AndroidKeyStore: keys never extracted to JVM memory — hardware-backed security
- No ByteArray key storage: raw key bytes in @Singleton = heap dump attack vector
- @ActivityRetainedScoped: narrower scope for security objects — cleared when Activity finishes
- lazy key access: key alias only, actual SecretKey fetched from KeyStore on demand
- BiometricPrompt integration: setUserAuthenticationRequired(true) gates key use behind auth
"Security and scope are directly linked. A @Singleton that holds a raw key in a field = that key is reachable via a heap dump for the entire app session. Using AndroidKeyStore keeps the key in secure hardware — we only hold an alias. The key material never exists in JVM memory."
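The `generateKey()` step referenced in the snippet above could look like the sketch below. It uses the standard `KeyGenParameterSpec` API; the function name and alias parameter are just how this illustration is organised, and a real app would also need a `BiometricPrompt` flow before the key becomes usable.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Generate an AES key inside AndroidKeyStore. The key material is created
// in secure hardware and can never be read back into JVM memory.
fun generateKey(alias: String): SecretKey {
    val keyGen = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore"
    )
    keyGen.init(
        KeyGenParameterSpec.Builder(
            alias,
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            // Key is unusable until the user passes biometric/device-credential auth
            .setUserAuthenticationRequired(true)
            .build()
    )
    return keyGen.generateKey()
}
```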
Real-time chat has multiple concurrent data sources — WebSocket events, local DB, and user actions. The DI design separates connection management, message persistence, and UI state into injectable, independently testable components.
// Component breakdown interface ChatWebSocket { val events: SharedFlow<ChatSocketEvent> suspend fun connect(roomId: String) suspend fun send(msg: OutgoingMessage) fun disconnect() } interface MessageRepository { fun observeMessages(roomId: String): Flow<List<Message>> suspend fun insertMessage(msg: Message) suspend fun markAsRead(messageId: String) } // Hilt module — all @Singleton: shared connection state @Module @InstallIn(SingletonComponent::class) abstract class ChatModule { @Binds @Singleton abstract fun bindWebSocket(impl: OkHttpChatWebSocket): ChatWebSocket @Binds @Singleton abstract fun bindMessages(impl: RoomMessageRepository): MessageRepository } // ViewModel — assembles all streams via assisted injection (roomId is runtime) @HiltViewModel(assistedFactory = ChatViewModel.Factory::class) class ChatViewModel @AssistedInject constructor( @Assisted val roomId: String, private val socket: ChatWebSocket, private val msgRepo: MessageRepository, private val saved: SavedStateHandle ) : ViewModel() { val messages = msgRepo.observeMessages(roomId) .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), emptyList()) val typingUsers: StateFlow<Set<String>> = socket.events .filterIsInstance<ChatSocketEvent.TypingStarted>() .scan(emptySet<String>()) { set, event -> set + event.userId } // accumulate a Set .stateIn(viewModelScope, SharingStarted.WhileSubscribed(), emptySet()) @AssistedFactory interface Factory { fun create(roomId: String): ChatViewModel } init { viewModelScope.launch { socket.connect(roomId) } } override fun onCleared() { super.onCleared() socket.disconnect() } }
- Interface per concern: ChatWebSocket, MessageRepository — independently injectable and testable
- @Singleton for shared connection: one WebSocket per app — multiple rooms reuse same connection
- @AssistedInject for roomId: runtime parameter injected at ViewModel creation time
- Reactive streams combined: messages from Room, typing from WebSocket — merged in ViewModel
- onCleared: disconnect WebSocket when ViewModel is destroyed — structured lifecycle
"ChatWebSocket is @Singleton — one connection for the whole app. Each chat room gets its own ChatViewModel instance, created via its assisted factory. @AssistedInject provides the roomId at creation time. Testing: FakeChatWebSocket emits events programmatically; FakeMessageRepository returns preset messages."
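The fakes mentioned in the answer could be sketched as below, assuming the `ChatWebSocket` and `MessageRepository` interfaces shown above (and the `ChatSocketEvent`, `OutgoingMessage`, `Message` types they reference). The `emit` test hook is an addition for tests, not part of the production interface.

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.MutableStateFlow

// Fake socket: tests push events in and inspect what was sent out.
class FakeChatWebSocket : ChatWebSocket {
    private val _events = MutableSharedFlow<ChatSocketEvent>(extraBufferCapacity = 64)
    override val events = _events
    val sent = mutableListOf<OutgoingMessage>()
    override suspend fun connect(roomId: String) { /* no-op in tests */ }
    override suspend fun send(msg: OutgoingMessage) { sent += msg }
    override fun disconnect() { /* no-op in tests */ }
    suspend fun emit(event: ChatSocketEvent) = _events.emit(event) // test hook
}

// Fake repository: returns whatever messages the test seeds.
class FakeMessageRepository : MessageRepository {
    private val messages = MutableStateFlow<List<Message>>(emptyList())
    override fun observeMessages(roomId: String): Flow<List<Message>> = messages
    override suspend fun insertMessage(msg: Message) { messages.value = messages.value + msg }
    override suspend fun markAsRead(messageId: String) { /* no-op in tests */ }
}
```

A test then constructs `ChatViewModel(roomId, fakeSocket, fakeRepo, SavedStateHandle())` directly, with no Hilt involved.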
Hilt generates component and factory code at compile time via KSP (the successor to KAPT). Changes to annotated classes trigger regeneration of the affected code. The interaction with incremental compilation is important: adding a new @Inject class regenerates only that class's factory, not the entire component graph.
// Hilt annotation processing generates these files: // Hilt_MainActivity.java → base class with injection boilerplate // DaggerAppComponent.java → component wiring all @Provides/@Binds together // UserRepository_Factory.java → factory for @Inject constructor classes // See generated files in Android Studio: // Build → Generated Sources → ksp/debug/kotlin/ // Incremental KSP: only files with changed annotations are reprocessed @Module @InstallIn(SingletonComponent::class) object NetworkModule { @Provides @Singleton fun provideRetrofit(): Retrofit = Retrofit.Builder().baseUrl("https://api.example.com/").build() } // change here → only NetworkModule regenerated, not the whole graph
- KSP generates: Hilt_X base classes (inject boilerplate), DaggerAppComponent (wire up the graph), Factory classes for @Inject constructors
- Incremental processing: change one @Module → only that module's generated code regenerates -- not the full component
- KSP is roughly 2x faster than KAPT for Hilt: no Java stub generation step -- it processes Kotlin symbols directly
- Build cache: generated files are cached -- clean build on CI with cache hits skips generation entirely
- View generated code: Build → Generated Sources in Android Studio -- useful when debugging 'cannot be provided' errors
"The practical take: don't put everything in @Singleton. @ViewModelScoped changes only rebuild ViewModelComponent — smaller graph, faster KSP run. A giant AppModule with 30 @Provides means any change there rebuilds the entire SingletonComponent. Small focused modules = incremental KSP works as designed."
Services and Activities/ViewModels live in different components — they can't directly inject each other. The clean solution is a @Singleton SharedFlow event bus or a @Singleton StateHolder that both sides inject.
// Pattern: @Singleton StateHolder — both Service and ViewModel inject it @Singleton class DownloadStateHolder @Inject constructor() { private val _progress = MutableStateFlow<Map<String, Int>>(emptyMap()) val progress = _progress.asStateFlow() fun updateProgress(fileId: String, percent: Int) { _progress.update { it + (fileId to percent) } } fun remove(fileId: String) { _progress.update { it - fileId } } } // Service — injects StateHolder, updates progress @AndroidEntryPoint class DownloadService : Service() { @Inject lateinit var stateHolder: DownloadStateHolder private fun startDownload(fileId: String, url: String) { ioScope.launch { api.downloadFile(url).collect { progress -> stateHolder.updateProgress(fileId, progress) } stateHolder.remove(fileId) } } } // ViewModel — injects same StateHolder, observes updates @HiltViewModel class FilesViewModel @Inject constructor( private val stateHolder: DownloadStateHolder, // SAME @Singleton instance private val fileRepo: FileRepository ) : ViewModel() { val downloads = stateHolder.progress .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), emptyMap()) }
- @Singleton StateHolder: both Service and ViewModel inject the same instance
- StateFlow: thread-safe reactive bridge — Service writes, ViewModel observes
- No direct Service↔Activity reference: StateHolder decouples them completely
- update(): atomic StateFlow update — concurrent writes from multiple downloads safe
- Testing: FakeDownloadStateHolder with manual progress updates — test ViewModel independently
"Service and ViewModel can't inject each other — different component lifecycles. Inject a @Singleton StateHolder into both. Service writes progress to StateFlow, ViewModel observes it. The StateHolder is the mediator. Testing: manually call stateHolder.updateProgress() in ViewModel tests — no real Service needed."
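Because `DownloadStateHolder` is plain Kotlin with no Android types, the "manually call updateProgress()" testing approach from the answer needs no fake at all: a JVM unit test can drive the real holder directly. A minimal sketch, assuming the `DownloadStateHolder` class above and `kotlin.test` on the test classpath:

```kotlin
import kotlin.test.Test
import kotlin.test.assertEquals

class DownloadStateHolderTest {
    @Test
    fun `updates are merged atomically and removable`() {
        val holder = DownloadStateHolder()
        holder.updateProgress("file-1", 40)
        holder.updateProgress("file-2", 10)
        // Both downloads tracked independently in the same map
        assertEquals(mapOf("file-1" to 40, "file-2" to 10), holder.progress.value)
        holder.remove("file-2")
        assertEquals(mapOf("file-1" to 40), holder.progress.value)
    }
}
```

The same instance can then be handed to `FilesViewModel` in a ViewModel test, with progress driven by these manual calls instead of a running Service.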
Kotlin's primary constructor syntax makes constructor injection especially clean — @Inject on a single line wires all deps. Kotlin's type system (non-nullable types, data classes) interacts with Hilt in important ways.
// Kotlin primary constructor — cleanest injection syntax class UserRepository @Inject constructor( private val api: UserApi, // non-nullable — Hilt must provide private val dao: UserDao, // non-nullable — Hilt must provide @IoDispatcher private val dispatcher: CoroutineDispatcher ) // val = immutable — cannot be reassigned after injection // Non-nullable = Hilt MUST provide binding or BUILD FAILS // Nullability and Hilt: // String parameter in constructor → Hilt FAILS build if no String binding // String? parameter → a binding is STILL required; it just may come from a @Provides that returns String? class ConfigManager @Inject constructor( private val apiKey: String // must have a @Provides String or @Named("api_key") String ) // Default parameter values — Hilt ignores them class AnalyticsService @Inject constructor( private val api: AnalyticsApi, private val batchSize: Int = 50 // ❌ Hilt ignores defaults — must provide Int binding! ) // Fix: wrapper class or @Provides method // Data class — a value object, not a service // data class User(val id: String) — construct it directly, or expose via @Provides; don't inject it // Kotlin companion object — static @Provides class SomeClass { companion object { @JvmStatic // needed for Dagger to see it as Java static fun create(): SomeClass = SomeClass() } } // Prefer object module over companion — cleaner and no @JvmStatic needed
- val in constructor: immutable after injection — prevents accidental reassignment
- Non-nullable types: Hilt build fails if no binding — forced explicitness
- Default parameters: Hilt ignores defaults — all constructor params must have bindings
- Data class: don't give it an @Inject constructor — it's a value object; construct it directly or via a @Provides factory method
- object module: preferred over companion object for @Provides — no @JvmStatic needed
"Kotlin's non-nullable types and Hilt's compile-time graph together create a very strong guarantee: if it compiles, all dependencies are provided and non-null. The most common Kotlin-specific mistake: default parameter values in @Inject constructors — Hilt ignores them and requires a binding."
Offline-first sync requires coordinating Room (source of truth), WorkManager (background sync), and Network (data source). DI wires these components cleanly while keeping each independently testable.
// Core sync interface — tells each repository to sync interface Syncable { suspend fun sync(): SyncResult } // Multibinding — each repository opts in to sync @Module @InstallIn(SingletonComponent::class) abstract class SyncModule { @Binds @IntoSet abstract fun bindUserSync(repo: UserRepository): Syncable @Binds @IntoSet abstract fun bindOrderSync(repo: OrderRepository): Syncable @Binds @IntoSet abstract fun bindProductSync(repo: ProductRepository): Syncable } // SyncManager — coordinates all Syncables @Singleton class SyncManager @Inject constructor( private val syncables: Set<@JvmSuppressWildcards Syncable>, @IoDispatcher private val dispatcher: CoroutineDispatcher ) { suspend fun syncAll(): List<SyncResult> = withContext(dispatcher) { syncables.map { syncable -> async { syncable.sync() } }.awaitAll() } } // Worker — uses Hilt's @HiltWorker @HiltWorker class SyncWorker @AssistedInject constructor( @Assisted context: Context, @Assisted params: WorkerParameters, private val syncManager: SyncManager ) : CoroutineWorker(context, params) { override suspend fun doWork(): Result = try { syncManager.syncAll() Result.success() } catch (e: Exception) { Result.retry() } } // Testing: FakeSyncable, FakeSyncManager — validate SyncWorker in isolation
- Syncable interface + @IntoSet multibinding: each repo self-registers — open/closed principle
- SyncManager: receives Set<Syncable> — coordinates all repos, unaware of their types
- async/awaitAll: parallel sync for all repos — not sequential, faster sync window
- @HiltWorker: SyncWorker receives SyncManager — no manual factory setup
- Adding new repo: add @IntoSet @Binds to that repo — SyncManager automatically includes it
"Multibinding makes sync extensible: adding a new repository to sync means adding one @IntoSet @Binds line. SyncManager doesn't change — it just receives a larger Set. This is the Open/Closed Principle in action. Tests can provide a Set with just one FakeSyncable to test specific sync logic."
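The single-FakeSyncable test from the answer could look like this. It assumes the `Syncable` and `SyncManager` definitions above, plus a `SyncResult` type that is taken here to be a sealed class with a `Success` object (the document never shows it), and `kotlinx-coroutines-test` on the test classpath.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertEquals

// Fake that records how often it was synced
class FakeSyncable(private val result: SyncResult) : Syncable {
    var syncCount = 0
    override suspend fun sync(): SyncResult { syncCount++; return result }
}

class SyncManagerTest {
    @Test
    fun `syncAll runs every registered Syncable`() = runTest {
        val fake = FakeSyncable(SyncResult.Success)
        // The multibound Set is just a constructor argument: no Hilt needed in tests
        val manager = SyncManager(setOf(fake), Dispatchers.Unconfined)
        val results = manager.syncAll()
        assertEquals(1, fake.syncCount)
        assertEquals(listOf(SyncResult.Success), results)
    }
}
```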
Build configuration injection is a DI design problem — hardcoded strings in @Provides methods are a maintenance issue. The clean pattern separates config declaration from config consumption using a typed AppConfig object.
// AppConfig — typed container for environment-specific values data class AppConfig( val apiBaseUrl: String, val apiVersion: String, val enableStrictMode: Boolean, val logLevel: LogLevel ) // Build config constants defined in build.gradle.kts // android { defaultConfig { // buildConfigField("String", "API_BASE_URL", "\"https://api.myapp.com\"") // }} // @Provides factory — reads from BuildConfig at module load time @Module @InstallIn(SingletonComponent::class) object AppConfigModule { @Provides @Singleton fun provideAppConfig(): AppConfig = AppConfig( apiBaseUrl = BuildConfig.API_BASE_URL, apiVersion = BuildConfig.API_VERSION, enableStrictMode = BuildConfig.DEBUG, logLevel = if (BuildConfig.DEBUG) LogLevel.VERBOSE else LogLevel.ERROR ) } // Network module consumes AppConfig — no BuildConfig references here @Module @InstallIn(SingletonComponent::class) object NetworkModule { @Provides @Singleton fun provideRetrofit(config: AppConfig): Retrofit = Retrofit.Builder() .baseUrl(config.apiBaseUrl) // from injected config .build() } // Secure API keys — read from local.properties, not hardcoded in checked-in build files // Never check API keys into source control! // build.gradle.kts reads from local.properties: // val props = Properties().apply { load(rootProject.file("local.properties").inputStream()) } // buildConfigField("String", "ANALYTICS_KEY", "\"${props["ANALYTICS_KEY"]}\"") // Test override — @TestInstallIn replaces config @TestInstallIn(components = [SingletonComponent::class], replaces = [AppConfigModule::class]) @Module object TestAppConfigModule { @Provides fun provideTestConfig(): AppConfig = AppConfig( apiBaseUrl = "http://localhost:8080", // local mock server apiVersion = "v1", enableStrictMode = true, logLevel = LogLevel.VERBOSE ) }
- Typed AppConfig: single injection point for all env-specific config — not scattered BuildConfig calls
- AppConfigModule: translates BuildConfig into typed AppConfig — build system to DI boundary
- NetworkModule: consumes AppConfig — zero direct BuildConfig references, fully testable
- local.properties: API keys stored locally, never in source control
- @TestInstallIn: test config with localhost URL — same NetworkModule, different config
"Centralise all BuildConfig.* reads into one AppConfigModule. Everything else injects AppConfig — it never touches BuildConfig directly. In tests, @TestInstallIn provides TestAppConfigModule with localhost URLs. No BuildConfig references in test code — clean separation of concerns."
Hybrid DI (Hilt + manual) is a valid intermediate state during migration. The key is ensuring both systems reference the same singleton instances — not creating duplicate objects.
// The risk: two separate singleton instances // AppContainer.userRepository = UserRepositoryImpl(oldRetrofit) // Hilt @Singleton UserRepository = UserRepositoryImpl(hiltRetrofit) // → TWO different instances, DIFFERENT state, DATA INCONSISTENCY // Safe coexistence strategy: // Option 1: AppContainer reads from Hilt (Hilt owns, Container delegates) @HiltAndroidApp class MyApp : Application() { @Inject lateinit var userRepo: UserRepository // Hilt provides it val container by lazy { AppContainer(userRepo) // Container wraps Hilt singleton } } // Old Activity — uses container (still works, gets Hilt's instance) class OldActivity : AppCompatActivity() { private val repo by lazy { (application as MyApp).container.userRepo // ← Hilt's instance! } } // New Activity — uses @AndroidEntryPoint (same Hilt singleton) @AndroidEntryPoint class NewActivity : AppCompatActivity() { @Inject lateinit var repo: UserRepository // same singleton! } // Both OldActivity and NewActivity share the SAME UserRepository instance // Consistent data, no duplication // Migration checklist per Activity: // ☐ Add @AndroidEntryPoint // ☐ Replace container.xyz with @Inject fields // ☐ Run tests — verify same behavior // ☐ Remove from container once all usages migrated
- Two singletons = data inconsistency: the migration must share instances, not duplicate them
- Hilt owns, Container delegates: inject Hilt's singletons into AppContainer during migration
- Application @Inject field: Hilt injects into Application class — bridge to legacy AppContainer
- Both Activities get same instance: OldActivity via container, NewActivity via @Inject — same object
- Incremental removal: remove from container after all usages are migrated — clean exit
"The hybrid state's biggest danger: parallel singletons. If AppContainer creates its own Retrofit and Hilt creates its own, you have two separate UserRepository instances — different data, different state. Solution: inject Hilt's singletons INTO the AppContainer. One instance, two access paths."
KMP shared modules using Koin and Android apps using Hilt can coexist — the Android app bridges Koin's shared components into the Hilt graph. This avoids the impossible choice of forcing one DI framework everywhere.
// :shared commonMain — uses Koin (KMP-compatible) val sharedModule = module { single<UserRepository> { UserRepositoryImpl(get()) } single { GetUserUseCase(get()) } } // :androidApp — starts Koin with shared module @HiltAndroidApp class MyApp : Application() { override fun onCreate() { super.onCreate() startKoin { androidContext(this@MyApp) modules(sharedModule) } } } // Bridge: expose Koin's instances to Hilt via @Provides // KoinComponent gives the module access to Koin's get() @Module @InstallIn(SingletonComponent::class) object KmpBridgeModule : KoinComponent { // Get from Koin's graph → expose to Hilt's graph @Provides @Singleton fun provideUserRepository(): UserRepository = get() // Koin's get() @Provides @Singleton fun provideGetUserUseCase(): GetUserUseCase = get() } // Android @HiltViewModel — injects from Hilt, which got it from Koin @HiltViewModel class ProfileViewModel @Inject constructor( private val getUser: GetUserUseCase // ultimately from Koin shared module ) : ViewModel() // iOS: uses Koin directly (no Hilt) // class ProfileViewModel: ObservableObject { // let getUser: GetUserUseCase = get() // }
- KMP shared: Koin used in commonMain — KMP-compatible, works on iOS/Desktop too
- Bridge module: @Provides wraps Koin's get() — exposes shared deps to Hilt's graph
- Single source of truth: Koin manages the shared instance, Hilt reuses it via bridge
- No double instantiation: bridge creates Provider that delegates to Koin — same object
- Android ViewModels: inject from Hilt transparently — no Koin knowledge needed in Android code
"You don't have to choose between Hilt and Koin for KMP. Koin owns shared business logic — it's KMP-compatible. Hilt owns Android entry points and framework integration. KmpBridgeModule bridges them: @Provides fun provideUserRepository() = get() pulls from Koin, exposes to Hilt."
@Inject field NPE in an @AndroidEntryPoint class points to one of a small set of causes — most commonly accessing the field before super.onCreate() or a Hilt superclass mismatch.
// Hilt injection happens in super.onCreate() for Activities // Any access before that point = NPE // ❌ CRASH CAUSE 1: Accessing field before super.onCreate() @AndroidEntryPoint class MainActivity : AppCompatActivity() { @Inject lateinit var analytics: Analytics init { analytics.track("init") // ❌ CRASH — Hilt hasn't injected yet! } override fun onCreate(saved: Bundle?) { super.onCreate(saved) // ← injection happens HERE analytics.track("created") // ✅ safe after super } } // ❌ CRASH CAUSE 2: Missing @HiltAndroidApp on Application class MyApp : Application() // forgot @HiltAndroidApp! // Error: "Hilt Activity must be attached to an @HiltAndroidApp Application" // But manifests sometimes have wrong application class → runtime crash // ❌ CRASH CAUSE 3: Base class not @AndroidEntryPoint open class BaseActivity : AppCompatActivity() // NOT annotated @AndroidEntryPoint class MainActivity : BaseActivity() // extending non-Hilt base // Fix: @AndroidEntryPoint on BaseActivity too // ❌ CRASH CAUSE 4: Field accessed in Fragment before onAttach() @AndroidEntryPoint class UserFragment : Fragment() { @Inject lateinit var repo: UserRepository // Injection happens in onAttach() for Fragments // Accessing repo before onAttach() → UninitializedPropertyAccessException } // Debug steps: // 1. Check stack trace — which line causes the NPE? // 2. Is it before super.onCreate() / onAttach()? // 3. Is @HiltAndroidApp on Application in correct Manifest entry? // 4. Does the base class also have @AndroidEntryPoint?
- Injection order: Activity injection in super.onCreate(), Fragment injection in onAttach()
- init block: runs before onCreate — @Inject fields not yet populated
- Missing @HiltAndroidApp: manifest declares wrong Application class — Hilt never initialises
- Base class missing annotation: @AndroidEntryPoint must be on the entire class hierarchy
- UninitializedPropertyAccessException: Kotlin's dedicated exception for an unset lateinit var — clearer than a plain Java NPE
Stack trace analysis: "UninitializedPropertyAccessException: lateinit property analytics has not been initialized — line 12 MainActivity.init." That's init-block access. Fix: move the call into onCreate() after super.onCreate(), or later. The Hilt lifecycle contract: super.onCreate() injects the Activity; onAttach() injects the Fragment. Never before.
DI mastery includes knowing its limits. Over-engineering with DI adds complexity without proportional benefit — the guiding principle is that DI should simplify the system, not complicate it.
// The principle: DI is a tool, not a religion // Use it where it provides clear value; skip it where it doesn't // ❌ DI is WRONG for: utility functions // Don't do: @Singleton class DateFormatter @Inject constructor() { fun format(ts: Long): String = /* ... */ "" } // Do: object DateFormatter { fun format(ts: Long) = ... } // ❌ DI is WRONG for: value objects and data classes data class Money(val amount: Double, val currency: String) // Money is data, not a service — never inject it via DI // ❌ DI is WRONG for: simple scripts / one-off tools // A 50-line Gradle task — DI setup overhead exceeds the benefit // ❌ DI is WRONG for: things that don't vary @Singleton class MathUtils @Inject constructor() { fun add(a: Int, b: Int) = a + b // ❌ will NEVER change — just use a function } // ✅ DI IS RIGHT for: services that VARY (testability) class UserViewModel @Inject constructor( private val repo: UserRepository // varies: real vs fake in tests ) // ✅ DI IS RIGHT for: shared expensive resources @Singleton fun provideOkHttp(): OkHttpClient // expensive, shared, one instance // ✅ DI IS RIGHT for: lifecycle management @ActivityRetainedScoped class UserSession // lifetime tied to Activity — automatically cleaned up // The DI heuristic: // Q1: "Can this dependency change?" (test vs prod) → DI it // Q2: "Is this expensive to create and shareable?" → DI it @Singleton // Q3: "Does this have a complex lifecycle?" → DI it with right scope // If all three are NO → don't inject it
- DI adds value for variation: real vs fake (testability), expensive shared objects, lifecycle management
- DI adds no value for: stateless utilities, value objects, things that never change
- Cost of DI: indirection, annotation processing, framework knowledge overhead
- Three-question heuristic: can it change? is it expensive + shared? does it have lifecycle?
- Architectural principle: DI is about inversion of control — only invert what benefits from being controlled externally
The senior answer to "when NOT to use DI" demonstrates the maturity to question your own tools. "DI is a tool for managing variation and lifetime. A stateless math utility will never vary and has no lifetime — DI adds complexity for zero gain. The smell: if you'd never write a fake for it in tests, you probably don't need to inject it."
25 questions covering Retrofit, OkHttp, interceptors, authentication, token refresh, error handling, and REST vs GraphQL for 2025-26 Android interviews.
Retrofit is a type-safe HTTP client for Android. You write an interface with annotated methods, and Retrofit generates the implementation at runtime using reflection and a dynamic proxy. Think of it as a bridge between your code and an HTTP server.
// Step 1 — Define your API interface interface UserApi { @GET("users/{id}") suspend fun getUser(@Path("id") id: String): User @POST("users") suspend fun createUser(@Body request: CreateUserRequest): User @GET("users") suspend fun getUsers(@Query("page") page: Int): List<User> } // Step 2 — Build the Retrofit instance (done once, @Singleton) val retrofit = Retrofit.Builder() .baseUrl("https://api.example.com/") .addConverterFactory(GsonConverterFactory.create()) // or Moshi, Kotlin Serialization .client(okHttpClient) .build() // Step 3 — Create the implementation (just a line) val userApi: UserApi = retrofit.create(UserApi::class.java) // Step 4 — Call it like a normal function val user = userApi.getUser("123") // suspend — main-safe, Retrofit runs it on a background thread // How it works under the hood: // retrofit.create() returns a Proxy object // Each method call → reads annotations → builds an HTTP Request // OkHttp executes the request → response parsed by Converter → your data type
- Type-safe: the compiler catches mismatched types at build time, not at runtime
- Converter: turns JSON (or XML, Protobuf) into your data classes automatically
- suspend support: built-in coroutine support — no callbacks needed
- Annotations drive everything: @GET, @POST, @Path, @Query, @Body, @Header
- Retrofit wraps OkHttp: it handles the HTTP protocol; Retrofit handles mapping
The simplest mental model: "Retrofit turns an interface into a working HTTP client. I describe what I want (GET /users/123), Retrofit figures out how to do it." The key benefit over raw OkHttp: no string-building, no manual JSON parsing — the compiler validates everything.
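The dynamic-proxy mechanism behind retrofit.create() can be demonstrated with plain JVM code. This is a toy, not Retrofit's real implementation: the `MiniApi` interface, the `GET` annotation, and `createMini` are all made up here to show the mechanism only.

```kotlin
import java.lang.reflect.Proxy

// Toy HTTP annotation (Kotlin annotations keep runtime retention by default)
annotation class GET(val path: String)

interface MiniApi {
    @GET("users/{id}")
    fun getUser(id: String): String
}

// What retrofit.create() does in miniature: a dynamic proxy intercepts
// every interface call, reads its annotation, and builds a "request".
inline fun <reified T> createMini(): T = Proxy.newProxyInstance(
    T::class.java.classLoader, arrayOf(T::class.java)
) { _, method, args ->
    val path = method.getAnnotation(GET::class.java)?.path ?: "?"
    "GET /" + path.replace("{id}", args[0] as String)
} as T

fun main() {
    val api = createMini<MiniApi>()
    // The method call became a request description, no implementation written
    println(api.getUser("123"))
}
```

Retrofit does the same thing with far more machinery: real annotations, a converter for the response body, and OkHttp to actually send the request.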
OkHttp is the actual HTTP engine — it makes the TCP connection, sends bytes, receives bytes. Retrofit sits on top and adds type safety. Think of OkHttp as the car engine, and Retrofit as the dashboard you interact with.
// OkHttp — the HTTP plumbing val client = OkHttpClient.Builder() .connectTimeout(30, TimeUnit.SECONDS) .readTimeout(30, TimeUnit.SECONDS) .writeTimeout(30, TimeUnit.SECONDS) .addInterceptor(HttpLoggingInterceptor().apply { level = HttpLoggingInterceptor.Level.BODY // logs full request/response }) .build() // Retrofit uses OkHttp internally val retrofit = Retrofit.Builder() .baseUrl("https://api.example.com/") .client(client) // give Retrofit your configured OkHttp client .addConverterFactory(GsonConverterFactory.create()) .build() // OkHttp's key features: // ✅ Connection pooling — reuses TCP connections (faster) // ✅ HTTP/2 support — multiplex multiple requests on one connection // ✅ GZIP compression — automatic response decompression // ✅ Caching — disk cache for GET responses // ✅ Interceptors — modify requests/responses in a pipeline // ✅ Transparent retry — retries on network glitches // Using OkHttp directly (without Retrofit) val request = Request.Builder().url("https://api.example.com/users").build() val response = client.newCall(request).execute() // blocking val body = response.body?.string() // raw JSON string — no auto-parsing
- OkHttp: low-level HTTP — handles connections, protocols, sockets
- Retrofit: high-level API layer — handles type mapping and interface generation
- They're separate libraries but designed to work together
- You can use OkHttp without Retrofit (for raw requests), but not Retrofit without OkHttp
- One OkHttp client shared everywhere — connection pooling saves resources
"OkHttp is the car engine, Retrofit is the steering wheel." You always configure OkHttp first (interceptors, timeouts, certificates), then hand that client to Retrofit. They're both from Square, designed to work together perfectly.
An interceptor is a middleware that sits in the HTTP request/response pipeline. Every request flows through your interceptors before reaching the server, and every response flows back through them. Two types: Application interceptors and Network interceptors.
// Interceptor pipeline: App → [app interceptors] → OkHttp → [network interceptors] → Server // APPLICATION INTERCEPTOR — runs once, before caching // Best for: auth headers, logging, request modification class AuthInterceptor(private val tokenProvider: () -> String?) : Interceptor { override fun intercept(chain: Interceptor.Chain): Response { val token = tokenProvider() val request = if (token != null) { chain.request().newBuilder() .header("Authorization", "Bearer $token") .build() } else chain.request() return chain.proceed(request) } } // NETWORK INTERCEPTOR — runs on the network layer, after caching // Best for: response modification, retry logic, cache control class CacheControlInterceptor : Interceptor { override fun intercept(chain: Interceptor.Chain): Response { val response = chain.proceed(chain.request()) return response.newBuilder() .header("Cache-Control", "max-age=60") // cache for 60 seconds .build() } } // Register them on OkHttpClient val client = OkHttpClient.Builder() .addInterceptor(AuthInterceptor { tokenStore.token }) // application .addNetworkInterceptor(CacheControlInterceptor()) // network .addInterceptor(HttpLoggingInterceptor()) // application (logging) .build() // KEY DIFFERENCE: // addInterceptor() → application — runs ONCE, even for cached responses // addNetworkInterceptor() → network — skipped if response is cached
- Application interceptors: always run, even for cached responses — great for auth headers and logging
- Network interceptors: only run when a real network request happens — great for response tweaking
- Chain.proceed(): pass the (possibly modified) request forward — MUST be called
- Order matters: interceptors run in the order they're added
- Common application interceptors: auth, logging, API key header, retry
Simple rule: "Auth header and logging → application interceptor. Caching, retries, redirect handling → network interceptor." Most production apps only use application interceptors. The most important interceptor is always the auth one.
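A follow-up interviewers often pair with this: what happens when the auth token expires mid-session? OkHttp has a dedicated hook for that, the `Authenticator`, which runs only after a 401 response. A minimal sketch, assuming a hypothetical `TokenStore` with a synchronous `refresh(): String?`:

```kotlin
import okhttp3.Authenticator
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response
import okhttp3.Route

// Reacts to 401s: refresh the token once, then retry the original request.
class TokenAuthenticator(private val tokenStore: TokenStore) : Authenticator {
    override fun authenticate(route: Route?, response: Response): Request? {
        // A prior response means we already retried once; a second 401 means
        // the refresh didn't help, so give up instead of looping forever.
        if (response.priorResponse != null) return null
        val newToken = tokenStore.refresh() ?: return null // refresh failed
        return response.request.newBuilder()
            .header("Authorization", "Bearer $newToken")
            .build()
    }
}

fun buildClient(tokenStore: TokenStore): OkHttpClient =
    OkHttpClient.Builder()
        .authenticator(TokenAuthenticator(tokenStore))
        .build()
```

The interceptor attaches the token on the way out; the Authenticator repairs it on the way back. Together they cover the whole auth lifecycle.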
There are two completely different kinds of failures: the request never reached the server (network error), or the server responded but with a non-2xx status (HTTP error). Each needs different handling.
```kotlin
// NETWORK ERROR — IOException — no response from server at all
// Causes: no internet, DNS failure, timeout, server unreachable
// HTTP ERROR — HttpException — server responded with 4xx or 5xx
// Causes: 401 Unauthorized, 404 Not Found, 500 Server Error

// Pattern 1: try-catch (simple, clear)
suspend fun getUser(id: String): Result<User> {
    return try {
        Result.success(api.getUser(id))
    } catch (e: HttpException) {
        // Server responded with error code
        val errorBody = e.response()?.errorBody()?.string()
        Result.failure(ApiException(e.code(), errorBody))
    } catch (e: IOException) {
        // No internet or timeout
        Result.failure(NetworkException("No internet connection"))
    }
}

// Pattern 2: safeApiCall wrapper (reusable across all repos)
sealed class ApiResult<out T> {
    data class Success<T>(val data: T) : ApiResult<T>()
    data class HttpError(val code: Int, val message: String?) : ApiResult<Nothing>()
    data class NetworkError(val cause: Throwable) : ApiResult<Nothing>()
}

suspend fun <T> safeApiCall(call: suspend () -> T): ApiResult<T> = try {
    ApiResult.Success(call())
} catch (e: HttpException) {
    ApiResult.HttpError(e.code(), e.response()?.errorBody()?.string())
} catch (e: IOException) {
    ApiResult.NetworkError(e)
}

// Usage — exhaustive when forces handling all cases
when (val result = safeApiCall { api.getUser(id) }) {
    is ApiResult.Success -> showUser(result.data)
    is ApiResult.HttpError -> showError("Server error ${result.code}")
    is ApiResult.NetworkError -> showError("No internet")
}
```
- IOException: no response — no internet, timeout, DNS failure, socket reset
- HttpException: server responded — 4xx client errors, 5xx server errors
- Never let exceptions propagate to the ViewModel raw — wrap in Result or sealed class
- errorBody(): contains the server's error message (JSON with details)
- safeApiCall wrapper: write once, use in every repository — consistent error handling
Always distinguish the two: "IOException means we never heard from the server — show a 'Check your internet' message. HttpException means we heard from the server but it said no — show the specific error like 'Item not found' for 404." Users need different messages for each.
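The "different message per failure type" rule can be isolated into one pure function. This is a sketch: `HttpishException` is an invented stand-in for Retrofit's `HttpException` (so the snippet needs no Retrofit dependency), and the messages are illustrative.

```kotlin
import java.io.IOException

// Invented stand-in for Retrofit's HttpException, carrying just the status code
class HttpishException(val code: Int) : RuntimeException("HTTP $code")

// Map a failure to the message the user should see
fun userMessage(e: Throwable): String = when (e) {
    is HttpishException -> when (e.code) {
        401 -> "Please log in again"
        404 -> "Item not found"
        in 500..599 -> "Server error, try again later"
        else -> "Request failed (${e.code})"
    }
    is IOException -> "Check your internet connection" // never reached the server
    else -> "Something went wrong"
}
```

Keeping this mapping in one place means every screen shows consistent, failure-appropriate messages.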
A Bearer token is a string (usually a JWT) that proves the user is logged in. You send it in every request's Authorization header. An OkHttp interceptor is the cleanest way to add it automatically to every single request.
```kotlin
// The token looks like:
// "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJ1c2VySWQiOiIxMjMifQ..."

// TokenManager — stores the current token (injected via Hilt @Singleton)
class TokenManager @Inject constructor(
    private val prefs: EncryptedSharedPreferences
) {
    fun getAccessToken(): String? = prefs.getString("access_token", null)

    fun saveTokens(access: String, refresh: String) {
        prefs.edit()
            .putString("access_token", access)
            .putString("refresh_token", refresh)
            .apply()
    }
}

// Auth interceptor — adds token to every request
class AuthInterceptor @Inject constructor(
    private val tokenManager: dagger.Lazy<TokenManager>
) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val token = tokenManager.get().getAccessToken()
        val request = chain.request().newBuilder()
            .apply { if (token != null) header("Authorization", "Bearer $token") }
            .build()
        return chain.proceed(request)
    }
}

// Wire it up in Hilt module
@Provides
@Singleton
fun provideOkHttp(authInterceptor: AuthInterceptor): OkHttpClient =
    OkHttpClient.Builder()
        .addInterceptor(authInterceptor)
        .build()

// For public endpoints (login, signup) — skip the token
// Option: check request URL and skip adding header for "/auth/*" paths
if (!chain.request().url.encodedPath.startsWith("/auth")) {
    // only add header for non-auth endpoints
}
```
- Bearer token: sent in Authorization header — server uses it to identify the user
- Interceptor approach: one place to add auth — all requests automatically include the token
- EncryptedSharedPreferences: store tokens securely — never in plain SharedPreferences
- dagger.Lazy: avoids circular dependency between OkHttp and TokenManager
- Skip for public endpoints: login/signup don't need a token — check URL pattern
"I store tokens in EncryptedSharedPreferences, read them in an AuthInterceptor, and add the Authorization header there. This means zero token management code in my repositories — they just call the API and the token is added automatically. Every new API endpoint gets auth for free."
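The "skip public endpoints" check is worth pulling out as a tiny testable predicate. The `/auth` and `/health` prefixes here are assumptions; match them to your real API's public paths.

```kotlin
// Paths that must NOT receive an Authorization header (hypothetical list)
val publicPrefixes = listOf("/auth", "/health")

// True if the interceptor should attach the Bearer token to this path
fun needsAuthHeader(encodedPath: String): Boolean =
    publicPrefixes.none { encodedPath.startsWith(it) }
```

The interceptor would call this with `chain.request().url.encodedPath` before adding the header.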
When the access token expires, the server returns 401. A smart interceptor catches that 401, silently fetches a new token using the refresh token, and retries the original request — the user sees nothing.
```kotlin
// The flow:
// Request → 401 Unauthorized → refresh token → get new access token → retry original request
class TokenRefreshInterceptor @Inject constructor(
    private val tokenManager: TokenManager,
    private val authApi: dagger.Lazy<AuthApi> // Lazy to avoid circular dep
) : Interceptor {

    private val mutex = Mutex() // prevent multiple simultaneous refresh calls

    override fun intercept(chain: Interceptor.Chain): Response {
        val response = chain.proceed(chain.request())

        // Not a token expiry — return response immediately
        if (response.code != 401) return response
        response.close() // ← IMPORTANT: must close before retrying

        // The token this request failed with — used for the stale check below
        val failedToken = chain.request().header("Authorization")?.removePrefix("Bearer ")

        // Refresh using runBlocking (interceptors are blocking by design)
        val newToken = runBlocking {
            mutex.withLock { // only one coroutine refreshes at a time
                // If another coroutine already refreshed, use its token
                val existing = tokenManager.getAccessToken()
                if (existing != null && existing != failedToken) {
                    return@withLock existing // token was already refreshed by another call
                }
                // Fetch fresh tokens
                val refreshToken = tokenManager.getRefreshToken() ?: return@withLock null
                val tokens = authApi.get().refreshToken(RefreshRequest(refreshToken))
                tokenManager.saveTokens(tokens.accessToken, tokens.refreshToken)
                tokens.accessToken
            }
        }

        if (newToken == null) {
            tokenManager.clearTokens() // refresh failed — force logout
            return chain.proceed(chain.request()) // will get 401 again → logout triggered
        }

        // Retry original request with new token
        val newRequest = chain.request().newBuilder()
            .header("Authorization", "Bearer $newToken")
            .build()
        return chain.proceed(newRequest)
    }
}
```
- 401 triggers refresh: access token expired — use refresh token to get new one
- Mutex prevents stampede: if 10 requests fail simultaneously, only one refreshes — others wait and use the new token
- response.close(): MUST close the 401 response before making another request — prevents leak
- Stale check: if another thread already refreshed, use that token instead of refreshing again
- Refresh fails → logout: if refresh token is also expired, clear all tokens and route to login
The Mutex is the key insight interviewers look for. "If the user has 10 parallel API calls and all get 401 simultaneously, without a Mutex you'd make 10 refresh calls. With Mutex, only 1 refreshes — the other 9 wait and automatically use the new token." This shows you understand concurrency in networking.
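The single-flight behaviour can be demonstrated without Android or coroutines. This sketch swaps the coroutine `Mutex` for a `ReentrantLock` (same role, thread-based) and counts how many real refresh calls happen when many callers race; `SingleFlightRefresher` and its members are invented names.

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.thread
import kotlin.concurrent.withLock

// Thread-based sketch of single-flight token refresh.
// fetchNewToken simulates the network call to the auth server.
class SingleFlightRefresher(private val fetchNewToken: () -> String) {
    private val lock = ReentrantLock()
    @Volatile private var token: String? = null

    // expiredToken = the token the failing request used. If the stored token
    // already differs, another caller refreshed while we waited: reuse theirs.
    fun refresh(expiredToken: String?): String = lock.withLock {
        val current = token
        if (current != null && current != expiredToken) current
        else fetchNewToken().also { token = it }
    }
}
```

With ten threads all holding the same expired token, only the first one inside the lock performs a refresh; the other nine observe the updated token and return it.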
Kotlin Serialization is the 2025 recommendation — it's Kotlin-first, works with Kotlin Multiplatform, and is fast. Moshi is a great second choice. Gson is legacy — it has null-safety issues with Kotlin.
```kotlin
// ✅ BEST: Kotlin Serialization (2025 recommendation)
// build.gradle.kts:
// plugins { kotlin("plugin.serialization") }
// implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.7.0")
// implementation("com.jakewharton.retrofit2:retrofit2-kotlinx-serialization-converter:1.0.0")
@Serializable
data class User(
    val id: String,
    val name: String,
    @SerialName("created_at") val createdAt: String // maps snake_case → camelCase
)

val retrofit = Retrofit.Builder()
    .addConverterFactory(Json.asConverterFactory("application/json".toMediaType()))
    .build()

// ✅ GOOD: Moshi — Kotlin-aware, null-safe
data class User(
    val id: String,
    @Json(name = "created_at") val createdAt: String
)

// Note: KotlinJsonAdapterFactory is reflection-based; for code generation use
// Moshi's codegen artifact with @JsonClass(generateAdapter = true)
val moshi = Moshi.Builder().add(KotlinJsonAdapterFactory()).build()

// ❌ AVOID: Gson — has Kotlin null-safety issues
// Gson uses Java reflection — doesn't respect Kotlin's non-null types
// This compiles but CRASHES at runtime:
data class User(val name: String) // non-null
// If API returns: {"name": null}
// Gson sets name = null anyway → NPE when you use it
// Kotlin Serialization / Moshi: throws an exception immediately ✅ (fail fast)

// Comparison:
//                       Gson   Moshi   KotlinX Serialization
// Kotlin-aware           ❌      ✅       ✅
// KMP support            ❌      ❌       ✅
// Code generation        ❌      ✅       ✅
// Null-safety            ❌      ✅       ✅
// JetBrains maintains    ❌      ❌       ✅
```
- Kotlin Serialization: Kotlin-native, KMP-ready, null-safe, compile-time code gen — 2025 default
- Moshi: Kotlin-aware, null-safe, good performance — safe choice if already using it
- Gson: legacy — doesn't respect Kotlin null safety, can cause silent runtime NPEs
- @SerialName / @Json: map snake_case JSON to camelCase Kotlin fields
- Code generation vs reflection: Moshi (via its codegen artifact) and KotlinX use compile-time code gen — faster and safer than Gson's runtime reflection
"I use Kotlin Serialization in all new projects — it's Kotlin-native, works with KMP, and uses code generation so there's no reflection overhead. Gson is the classic 'gotcha' interview topic: if the API sends null for a non-null Kotlin field, Gson sets it to null anyway and your app crashes later in a confusing place."
OkHttp has a built-in disk cache that stores GET responses. When offline, you can force OkHttp to serve stale cached data instead of failing with a network error — giving users something to look at.
```kotlin
// Set up OkHttp cache — 10MB on disk
val cacheDir = File(context.cacheDir, "http_cache")
val cache = Cache(cacheDir, 10 * 1024 * 1024) // 10 MB

val client = OkHttpClient.Builder()
    .cache(cache)
    .addInterceptor(offlineCacheInterceptor(context))
    .addNetworkInterceptor(onlineCacheInterceptor())
    .build()

// ONLINE: tell OkHttp to cache responses for 5 minutes
fun onlineCacheInterceptor() = Interceptor { chain ->
    chain.proceed(chain.request()).newBuilder()
        .header("Cache-Control", "public, max-age=300") // 5 min cache
        .build()
}

// OFFLINE: when no internet, use cached data up to 7 days old
fun offlineCacheInterceptor(context: Context) = Interceptor { chain ->
    var request = chain.request()
    if (!context.isConnected()) {
        request = request.newBuilder()
            .header("Cache-Control", "public, only-if-cached, max-stale=604800")
            .build() // use cache even if 7 days stale
    }
    chain.proceed(request)
}

// Helper
fun Context.isConnected(): Boolean {
    val cm = getSystemService(ConnectivityManager::class.java)
    return cm.activeNetwork != null
}

// Server must send Cache-Control headers for this to work!
// "Cache-Control: no-cache"   → OkHttp won't cache
// "Cache-Control: max-age=60" → cache for 60 seconds
```
- OkHttp cache: stores GET responses to disk — read them back when offline or when data is fresh
- max-age: how long to serve from cache before making a network request
- max-stale: how old a cached response can be when offline — you decide the trade-off
- only-if-cached: force OkHttp to use cache even if expired — avoids network error when offline
- Server cooperation: server must send Cache-Control headers; you can override them in network interceptor
OkHttp cache is great for simple offline support but not for full offline-first apps. "For product listings or news I use OkHttp cache — quick to implement. For anything user-specific (cart, orders) I use Room as the source of truth — OkHttp cache doesn't survive app restarts reliably."
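How `max-age` and `max-stale` interact can be written as a small decision table. This is a simplified model of the caching logic, not OkHttp's actual implementation: `CacheDecision` and `decide` are invented names, and real HTTP caching has more directives than this.

```kotlin
// Simplified model of the serve-from-cache decision. Ages/limits in seconds.
sealed class CacheDecision {
    object ServeFromCache : CacheDecision()
    object GoToNetwork : CacheDecision()
    object Unsatisfiable : CacheDecision() // OkHttp surfaces this as a 504
}

fun decide(ageSec: Long, maxAgeSec: Long, maxStaleSec: Long, online: Boolean): CacheDecision =
    when {
        ageSec <= maxAgeSec -> CacheDecision.ServeFromCache              // still fresh
        online -> CacheDecision.GoToNetwork                              // stale: refetch
        ageSec <= maxAgeSec + maxStaleSec -> CacheDecision.ServeFromCache // stale but allowed offline
        else -> CacheDecision.Unsatisfiable                              // too stale, no network
    }
```

With `max-age=300` and offline `max-stale=604800`, an hour-old response fails online-freshness but is still served offline, which is exactly the trade-off the interceptors above set up.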
The trick is using Retrofit's Response<T> wrapper which gives you access to both the success body and the raw error body — letting you parse the error into your error model even on non-2xx responses.
```kotlin
// API returns: 200 → { "user": {...} }
// API returns: 400 → { "error": "Invalid email", "code": "INVALID_EMAIL" }

// Error response model
@Serializable
data class ApiError(val error: String, val code: String)

// Use Response<T> wrapper in Retrofit interface
interface UserApi {
    @POST("users/register")
    suspend fun register(@Body req: RegisterRequest): Response<User>
    // Response<T> never throws — even 4xx/5xx come back as a Response object
}

// In repository — parse both success and error
suspend fun register(email: String, password: String): ApiResult<User> {
    val response = api.register(RegisterRequest(email, password))
    return if (response.isSuccessful) {
        val body = response.body()
        if (body != null) ApiResult.Success(body)
        else ApiResult.HttpError(200, "Empty response body")
    } else {
        // Parse error body into ApiError
        val errorJson = response.errorBody()?.string()
        val apiError = try {
            Json.decodeFromString<ApiError>(errorJson ?: "")
        } catch (e: Exception) {
            null
        }
        ApiResult.HttpError(response.code(), apiError?.error ?: "Unknown error")
    }
}

// ViewModel uses the result cleanly
when (val result = userRepo.register(email, pass)) {
    is ApiResult.Success -> navigateToHome()
    is ApiResult.HttpError -> showError(result.message) // "Invalid email"
    else -> showError("Network error")
}
```
- Response<T>: Retrofit's wrapper — never throws on HTTP errors, gives you full control
- isSuccessful(): true for 200-299 response codes
- errorBody(): the raw error JSON from the server — parse it into your error model
- body(): null on error responses — always null-check it
- Two models: success type T in Response<T>, error parsed separately from errorBody()
"I use Response<T> when the API has rich error responses I need to show the user. It never throws — I check isSuccessful(), parse errorBody() on failure. For simple endpoints where I just need the data or an error message, the suspend fun without Response and a safeApiCall wrapper is cleaner."
SSL pinning ties your app to a specific server certificate or public key -- even a legitimate CA-issued certificate for the same domain will be rejected if it doesn't match the pinned value. This prevents man-in-the-middle attacks where an attacker intercepts traffic using a certificate from a compromised CA.
```kotlin
// CertificatePinner — pin to specific certificate public key hash
val pinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .add("api.example.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=") // backup pin
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(pinner)
    .build()

// Network Security Config (XML) — declarative alternative
// res/xml/network_security_config.xml
// <network-security-config><domain-config><pin-set><pin digest="SHA-256">...</pin>

// Get the hash for your certificate:
// openssl s_client -connect api.example.com:443 \
//   | openssl x509 -pubkey -noout \
//   | openssl pkey -pubin -outform DER \
//   | openssl dgst -sha256 -binary | base64
```
- SSL pinning: reject any certificate that doesn't match the pinned hash -- even a valid CA-signed cert for the same domain
- Always pin at least two certificates: the current cert and a backup -- if only one is pinned and it expires, the app breaks for all users
- CertificatePinner vs Network Security Config: CertificatePinner is code-level (per OkHttpClient), NSC is manifest-level XML (applies app-wide)
- Pin rotation risk: when your cert expires you must ship an app update before the new cert takes effect -- plan rotation windows carefully
- Testing: use a proxy like Charles or mitmproxy — a correctly pinned app will reject the proxy's certificate and fail the connection, confirming pinning works
"SSL pinning is powerful but dangerous without planning. I always pin two hashes — current and a pre-generated backup. And I use Network Security Config with an expiration date so if something goes wrong, old pins auto-expire. Also, pin the intermediate CA hash instead of the leaf cert — intermediates change far less often."
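For intuition on what the `sha256/...` string actually is: it's the Base64 of a SHA-256 digest over the DER-encoded public key (SubjectPublicKeyInfo), which the `openssl` pipeline above produces. A minimal sketch of that derivation, with a plain byte array standing in for the key and `pinFor` as an invented helper name:

```kotlin
import java.security.MessageDigest
import java.util.Base64

// Derive an OkHttp-style pin string from DER-encoded public key bytes
fun pinFor(publicKeyDer: ByteArray): String {
    val digest = MessageDigest.getInstance("SHA-256").digest(publicKeyDer)
    return "sha256/" + Base64.getEncoder().encodeToString(digest)
}
```

In practice you never compute this in the app; you compute it once offline (or with the openssl command) and hard-code the resulting string in `CertificatePinner`.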
Each annotation controls where your parameter ends up in the HTTP request. Understanding this lets you call any REST API correctly without guessing.
```kotlin
interface ProductApi {

    // @Path — replaces a placeholder in the URL
    // Result: GET /products/42
    @GET("products/{id}")
    suspend fun getProduct(@Path("id") productId: Int): Product

    // @Query — appended as URL query parameters
    // Result: GET /products?page=2&limit=20&category=shoes
    @GET("products")
    suspend fun getProducts(
        @Query("page") page: Int = 1,
        @Query("limit") limit: Int = 20,
        @Query("category") category: String? = null // null = omitted from URL
    ): List<Product>

    // @Body — serialised as JSON in the request body
    // Result: POST /products with JSON body
    @POST("products")
    suspend fun createProduct(@Body product: CreateProductRequest): Product

    // @Header — adds a single request header
    // Result: GET /products with "X-Store-Id: 99" header
    @GET("products")
    suspend fun getStoreProducts(@Header("X-Store-Id") storeId: Int): List<Product>

    // @QueryMap — dynamic set of query params
    @GET("products/search")
    suspend fun search(@QueryMap filters: Map<String, String>): List<Product>

    // @FormUrlEncoded + @Field — form submission
    @FormUrlEncoded
    @POST("login")
    suspend fun login(@Field("email") email: String, @Field("password") pw: String): AuthResponse
}
```
- @Path: substitutes a {placeholder} in the URL — use for resource identifiers
- @Query: appended as ?key=value to URL — use for filtering, pagination, sorting
- @Body: serialised to JSON and placed in request body — use for POST/PUT data
- @Header: adds one header to the request — use for per-request headers (not auth — use interceptor)
- @QueryMap: dynamic collection of query params — use for search filters with unknown keys
Memory trick: Path = in the URL road. Query = after the ? question mark. Body = in the package/envelope. Header = on the envelope label. The most common mistake: putting filtering parameters as @Path instead of @Query — that changes the URL structure.
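The Path-vs-Query distinction comes down to where the value lands in the final URL. A toy illustration (not Retrofit's implementation; `buildUrl` is an invented helper) of the substitution Retrofit performs:

```kotlin
// @Path values replace {placeholders}; @Query values go after the '?'
// and nulls are simply omitted, mirroring Retrofit's behaviour.
fun buildUrl(template: String, path: Map<String, String>, query: Map<String, String?>): String {
    var url = template
    path.forEach { (k, v) -> url = url.replace("{$k}", v) } // @Path substitution
    val qs = query.filterValues { it != null }
        .map { (k, v) -> "$k=$v" }
        .joinToString("&")                                  // @Query string
    return if (qs.isEmpty()) url else "$url?$qs"
}
```

Putting a filter value into the path template instead of the query map changes the URL's structure, which is the classic mistake the memory trick guards against.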
REST (Representational State Transfer) is an architectural style for APIs. Resources (things) are identified by URLs. HTTP methods (verbs) describe what action to perform on that resource. Retrofit annotations map directly to these.
```kotlin
// REST — think of URLs as nouns and HTTP methods as verbs
// URL: /users/123 = "the user with id 123"
interface RestApi {

    // GET — READ data (safe, idempotent, cacheable)
    @GET("users")
    suspend fun getAllUsers(): List<User>

    @GET("users/{id}")
    suspend fun getUser(@Path("id") id: String): User

    // POST — CREATE a new resource
    @POST("users")
    suspend fun createUser(@Body user: CreateUserRequest): User

    // PUT — REPLACE entire resource (idempotent)
    @PUT("users/{id}")
    suspend fun replaceUser(@Path("id") id: String, @Body user: User): User

    // PATCH — UPDATE part of a resource
    @PATCH("users/{id}")
    suspend fun updateUser(@Path("id") id: String, @Body update: UserUpdate): User

    // DELETE — REMOVE a resource (idempotent)
    @DELETE("users/{id}")
    suspend fun deleteUser(@Path("id") id: String)
}

// Key concepts:
// Idempotent: calling it multiple times = same result as calling once
// GET/PUT/DELETE are idempotent; POST is NOT
// "GET /orders" twice  → same list (no side effects)
// "POST /orders" twice → two separate orders created!

// HTTP Status Codes to know:
// 200 OK           — success with body
// 201 Created      — POST succeeded, resource created
// 204 No Content   — success, no body (DELETE)
// 400 Bad Request  — client sent invalid data
// 401 Unauthorized — not authenticated (no/invalid token)
// 403 Forbidden    — authenticated but not authorised
// 404 Not Found    — resource doesn't exist
// 500 Server Error — server crashed
```
- GET: fetch data — safe, idempotent, cacheable — never has a request body
- POST: create — NOT idempotent (calling twice creates two resources)
- PUT: replace completely — idempotent (calling twice = same result)
- PATCH: partial update — only send the fields you want to change
- DELETE: remove — idempotent (deleting twice = resource still deleted)
The idempotency question is a common trap. "POST /payments twice → charged twice. PUT /payments/123 twice → same payment updated twice (same result). This is why payment APIs use POST with a unique idempotency key — the server deduplicates based on that key, preventing double charges."
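The idempotency-key pattern is easy to demonstrate with a toy in-memory server. Everything here is invented for illustration (`FakePaymentServer`, the key names, the payment-id format); the point is only that the server deduplicates by key, so a retried POST does not charge twice.

```kotlin
// Toy server: remembers each Idempotency-Key and returns the stored result
// instead of performing the charge again.
class FakePaymentServer {
    private val processed = mutableMapOf<String, String>() // key -> paymentId
    var chargeCount = 0
        private set

    fun post(idempotencyKey: String, amount: Int): String =
        processed.getOrPut(idempotencyKey) {
            chargeCount++ // the real charge happens exactly once per key
            "payment-$chargeCount-of-$amount"
        }
}
```

The client generates a fresh key per logical payment (not per HTTP attempt), so network-level retries reuse the key and are safe.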
REST (Representational State Transfer) is a URL-based API style where each endpoint represents a resource and HTTP verbs (GET, POST, PUT, DELETE) define operations. GraphQL is a query language where the client specifies exactly which fields it needs in a single request. REST is simpler to cache and easier to debug; GraphQL eliminates over-fetching and the n+1 problem.
```kotlin
// REST — multiple round trips to assemble one screen
val user = api.getUser(id)     // GET /users/{id}
val orders = api.getOrders(id) // GET /orders?userId={id}
val prefs = api.getPrefs(id)   // GET /preferences/{id}

// GraphQL — one request, client specifies exactly which fields
// (${'$'} is how you write a literal '$' inside a Kotlin raw string)
val query = """
    query GetDashboard(${'$'}userId: ID!) {
        user(id: ${'$'}userId) { name email }
        orders(userId: ${'$'}userId) { id total status }
        preferences(userId: ${'$'}userId) { theme notifications }
    }
"""

// Apollo Kotlin client:
val response = apolloClient.query(GetDashboardQuery(userId = id)).execute()
```
- REST over-fetching: GET /users/{id} returns 40 fields but the screen needs only name and avatar -- GraphQL lets the client request exactly those two
- REST n+1 problem: fetching a list of 20 orders requires 1 list request + 20 user detail requests -- GraphQL resolves nested data in one round trip
- REST advantages: HTTP caching works natively (GET responses cache), simpler to debug with curl, no client code generation needed
- GraphQL advantages: single endpoint, no versioning needed, type-safe schema, client-driven shape -- ideal for mobile where bandwidth matters
- Choose REST for: simple CRUD APIs, public APIs, teams unfamiliar with GraphQL. Choose GraphQL for: complex screens assembling data from many sources, mobile bandwidth sensitivity
"REST is simpler and better supported by caching infrastructure. GraphQL shines when different clients need different data shapes from the same API — mobile needs 3 fields, web needs 15. Instead of maintaining two REST endpoints, one GraphQL schema serves both. For a startup I'd default to REST; for a mature platform with many clients, GraphQL makes more sense."
Exponential backoff means waiting progressively longer between retries — 1s, 2s, 4s. This prevents hammering a struggling server, which would make things worse. Add jitter (random small delay) to prevent all clients retrying at the same moment.
```kotlin
// Approach 1: repository-level retry with a suspend helper
suspend fun <T> retryWithBackoff(
    times: Int = 3,
    initialDelay: Long = 1_000,
    maxDelay: Long = 16_000,
    block: suspend () -> T
): T {
    var currentDelay = initialDelay
    repeat(times - 1) { attempt ->
        try {
            return block()
        } catch (e: IOException) {
            // Only retry on network errors, not HTTP errors
            Log.w("Retry", "Attempt ${attempt + 1} failed: ${e.message}")
        }
        val jitter = (0..500).random() // random 0-500ms to spread retries
        delay(currentDelay + jitter)
        currentDelay = minOf(currentDelay * 2, maxDelay) // double but cap at max
    }
    return block() // last attempt — let exception propagate if it fails
}

// Usage in repository
suspend fun syncData(): SyncResult = retryWithBackoff(times = 3) {
    api.sync()
}

// Approach 2: OkHttp interceptor (applies to all requests)
class RetryInterceptor(private val maxRetries: Int = 3) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        var attempt = 0
        while (true) {
            try {
                return chain.proceed(chain.request())
            } catch (e: IOException) {
                if (++attempt >= maxRetries) throw e // exhausted — give up
                val waitMs = (1_000 * (attempt * attempt)).toLong() // 1s, 4s, 9s
                Thread.sleep(waitMs) // OkHttp interceptors are blocking
            }
        }
    }
}

// What to retry and what NOT to retry:
// ✅ Retry: IOException (timeout, socket reset, server unreachable)
// ✅ Retry: 500, 502, 503, 504 (server temporarily down)
// ❌ Never retry: 400 Bad Request (wrong data — retrying won't help)
// ❌ Never retry: 401 Unauthorized (need to refresh token first)
// ❌ Never retry: 404 Not Found (resource doesn't exist)
```
- Exponential backoff: 1s → 2s → 4s → 8s — gives the server time to recover
- Jitter: random delay added to avoid thundering herd — all clients retrying at exact same time
- Only retry transient failures: IOExceptions and 5xx — never retry 4xx client errors
- Cap the delay: maxDelay prevents waiting forever on long outages
- Two approaches: OkHttp interceptor (all requests) vs repository-level (per operation)
Jitter is the key senior insight: "Without jitter, if 10,000 users all get a 503 at the same time, they all retry at t=1s, t=2s, t=4s simultaneously — creating a retry storm that makes the server problem worse. Random jitter spreads them out so the server can recover."
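The delay schedule itself is pure arithmetic and easy to verify in isolation. A sketch with invented helper names (`backoffSchedule`, `withJitter`) mirroring the parameters used above:

```kotlin
import kotlin.random.Random

// Exponential doubling capped at maxMs: 1s, 2s, 4s, 8s, 16s, 16s, ...
fun backoffSchedule(attempts: Int, initialMs: Long = 1_000, maxMs: Long = 16_000): List<Long> {
    var d = initialMs
    return List(attempts) {
        val current = d
        d = minOf(d * 2, maxMs)
        current
    }
}

// Add 0-500ms of jitter so clients don't retry in lockstep
fun withJitter(delayMs: Long, rng: Random = Random.Default): Long =
    delayMs + rng.nextLong(0, 501)
```

Separating the schedule from the retry loop also makes the backoff policy unit-testable without any networking.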
Kotlin Serialization is the modern way to parse JSON in Kotlin — no reflection, compile-time safe, KMP compatible. A few configuration options make it production-ready.
```kotlin
// build.gradle.kts
// plugins { kotlin("plugin.serialization") }
// implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.7.0")
// implementation("com.jakewharton.retrofit2:retrofit2-kotlinx-serialization-converter:1.0.0")

// Configure Json instance — do this ONCE in your DI module
val json = Json {
    ignoreUnknownKeys = true // ← CRITICAL: don't crash if API adds new fields
    isLenient = true         // accept slight variations in JSON format
    encodeDefaults = false   // omit fields still at their default values from request bodies
    coerceInputValues = true // coerce invalid/null input to the default for non-null fields
}

// Wire into Retrofit
val retrofit = Retrofit.Builder()
    .addConverterFactory(json.asConverterFactory("application/json".toMediaType()))
    .build()

// Data classes — annotate with @Serializable
@Serializable
data class Product(
    val id: String,
    val name: String,
    @SerialName("base_price") val basePrice: Double,       // API sends "base_price", Kotlin uses camelCase
    @SerialName("image_url") val imageUrl: String? = null, // optional field — defaults to null
    @SerialName("is_available") val isAvailable: Boolean = true // default value if field absent
)

// Polymorphic types — different subclasses from same field
@Serializable
sealed class Event {
    @Serializable
    @SerialName("order")
    data class OrderEvent(val orderId: String) : Event()

    @Serializable
    @SerialName("payment")
    data class PaymentEvent(val amount: Double) : Event()
}
```
- @Serializable: marks a class for JSON serialization — required on every model class
- @SerialName: maps JSON field names (snake_case) to Kotlin property names (camelCase)
- ignoreUnknownKeys=true: critical for production — new API fields don't crash old app versions
- Default values: fields with defaults become optional in JSON — won't crash if absent
- No reflection: Kotlin Serialization uses compile-time code generation — faster than Gson
"ignoreUnknownKeys = true is the most important production setting. Without it, your app crashes when the backend team adds a new field to the API response — and old app versions can't be updated. With it, new fields are silently ignored."
HttpLoggingInterceptor from OkHttp logs every request and response. The key: only add it when BuildConfig.DEBUG is true — that way production builds never log sensitive data.
```kotlin
// HttpLoggingInterceptor — logs full HTTP traffic
// implementation("com.squareup.okhttp3:logging-interceptor:4.12.0")
@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {

    @Provides
    @Singleton
    fun provideOkHttp(authInterceptor: AuthInterceptor): OkHttpClient =
        OkHttpClient.Builder()
            .addInterceptor(authInterceptor) // auth always
            .apply {
                if (BuildConfig.DEBUG) { // logging ONLY in debug
                    addInterceptor(
                        HttpLoggingInterceptor(HttpLoggingInterceptor.Logger.DEFAULT)
                            .apply { level = HttpLoggingInterceptor.Level.BODY }
                    )
                }
            }
            .build()
}

// Log levels — choose based on what you need:
// Level.NONE    → nothing logged
// Level.BASIC   → "→ POST https://api.example.com/users (200 OK, 45ms)"
// Level.HEADERS → above + all headers
// Level.BODY    → everything including request + response JSON

// Production alternative — Timber with crash reporting
val logger = HttpLoggingInterceptor { message ->
    Timber.tag("HTTP").d(message) // routed through Timber (easy to disable)
}

// Sensitive data — redact auth tokens from logs
class SanitisedLogger : HttpLoggingInterceptor.Logger {
    override fun log(message: String) {
        val sanitised = message.replace(Regex("Bearer [\\w.-]+"), "Bearer [REDACTED]")
        Timber.d(sanitised)
    }
}
```
- BuildConfig.DEBUG: compile-time constant — the entire logging code is removed from release builds
- Level.BODY: most verbose — shows JSON bodies, great for debugging API integration
- Never log in production: API responses may contain PII (names, emails, tokens)
- Custom Logger: sanitise logs to redact auth tokens — they appear in headers and should never be logged
- Timber integration: route OkHttp logs through Timber for consistent logging infrastructure
"The BuildConfig.DEBUG check is not just good practice — it's security. API responses can contain user data, access tokens, and PII. Logging those in production would be a data breach risk. In debug mode I use Level.BODY to see everything; in production, nothing is logged."
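The redaction logic from `SanitisedLogger` is plain Kotlin and worth checking on its own; a quick extraction of that regex (the pattern matches JWT-like tokens of word characters, dots, and hyphens):

```kotlin
// Same pattern as the SanitisedLogger above, isolated for testing
val bearerPattern = Regex("Bearer [\\w.-]+")

fun redact(logLine: String): String =
    bearerPattern.replace(logLine, "Bearer [REDACTED]")
```

Testing the redaction separately catches regex mistakes (say, a token alphabet the pattern misses) before they leak real tokens into logs.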
File uploads use Multipart form data — the request body contains multiple "parts", each with its own headers and content. Retrofit makes this clean with @Multipart and @Part annotations.
```kotlin
// API interface for file upload
interface UploadApi {
    @Multipart
    @POST("users/{id}/avatar")
    suspend fun uploadAvatar(
        @Path("id") userId: String,
        @Part avatar: MultipartBody.Part,              // the file
        @Part("description") description: RequestBody  // text field
    ): UploadResponse
}

// In your repository — convert file to MultipartBody.Part
suspend fun uploadAvatar(userId: String, imageFile: File): UploadResponse {
    // Create request body from file
    val requestBody = imageFile.asRequestBody("image/jpeg".toMediaType())

    // Wrap in MultipartBody.Part with field name "avatar"
    val avatarPart = MultipartBody.Part
        .createFormData("avatar", imageFile.name, requestBody)

    // Text field alongside the file
    val descriptionBody = "Profile photo".toRequestBody("text/plain".toMediaType())

    return api.uploadAvatar(userId, avatarPart, descriptionBody)
}

// Upload from URI (picked from gallery)
suspend fun uploadFromUri(context: Context, userId: String, uri: Uri): UploadResponse {
    val bytes = context.contentResolver.openInputStream(uri)!!.use { it.readBytes() }

    val requestBody = bytes.toRequestBody("image/jpeg".toMediaType())
    val part = MultipartBody.Part.createFormData("avatar", "photo.jpg", requestBody)
    return api.uploadAvatar(userId, part, "Profile photo".toRequestBody())
}

// For upload progress — use a counting RequestBody wrapper
// Or use Firebase Storage / S3 pre-signed URLs for large files
```
- @Multipart: marks the request as multipart/form-data — required for file uploads
- MultipartBody.Part: wraps the file bytes with the field name and filename
- MediaType: tells the server what kind of file you're sending (image/jpeg, application/pdf)
- Text fields alongside files: use @Part("fieldName") RequestBody for non-file form fields
- Large files: consider pre-signed S3/GCS URLs — upload directly to cloud storage, not through your API
"For profile photos I use Multipart with Retrofit. For anything large (video, documents > 5MB) I use a pre-signed S3 URL from the backend — the client uploads directly to S3 with an expiring signed URL. This avoids routing megabytes through my backend server."
DTOs (Data Transfer Objects) match the API's JSON exactly. Domain models represent your app's business concepts. Keeping them separate means a backend API change only affects the DTO layer — not your entire codebase.
```kotlin
// DTO — matches the API JSON exactly
@Serializable
data class UserDto(
    @SerialName("user_id") val userId: String,
    @SerialName("full_name") val fullName: String,
    @SerialName("email_addr") val emailAddr: String,
    @SerialName("birth_date") val birthDate: String,    // "1990-05-15" — raw string
    @SerialName("account_type") val accountType: String // "PRO" / "FREE" — raw string
)

// Domain model — clean, business-focused, no API details
data class User(
    val id: String,
    val name: String,
    val email: String,
    val birthDate: LocalDate, // proper type, not string
    val tier: AccountTier     // enum, not raw string
)

enum class AccountTier { FREE, PRO, ENTERPRISE }

// Mapper — extension function in the data layer
fun UserDto.toDomain() = User(
    id = userId,
    name = fullName,
    email = emailAddr,
    birthDate = LocalDate.parse(birthDate), // String → LocalDate
    tier = when (accountType.uppercase()) {
        "PRO" -> AccountTier.PRO
        "ENTERPRISE" -> AccountTier.ENTERPRISE
        else -> AccountTier.FREE
    }
)

// Repository maps DTO → domain before returning
class UserRepositoryImpl @Inject constructor(private val api: UserApi) : UserRepository {
    override suspend fun getUser(id: String): User =
        api.getUser(id).toDomain() // ViewModel never sees UserDto
}
```
- DTO: API-shaped, has @SerialName, knows about snake_case — lives in data layer only
- Domain model: clean Kotlin types (LocalDate, enums) — lives in domain layer, no API knowledge
- Mapper function: converts DTO → domain — the only translation point
- API rename impact: user_id renamed to id → change DTO and mapper only — domain unchanged
- ViewModel sees only domain models: zero knowledge of API field names or raw string types
"The maintenance argument: if the API renames 'user_id' to 'id', without this separation you'd grep the entire codebase and update 50 files. With DTOs, you change one @SerialName annotation. The mapper absorbs the change — everything above it is unaffected."
Testing network code in isolation requires either mocking the API interface or using MockWebServer to serve fake HTTP responses. Both approaches run fast, offline, and deterministically.
```kotlin
// Approach 1: Fake API (simplest — just implement the interface)
class FakeUserApi : UserApi {
    var shouldThrow = false
    var userToReturn = UserDto("1", "Alice", "alice@example.com", "1990-01-01", "PRO")

    override suspend fun getUser(id: String): UserDto {
        if (shouldThrow) throw IOException("No internet")
        return userToReturn
    }
}

class UserRepositoryTest {
    private val fakeApi = FakeUserApi()
    private val repo = UserRepositoryImpl(fakeApi)

    @Test
    fun getUser_returnsUser() = runTest {
        val user = repo.getUser("1")
        assertEquals("Alice", user.name)
        assertEquals(AccountTier.PRO, user.tier) // also tests mapper!
    }

    @Test
    fun getUser_onNetworkError_throwsNetworkException() = runTest {
        fakeApi.shouldThrow = true
        // assumes the repository wraps IOException in a domain NetworkException
        assertThrows<NetworkException> { repo.getUser("1") }
    }
}

// Approach 2: MockWebServer — real HTTP requests against a local server
// testImplementation("com.squareup.okhttp3:mockwebserver:4.12.0")
class UserRepositoryIntegrationTest {
    private val server = MockWebServer()

    @Before fun setUp() { server.start() }
    @After fun tearDown() { server.shutdown() }

    @Test
    fun getUser_parsesResponseCorrectly() = runTest {
        server.enqueue(MockResponse()
            .setResponseCode(200)
            .setBody("""{"user_id":"1","full_name":"Alice","email_addr":"alice@example.com","birth_date":"1990-01-01","account_type":"PRO"}"""))

        val retrofit = buildRetrofit(server.url("/").toString())
        val repo = UserRepositoryImpl(retrofit.create(UserApi::class.java))

        val user = repo.getUser("1")
        assertEquals("Alice", user.name)
    }
}
```
- Fake API: implement the interface — simplest approach, great for unit tests
- MockWebServer: local HTTP server serving JSON — tests the full Retrofit + OkHttp + parsing pipeline
- Fake tests mapper: FakeApi returns a DTO; repo.getUser() applies the mapper — tests both at once
- MockWebServer tests format: verifies the actual JSON parsing works with your model classes
- No real network: both run offline, in milliseconds — no flaky tests due to internet
"I use Fake API for unit tests — it's fast and tests the repository + mapper logic. I use MockWebServer for integration tests when I want to verify the JSON parsing specifically — feeding real JSON from a file and asserting the parsed model is correct. MockWebServer also lets me test error handling by returning 4xx/5xx responses."
An API gateway sits between clients and microservices — it's a single entry point that handles auth, rate limiting, routing, and aggregation. From Android's perspective, you talk to one URL instead of dozens of microservice URLs.
```kotlin
// Without API gateway — Android talks to many services
//   https://users.api.com/users/123
//   https://orders.api.com/orders
//   https://products.api.com/products
// Each needs its own Retrofit instance, auth, error handling

// With API gateway — single entry point
//   https://api.myapp.com/users/123 → gateway → users service
//   https://api.myapp.com/orders   → gateway → orders service
//   https://api.myapp.com/products → gateway → products service

// Android side — ONE Retrofit instance for everything
val retrofit = Retrofit.Builder()
    .baseUrl("https://api.myapp.com/") // single gateway URL
    .client(okHttpClient)              // one auth interceptor for all services
    .build()

// Benefits for Android:
// ✅ One base URL, one auth token, one OkHttp client
// ✅ Gateway handles rate limiting — app doesn't need to
// ✅ Backend can change microservice topology without updating apps
// ✅ CORS handled centrally

// BFF (Backend for Frontend) — a gateway designed specifically for mobile
// Instead of aggregating on Android:
//   screen needs: user + 3 recent orders + 5 products
//   Old way: 3 separate API calls from Android
//   BFF way: 1 call to /home-screen → gateway fetches all 3 and returns one response
@Serializable
data class HomeScreenData(
    val user: UserDto,
    val recentOrders: List<OrderDto>,
    val featuredProducts: List<ProductDto>
) // 1 network call instead of 3 — faster screen load
```
- API gateway: single entry point for all backend services — Android sends to one URL
- Simplifies Android: one Retrofit instance, one auth interceptor for all services
- BFF (Backend for Frontend): gateway aggregates multiple service calls into one mobile-optimised response
- Abstraction: backend can re-architect services without changing the Android app
- Trade-off: gateway is a single point of failure — but usually replicated for high availability
"From Android's perspective, API gateway means I have one base URL, one OkHttp client, one auth setup. Without it I'd need separate Retrofit instances for each microservice with separate auth. The BFF pattern is even better — instead of 3 calls to populate a screen, I make 1 call to a purpose-built endpoint."
Paging 3 is Jetpack's library for loading data in pages. You write a PagingSource that fetches one page at a time from Retrofit — Paging 3 handles loading states, error handling, and smooth scrolling automatically.
```kotlin
// API endpoint
interface ProductApi {
    @GET("products")
    suspend fun getProducts(
        @Query("page") page: Int,
        @Query("per_page") perPage: Int = 20
    ): PagedResponse<ProductDto>
}

@Serializable
data class PagedResponse<T>(
    val data: List<T>,
    val totalPages: Int,
    val currentPage: Int
)

// PagingSource — fetches one page at a time
class ProductPagingSource(
    private val api: ProductApi
) : PagingSource<Int, Product>() {

    override suspend fun load(params: LoadParams<Int>): LoadResult<Int, Product> {
        val page = params.key ?: 1 // start from page 1
        return try {
            val response = api.getProducts(page = page, perPage = params.loadSize)
            val products = response.data.map { it.toDomain() }
            LoadResult.Page(
                data = products,
                prevKey = if (page == 1) null else page - 1,
                nextKey = if (page >= response.totalPages) null else page + 1
            )
        } catch (e: IOException) {
            LoadResult.Error(e)
        } catch (e: HttpException) {
            LoadResult.Error(e)
        }
    }

    // Resume near where the user was after invalidation
    override fun getRefreshKey(state: PagingState<Int, Product>): Int? =
        state.anchorPosition?.let { anchor ->
            state.closestPageToPosition(anchor)?.prevKey?.plus(1)
                ?: state.closestPageToPosition(anchor)?.nextKey?.minus(1)
        }
}

// Repository — creates the Pager
fun getProducts(): Flow<PagingData<Product>> = Pager(
    config = PagingConfig(pageSize = 20, prefetchDistance = 5)
) { ProductPagingSource(api) }.flow

// ViewModel
val products = repo.getProducts().cachedIn(viewModelScope)

// Compose UI — LazyColumn handles paging automatically
val products = vm.products.collectAsLazyPagingItems()
LazyColumn {
    items(products.itemCount) { index ->
        products[index]?.let { ProductCard(it) }
    }
}
```
- PagingSource: the data source — fetches one page, returns prevKey/nextKey for navigation
- nextKey=null: signals the last page — Paging 3 stops loading automatically
- cachedIn(viewModelScope): caches loaded pages in the ViewModel — survives configuration changes, no refetch on rotation
- LoadResult.Error: Paging 3 shows an error state and allows retry — built in
- prefetchDistance: starts loading the next page before the user reaches the end — smooth scrolling
"Paging 3 handles the hard parts: tracking which page to load next, showing loading spinners at the bottom, handling errors with retry buttons, and caching so rotation doesn't re-fetch everything. My PagingSource only needs to know how to fetch one page — Paging 3 orchestrates the rest."
Timeouts prevent your app from waiting forever for a server that isn't responding. Three different timeouts control different phases of the HTTP connection — choosing the right values balances user experience against reliability.
```kotlin
// Three distinct timeout types:
val client = OkHttpClient.Builder()
    // CONNECT TIMEOUT — how long to wait to establish the TCP connection
    // "Is the server reachable at all?"
    // Short is fine — if you can't connect in 15s, you can't connect
    .connectTimeout(15, TimeUnit.SECONDS)
    // READ TIMEOUT — how long to wait for data after connecting
    // "The server is connected but is it sending data?"
    // Longer for slow servers or large responses
    .readTimeout(30, TimeUnit.SECONDS)
    // WRITE TIMEOUT — how long to wait while sending data
    // "How long to wait while uploading our request body?"
    // Longer for file uploads, shorter for simple JSON POSTs
    .writeTimeout(30, TimeUnit.SECONDS)
    // CALL TIMEOUT — overall maximum for the entire request (OkHttp 4+)
    // Hard cutoff regardless of any other timeout
    .callTimeout(60, TimeUnit.SECONDS)
    .build()

// Override per-request (for slow endpoints like report generation)
interface ReportApi {
    @POST("reports/generate")
    @Headers("Timeout: 120") // custom header your interceptor reads
    suspend fun generateReport(@Body params: ReportParams): Report
}

// Practical guidelines for mobile:
//   Regular API calls: connect 15s, read 30s, write 30s
//   File uploads: write 120s (large payloads take time)
//   Long polling: read 60s+ (waiting for server-sent events)
//   Health checks: connect 5s, read 5s (fail fast)

// On timeout → SocketTimeoutException (subclass of IOException)
// Your safeApiCall wrapper catches IOException → show "Request timed out"
```
- Connect timeout: establishes TCP connection — 10-15s is typical on mobile
- Read timeout: waiting for bytes from server — 30s covers most API responses
- Write timeout: uploading the request body — increase for file uploads
- Call timeout: overall hard limit — prevents any single request from hanging forever
- SocketTimeoutException: what timeout throws — caught as IOException in your error handling
"Mobile users on 3G can have high latency — connect timeout of 15s is better than 5s. But read timeout of 30s means if the server hangs after connecting, the user waits 30 seconds before seeing an error. For user experience: show a progress indicator and let them cancel if needed rather than silently waiting."
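The custom `Timeout: 120` header mentioned above only works if an interceptor acts on it. A minimal sketch, assuming the header value is in seconds; OkHttp's `Interceptor.Chain.withReadTimeout` applies a per-call override without touching the shared client:

```kotlin
import okhttp3.Interceptor
import okhttp3.Response
import java.util.concurrent.TimeUnit

// Parse the custom header value (seconds) — null if absent or invalid
fun parseTimeoutSeconds(value: String?): Int? =
    value?.trim()?.toIntOrNull()?.takeIf { it > 0 }

class PerRequestTimeoutInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val request = chain.request()
        val seconds = parseTimeoutSeconds(request.header("Timeout"))
            ?: return chain.proceed(request) // no override — default timeouts apply

        // Strip our internal header so it never reaches the server,
        // then widen the read timeout for this call only
        val stripped = request.newBuilder().removeHeader("Timeout").build()
        return chain
            .withReadTimeout(seconds, TimeUnit.SECONDS)
            .proceed(stripped)
    }
}
```

The header name `Timeout` is an app-internal convention from the snippet above, not a standard HTTP header, which is why it gets removed before the request leaves the device.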
Retrofit has built-in coroutine support — just add suspend to your interface functions. No callbacks, no threading boilerplate, no callback hell. Your network code reads like sequential code but runs asynchronously.
```kotlin
// OLD WAY: Callbacks — nested, hard to read, error-prone
fun loadUserOldWay(id: String) {
    api.getUser(id).enqueue(object : Callback<User> {
        override fun onResponse(call: Call<User>, response: Response<User>) {
            if (response.isSuccessful) {
                val user = response.body()
                // now get the user's orders...
                api.getOrders(user!!.id).enqueue(object : Callback<List<Order>> {
                    // callback hell 😱
                })
            }
        }
        override fun onFailure(call: Call<User>, t: Throwable) { /* ... */ }
    })
}

// ✅ NEW WAY: Coroutines — reads like sequential code
interface UserApi {
    @GET("users/{id}")
    suspend fun getUser(@Path("id") id: String): User // just add suspend

    @GET("users/{id}/orders")
    suspend fun getOrders(@Path("id") id: String): List<Order>
}

suspend fun loadUserAndOrders(userId: String): UserWithOrders {
    val user = api.getUser(userId)     // sequential — natural reading order
    val orders = api.getOrders(userId) // runs after user is fetched
    return UserWithOrders(user, orders)
}

// Parallel calls — async/await
suspend fun loadDashboard(userId: String): Dashboard = coroutineScope {
    val userDeferred = async { api.getUser(userId) }
    val ordersDeferred = async { api.getOrders(userId) }
    val newsDeferred = async { api.getNews() }
    // All 3 run in parallel — total time = slowest, not sum
    Dashboard(userDeferred.await(), ordersDeferred.await(), newsDeferred.await())
}

// Retrofit runs on IO thread by default — no withContext(Dispatchers.IO) needed for Retrofit calls
// Room queries do need withContext(Dispatchers.IO) if not using @Query's built-in async
```
- suspend + Retrofit: just add suspend keyword — Retrofit handles threading automatically
- No callbacks: code reads top-to-bottom — easier to understand and maintain
- Error handling: try-catch instead of onFailure callbacks — same familiar syntax
- Parallel with async: launch multiple requests simultaneously, await all — total time = slowest
- No withContext needed: Retrofit's suspend calls already run on OkHttp's background threads
"Coroutines eliminated callback hell in networking. Two requests that depend on each other: sequential suspend calls, reads like synchronous code. Two independent requests: async { } both, await() both — they run in parallel. Try-catch handles errors. This is the biggest quality-of-life improvement in modern Android networking."
HTTP is request-response — the client asks, the server answers. WebSocket is a persistent bidirectional channel — server can push data anytime. Perfect for chat, live scores, stock prices, and notifications.
```kotlin
// WebSocket with OkHttp — direct support, no extra library needed
class ChatWebSocketImpl @Inject constructor(
    private val client: OkHttpClient
) : ChatWebSocket {

    private val _events = MutableSharedFlow<ChatEvent>(extraBufferCapacity = 64)
    override val events: SharedFlow<ChatEvent> = _events

    private var webSocket: WebSocket? = null

    override fun connect(url: String, token: String) {
        val request = Request.Builder()
            .url(url)
            .header("Authorization", "Bearer $token")
            .build()

        webSocket = client.newWebSocket(request, object : WebSocketListener() {
            override fun onOpen(ws: WebSocket, response: Response) {
                _events.tryEmit(ChatEvent.Connected)
            }
            override fun onMessage(ws: WebSocket, text: String) {
                val msg = Json.decodeFromString<ChatMessage>(text)
                _events.tryEmit(ChatEvent.Message(msg))
            }
            override fun onClosed(ws: WebSocket, code: Int, reason: String) {
                _events.tryEmit(ChatEvent.Disconnected)
            }
            override fun onFailure(ws: WebSocket, t: Throwable, r: Response?) {
                _events.tryEmit(ChatEvent.Error(t))
            }
        })
    }

    override fun send(message: String) { webSocket?.send(message) }

    override fun disconnect() {
        webSocket?.close(1000, "Goodbye")
        webSocket = null
    }
}

// Alternatives to raw WebSocket:
//   Firebase Realtime Database — managed WebSocket, offline support, free tier
//   Socket.IO — higher-level, auto-reconnect, rooms, namespaces
//   Ktor client — Kotlin-native WebSocket with coroutine Flow integration
```
- WebSocket: persistent bidirectional connection — server can push without client asking
- OkHttp WebSocket: direct support — newWebSocket() opens the connection
- SharedFlow bridge: converts OkHttp callbacks → Kotlin Flow — ViewModel observes events
- onFailure: network drop — implement reconnect logic with exponential backoff
- Alternatives: Firebase Realtime DB (managed), Socket.IO (higher-level), Ktor (Kotlin-native)
"The hardest part of WebSocket isn't connecting — it's reconnection. The network drops silently, onFailure fires, and you need to reconnect with backoff without losing messages. I emit events to a SharedFlow so the ViewModel observes them reactively, and the ViewModel triggers reconnect on ChatEvent.Error."
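That reconnect-with-backoff logic can be sketched as a pure delay schedule plus a retry loop. This is an illustrative sketch, not the snippet's actual implementation: `connect` stands in for whatever call re-opens the socket, and the 1s-doubling-to-30s schedule is an assumed policy.

```kotlin
import kotlinx.coroutines.*
import kotlin.math.min

// Exponential backoff: 1s, 2s, 4s, ... capped at 30s
fun backoffDelayMs(attempt: Int): Long =
    min(30_000L, 1_000L shl attempt.coerceIn(0, 20))

// Reconnect loop a ViewModel (or the socket wrapper itself) could run
// after observing an error event. `connect` is a placeholder for the
// real socket call and returns true on success.
suspend fun reconnectWithBackoff(
    connect: suspend () -> Boolean,
    maxAttempts: Int = 8
): Boolean {
    repeat(maxAttempts) { attempt ->
        if (connect()) return true     // connected — resume normal operation
        delay(backoffDelayMs(attempt)) // wait longer after each failure
    }
    return false                       // give up — surface a persistent error to the UI
}
```

In production you would also add jitter to the delays so thousands of clients dropped by the same outage don't all reconnect in lockstep.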
A production networking layer requires intentional decisions at every level — security, performance, error handling, testability, and maintainability all have to work together. Here's what a senior engineer sets up.
```kotlin
// LAYER 1: OkHttp — the engine
@Provides @Singleton
fun provideOkHttp(
    authInterceptor: AuthInterceptor,                // adds Bearer token
    tokenRefreshInterceptor: TokenRefreshInterceptor // handles 401
): OkHttpClient = OkHttpClient.Builder()
    .connectTimeout(15, TimeUnit.SECONDS)
    .readTimeout(30, TimeUnit.SECONDS)
    .addInterceptor(authInterceptor)
    .addInterceptor(tokenRefreshInterceptor)
    .apply {
        if (BuildConfig.DEBUG) addInterceptor(
            HttpLoggingInterceptor().apply { level = Level.BODY }
        )
    }
    .certificatePinner(buildCertPinner()) // SSL pinning in production
    .build()

// LAYER 2: Retrofit — type-safe interface
@Provides @Singleton
fun provideRetrofit(client: OkHttpClient): Retrofit = Retrofit.Builder()
    .baseUrl(BuildConfig.API_BASE_URL)
    .client(client)
    .addConverterFactory(
        Json { ignoreUnknownKeys = true }
            .asConverterFactory("application/json".toMediaType())
    )
    .build()

// LAYER 3: Repository — error handling, DTO mapping, caching
class ProductRepositoryImpl @Inject constructor(
    private val api: ProductApi,
    private val dao: ProductDao
) : ProductRepository {

    override fun getProducts(): Flow<List<Product>> =
        dao.observeAll().map { it.map { e -> e.toDomain() } }

    override suspend fun refresh(): Result<Unit> = runCatching {
        val fresh = safeApiCall { api.getProducts() }
        dao.insertAll((fresh as ApiResult.Success).data.map { it.toEntity() })
    }
}

// LAYER 4: ViewModel — state management
@HiltViewModel
class ProductViewModel @Inject constructor(
    private val repo: ProductRepository
) : ViewModel() {
    val products = repo.getProducts()
        .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), emptyList())

    fun refresh() = viewModelScope.launch { repo.refresh() }
}
```
- OkHttp: auth + token refresh interceptors + debug logging + SSL pinning
- Retrofit: Kotlin Serialization with ignoreUnknownKeys, base URL from BuildConfig
- Repository: safeApiCall error wrapping, DTO→domain mapping, offline-first with Room
- ViewModel: stateIn with WhileSubscribed(5000) — auto-cancels when no observers
- Testable at every layer: FakeApi for repository, MockWebServer for integration tests
"The decisions I highlight: ignoreUnknownKeys so old app versions survive API updates, SSL pinning with two hashes for rotation safety, Mutex in token refresh to prevent simultaneous refresh calls, and offline-first with Room as the source of truth so users always see something. Each decision has a production war story behind it."
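The Mutex-guarded token refresh mentioned in that answer can be sketched in isolation. A hedged illustration, not the author's actual interceptor: `refresh` is a placeholder for the real auth endpoint, and the "compare against the expired token" check is what collapses N concurrent 401s into one refresh call.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Guards token refresh so N concurrent 401s trigger ONE refresh call.
// `refresh` is a placeholder for your real auth endpoint.
class TokenRefresher(private val refresh: suspend () -> String) {
    private val mutex = Mutex()
    @Volatile private var token: String? = null

    suspend fun validToken(expiredToken: String?): String = mutex.withLock {
        val current = token
        if (current != null && current != expiredToken) {
            current // another caller already refreshed while we waited on the lock
        } else {
            refresh().also { token = it } // only the first caller actually refreshes
        }
    }
}
```

Callers pass the token that just got a 401; if the stored token already differs, someone else refreshed first and the new token is returned without a second network call.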
HTTP/2 multiplexes multiple requests over a single TCP connection, eliminating the head-of-line blocking that limits HTTP/1.1. It also compresses headers (HPACK) and supports server push. OkHttp uses HTTP/2 automatically when the server supports it -- no configuration needed.
```kotlin
// OkHttp uses HTTP/2 automatically -- verify with logging interceptor
val logging = HttpLoggingInterceptor().apply { level = Level.BASIC }
// Log output shows: "Protocol: h2" when HTTP/2 is negotiated
val client = OkHttpClient.Builder()
    .addInterceptor(logging)
    .build()

// HTTP/2 connection pool -- one connection handles all parallel requests
// HTTP/1.1 needed multiple connections (default max 6 per host)
// HTTP/2 streams: up to ~128 parallel requests on one TCP connection
```
- Multiplexing: HTTP/2 sends multiple requests simultaneously over one TCP connection -- no waiting for responses before sending the next request
- Head-of-line blocking eliminated: in HTTP/1.1 a slow request blocks the queue -- HTTP/2 streams are independent
- Header compression (HPACK): repeated headers (Authorization, Accept, User-Agent) are compressed -- significant savings on mobile where headers can be larger than body
- OkHttp: negotiates HTTP/2 automatically via ALPN during the TLS handshake -- zero configuration needed
- Server push: server can proactively send resources (not yet used by mobile apps widely -- focus on multiplexing benefit instead)
"HTTP/2 multiplexing is why making many small parallel API calls is now feasible on mobile. In HTTP/1.1, browsers were limited to about 6 connections per domain. With HTTP/2, all requests share one multiplexed connection with far higher stream concurrency. OkHttp handles this transparently — my code doesn't change at all."
Opening a TCP connection is expensive — DNS lookup, TCP handshake, TLS handshake can take 100-300ms. Connection pooling keeps connections open and reuses them for future requests. OkHttp does this automatically.
```kotlin
// WITHOUT pooling — every request pays the full cost:
//   DNS lookup: ~20ms
//   TCP handshake: ~50ms
//   TLS handshake: ~100ms
//   Total overhead: ~170ms before even sending your request!
//   10 requests = 1700ms wasted on connection setup alone

// WITH pooling — connection reused:
//   First request: pays the 170ms overhead
//   All subsequent requests: 0ms overhead (connection already open)
//   10 requests = 170ms total overhead

// OkHttp default pool: 5 connections, 5 min keep-alive
// Customise if needed (rarely required):
val connectionPool = ConnectionPool(
    maxIdleConnections = 10, // max idle connections to keep
    keepAliveDuration = 5,   // how long to keep them
    timeUnit = TimeUnit.MINUTES
)

val client = OkHttpClient.Builder()
    .connectionPool(connectionPool)
    .build()

// Critical: share ONE OkHttpClient across your entire app

// ❌ BAD — new client = new pool, no reuse
fun makeRequest() {
    val client = OkHttpClient() // creates new pool every call!
}

// ✅ GOOD — @Singleton in Hilt
@Provides @Singleton
fun provideOkHttp(): OkHttpClient = OkHttpClient.Builder().build()
```
- TCP connection setup: DNS + TCP + TLS = 100-300ms overhead per new connection
- Pooling: keeps connections alive after request completes — next request reuses immediately
- OkHttp default: 5 idle connections, 5 minute keep-alive — sufficient for most apps
- One client everywhere: @Singleton ensures one shared pool — never create OkHttpClient per request
- HTTP/2 + pooling: even better — all requests share one connection, never idle
"The most common networking performance mistake I see in code reviews: creating a new OkHttpClient per request or per Repository. This defeats connection pooling entirely. The client must be @Singleton — that one instance maintains the pool that all requests share."
Always batch analytics — sending one event per HTTP request wastes battery (each wakes the radio), adds latency, and can overload your server. Collect events locally, flush in batches.
```kotlin
// ❌ BAD: One HTTP request per event
fun trackEvent(event: String) {
    api.sendEvent(event) // HTTP call per event = battery drain + server load
}

// ✅ GOOD: Collect locally, flush in batches
class AnalyticsBatcher @Inject constructor(
    private val api: AnalyticsApi,
    @ApplicationScope private val scope: CoroutineScope
) {
    private val queue = Channel<AnalyticsEvent>(Channel.UNLIMITED)

    init {
        scope.launch {
            queue.receiveAsFlow()
                .buffer(100) // collect up to 100 events
                .chunked(20) // batch into groups of 20 (custom operator, or the experimental kotlinx.coroutines one)
                .collect { batch ->
                    try { api.sendBatch(batch) }
                    catch (e: Exception) { /* persist to DB for retry */ }
                }
        }
    }

    fun track(event: AnalyticsEvent) { queue.trySend(event) }
}

// Better: time-based flushing with WorkManager
@HiltWorker
class AnalyticsFlushWorker @AssistedInject constructor(...) : CoroutineWorker(...) {
    override suspend fun doWork(): Result {
        val pending = dao.getPendingEvents()
        if (pending.isEmpty()) return Result.success()
        api.sendBatch(pending) // send all at once
        dao.markSent(pending)  // mark as delivered
        return Result.success()
    }
}

// Schedule: every 30 minutes OR when batch reaches 50 events
// Constraints: requiresNetwork(true) — only sends when connected
```
- Never send one event per request: each HTTP call wakes the cellular radio (10-20 second tail time)
- In-memory queue: collect events locally with Channel, flush when batch size or timer fires
- WorkManager flush: periodic background worker with requiresNetwork constraint — reliable delivery
- Persist to Room: if app is killed before flush, events survive and retry on next launch
- Batch size: 20-50 events per request is typical — balance latency vs data freshness
"Every network request on mobile wakes the cellular radio — and it stays awake for ~20 seconds (the 'tail time'). Sending 100 events individually = 100 radio wakeups. Sending them in 5 batches of 20 = 5 wakeups. This is why Firebase Analytics batches events and flushes every 30 minutes or on app background."
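The "every 30 minutes OR when the batch reaches 50 events" trigger reduces to a small pure function. A sketch with assumed thresholds — the defaults below are illustrative, not from any specific SDK:

```kotlin
// Flush when the batch is big enough OR pending events have waited too long.
// batchLimit and maxIntervalMinutes are illustrative defaults.
fun shouldFlush(
    pendingCount: Int,
    minutesSinceLastFlush: Long,
    batchLimit: Int = 50,
    maxIntervalMinutes: Long = 30
): Boolean =
    pendingCount >= batchLimit ||
        (pendingCount > 0 && minutesSinceLastFlush >= maxIntervalMinutes)
```

Keeping the policy pure makes it trivially unit-testable, separate from the WorkManager plumbing that invokes it.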
HTTPS is HTTP over TLS (Transport Layer Security). It encrypts all data in transit, authenticates the server's identity via a certificate, and verifies data integrity. Android has required HTTPS for all network traffic by default since Android 9 (API 28) -- cleartext HTTP requires explicit opt-in via Network Security Config.
```kotlin
// cleartext blocked by default on API 28+ -- opt-in only for debug
// res/xml/network_security_config.xml
// <network-security-config>
//   <debug-overrides><trust-anchors>
//     <certificates src="user"/> <!-- allow user-installed certs in debug -->
//   </trust-anchors></debug-overrides>
// </network-security-config>

// TLS handshake steps (simplified)
// 1. Client → Server: ClientHello (supported cipher suites)
// 2. Server → Client: ServerHello + Certificate
// 3. Client: verify certificate chain → extract public key
// 4. Client + Server: derive symmetric session key
// 5. All subsequent traffic encrypted with session key

val client = OkHttpClient.Builder()
    .sslSocketFactory(sslContext.socketFactory, trustManager)
    .build()
```
- TLS provides: encryption (data unreadable in transit), authentication (server is who it claims to be), integrity (data not tampered)
- Android 9+: cleartext HTTP blocked by default -- must explicitly enable via android:usesCleartextTraffic or Network Security Config
- Certificate chain: server sends its cert + intermediate CA certs -- Android validates the chain up to a trusted root CA in the system store
- Network Security Config: per-domain rules for trust anchors, cleartext, and certificate pinning -- the right place for debug overrides
- TLS 1.3: supported from Android 10+ -- faster handshake (1-RTT vs 2-RTT), stronger cipher suites, forward secrecy by default
"Every Android app should use HTTPS exclusively. The common mistake during development: allowing cleartext in the main manifest instead of only in the debug-flavored network_security_config. Put cleartext allowances only in src/debug/res/xml/ — they never reach production builds."
Large file downloads need real-time progress and the ability to cancel mid-download. OkHttp's ResponseBody combined with a Flow is the cleanest approach in modern Android.
```kotlin
// Download with progress using Flow
fun downloadFile(url: String, dest: File): Flow<DownloadState> = flow {
    val request = Request.Builder().url(url).build()
    val response = client.newCall(request).execute()

    if (!response.isSuccessful) {
        emit(DownloadState.Error("HTTP ${response.code}"))
        return@flow
    }
    val body = response.body ?: run {
        emit(DownloadState.Error("Empty body"))
        return@flow
    }

    val totalBytes = body.contentLength() // -1 if unknown
    var bytesRead = 0L

    body.byteStream().use { inputStream ->
        dest.outputStream().use { outputStream ->
            val buffer = ByteArray(8192)
            var bytes: Int
            while (inputStream.read(buffer).also { bytes = it } != -1) {
                currentCoroutineContext().ensureActive() // cancels the flow if coroutine is cancelled
                outputStream.write(buffer, 0, bytes)
                bytesRead += bytes
                val progress = if (totalBytes > 0) (bytesRead * 100 / totalBytes).toInt() else -1
                emit(DownloadState.Progress(progress, bytesRead))
            }
        }
    }
    emit(DownloadState.Success(dest))
}.flowOn(Dispatchers.IO)

sealed class DownloadState {
    data class Progress(val percent: Int, val bytesRead: Long) : DownloadState()
    data class Success(val file: File) : DownloadState()
    data class Error(val msg: String) : DownloadState()
}

// In ViewModel — cancel by cancelling the job
private var downloadJob: Job? = null

fun startDownload(url: String) {
    downloadJob = viewModelScope.launch {
        repo.downloadFile(url, destFile).collect { state ->
            _uiState.update { it.copy(downloadState = state) }
        }
    }
}

fun cancelDownload() { downloadJob?.cancel() }
```
- Flow<DownloadState>: emits progress events while streaming — ViewModel collects and shows progress bar
- ensureActive(): checks if coroutine was cancelled at each loop — stops download immediately on cancel
- 8KB buffer: read in chunks, not all at once — avoids loading the whole file into memory
- contentLength() == -1: some servers don't send Content-Length — handle indeterminate progress
- flowOn(Dispatchers.IO): all IO work off main thread, Flow collection on main thread
"ensureActive() is the key to cancellable downloads. Without it, the while loop keeps reading even after the user presses cancel. With ensureActive() on every iteration, the download stops within 8KB of the cancel call. For production apps with large files, also consider using DownloadManager or WorkManager for background downloads that survive app closure."
These three mechanisms put data in different parts of an HTTP request. The right choice affects API design, security, and cacheability.
```kotlin
// PATH PARAMETER — part of the resource URL itself
// Use when: identifying a specific resource
// GET /users/123/orders/456
@GET("users/{userId}/orders/{orderId}")
suspend fun getOrder(
    @Path("userId") userId: String,
    @Path("orderId") orderId: String
): Order
// ✅ Cacheable by CDNs and proxies
// ✅ RESTful — resource identity in URL
// ❌ Don't put sensitive data here (appears in server logs)

// QUERY PARAMETER — filters and options after "?"
// Use when: filtering, sorting, pagination
// GET /products?category=shoes&sort=price&page=2
@GET("products")
suspend fun getProducts(
    @Query("category") category: String? = null, // null = omitted
    @Query("sort") sort: String = "date",
    @Query("page") page: Int = 1
): List<Product>
// ✅ Optional — null values are omitted from URL
// ✅ Cacheable with the right Cache-Control
// ❌ Don't put auth tokens here — they end up in server logs and browser history

// REQUEST HEADER — metadata about the request
// Use when: auth tokens, content type, API version, client info
@GET("products")
suspend fun getProductsV2(
    @Header("X-Api-Version") version: String = "2.0"
): List<Product>
// ✅ Not stored in server logs (usually)
// ✅ Best for auth tokens — Authorization: Bearer ...
// ❌ Not cacheable by intermediaries (they ignore most custom headers)

// Rule of thumb:
//   "What are you accessing?" → @Path
//   "How do you want it?"     → @Query
//   "Who are you?"            → @Header
```
- @Path: identifies the resource — /users/123 means "user 123" specifically
- @Query: filters/options — /products?sort=price means "all products, sorted by price"
- @Header: request metadata — auth, versioning, device info — not visible in server logs like URL params
- Security: never put auth tokens in @Query or @Path — they appear in URLs which end up in logs
- Null @Query: Retrofit omits null query params — great for optional filters
"The most common security mistake: putting an API key or auth token as a query parameter — ?api_key=secret. This gets logged in every server access log, CDN log, and browser history forever. Auth tokens always go in headers: Authorization: Bearer token."
SSE is a one-way push from server to client over HTTP. Simpler than WebSocket (no bidirectional protocol), more efficient than polling (no repeated requests). Perfect for live feeds, notifications, and status updates.
```kotlin
// Three approaches for server-to-client real-time data:

// POLLING — client asks repeatedly (worst)
//   GET /updates every 5 seconds
//   ❌ 90% of requests return "nothing new"
//   ❌ Battery drain, server load, latency (up to 5 seconds)

// SSE — server pushes over persistent HTTP connection (sweet spot)
//   Client opens ONE connection; server streams text/event-stream
//   Format: "data: {json}\n\n"
//   ✅ Works over HTTP (proxies, CDNs work normally)
//   ✅ Auto-reconnect built into the browser EventSource (add your own with OkHttp)
//   ✅ One-way push — simpler than WebSocket
//   ❌ Client can't send messages (use REST for that)

// SSE with OkHttp — using EventSource
// implementation("com.squareup.okhttp3:okhttp-sse:4.12.0")
val request = Request.Builder()
    .url("https://api.example.com/events")
    .header("Authorization", "Bearer $token")
    .build()

val eventSource = EventSources.createFactory(client)
    .newEventSource(request, object : EventSourceListener() {
        override fun onEvent(es: EventSource, id: String?, type: String?, data: String) {
            val event = Json.decodeFromString<LiveUpdate>(data)
            _events.tryEmit(event)
        }
        override fun onFailure(es: EventSource, t: Throwable?, r: Response?) {
            // okhttp-sse does not auto-reconnect — trigger reconnect with backoff here
        }
    })

// WEBSOCKET — bidirectional (when you need both directions)
//   Use for: chat, collaborative editing, live gaming

// When to use what:
//   Polling: simple, data changes rarely, small team, few users
//   SSE: live feeds, order tracking, sports scores, notifications
//   WebSocket: chat, collaborative apps, real-time multiplayer
```
- SSE: persistent HTTP connection, server pushes text events — simpler than WebSocket
- One-way: server → client only — use REST for client → server actions alongside SSE
- HTTP-compatible: works through proxies and CDNs that break raw WebSocket connections
- Auto-reconnect: the browser EventSource reconnects automatically; with okhttp-sse you handle reconnection yourself in onFailure
- Use case sweet spot: order tracking, live scores, notifications — push without two-way communication
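Since okhttp-sse reports disconnects through onFailure rather than reconnecting for you, the usual pattern is a capped exponential backoff before re-opening the stream. A sketch — the delay schedule is real code, the wiring is illustrative:

```kotlin
// Capped exponential backoff: 1s, 2s, 4s, 8s, 16s, then 30s max
fun backoffMillis(attempt: Int): Long =
    (1000L shl attempt.coerceIn(0, 5)).coerceAtMost(30_000L)

// Illustrative reconnect wiring (handler/connect names are hypothetical):
// var attempt = 0
// fun connect() {
//     EventSources.createFactory(client).newEventSource(request, object : EventSourceListener() {
//         override fun onOpen(es: EventSource, response: Response) { attempt = 0 }
//         override fun onFailure(es: EventSource, t: Throwable?, r: Response?) {
//             handler.postDelayed({ connect() }, backoffMillis(attempt++))
//         }
//     })
// }
```

Resetting the attempt counter in onOpen keeps a long-lived stream from paying yesterday's penalty after a single blip.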
"SSE is underused on Android. For order tracking (server pushes status updates), SSE is simpler than WebSocket — you don't need bidirectional communication. It runs over plain HTTP so it works in corporate networks that block WebSocket. The OkHttp SSE library makes it a 10-line implementation."
API versioning ensures older app versions keep working when the backend evolves. Android apps can't be force-updated — users on old versions will keep making requests forever, so backward compatibility is critical.
// Strategy 1: Version in URL (most common)
val retrofit = Retrofit.Builder()
    .baseUrl("https://api.example.com/v2/") // version in base URL
    .build()

// Strategy 2: Version in header (cleaner URLs)
@GET("products")
suspend fun getProducts(
    @Header("X-Api-Version") version: String = "2"
): List<Product>

// Better: add version header globally via interceptor
class ApiVersionInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response =
        chain.proceed(
            chain.request().newBuilder()
                .header("X-Api-Version", "2")
                .header("X-App-Version", BuildConfig.VERSION_NAME) // useful for server analytics
                .build()
        )
}

// Protect against breaking changes with ignoreUnknownKeys:
val json = Json { ignoreUnknownKeys = true }
// If server adds new fields → old app versions don't crash

// Minimum version enforcement:
// Server returns 426 Upgrade Required when app is too old
class MinVersionInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val response = chain.proceed(chain.request())
        if (response.code == 426) {
            // Emit a global event to show "Please update the app" dialog
        }
        return response
    }
}
- Android apps can't be force-updated — builds installed from the Play Store 2 years ago still make requests today
- Version in URL: /v2/products — simple, visible, easy to test in browser
- Version in header: cleaner URLs, interceptor adds it globally — no per-endpoint annotation
- ignoreUnknownKeys: the most important protection — new API fields don't break old app versions
- 426 Upgrade Required: server tells client the app is too old — show update prompt
"The key insight about mobile API versioning: you cannot deprecate and remove an API endpoint until every old app version is gone from the field — which may be never. API versioning on mobile is about indefinite backward compatibility, not a clean deprecation cycle like web APIs."
WorkManager is the right tool for deferred requests — it survives process death, respects network constraints, and retries automatically. Room is the queue.
// Pattern: Outbox — queue locally, sync when online

// Step 1: Persist the pending request to Room
@Entity(tableName = "pending_requests")
data class PendingRequest(
    @PrimaryKey(autoGenerate = true) val id: Int = 0,
    val type: String,    // "LIKE_POST", "FOLLOW_USER"
    val payload: String, // JSON serialised params
    val createdAt: Long = System.currentTimeMillis()
)

// Step 2: Instead of calling API directly, save to DB
suspend fun likePost(postId: String) {
    // Optimistic UI update immediately
    dao.updateLikeCount(postId, increment = 1)

    // Queue the API call
    pendingDao.insert(PendingRequest(type = "LIKE_POST", payload = """{"postId":"$postId"}"""))

    // Schedule sync worker
    WorkManager.getInstance(context).enqueue(
        OneTimeWorkRequestBuilder<SyncWorker>()
            .setConstraints(
                Constraints.Builder()
                    .setRequiredNetworkType(NetworkType.CONNECTED)
                    .build()
            )
            .setExpedited(OutOfQuotaPolicy.RUN_AS_NON_EXPEDITED_WORK_REQUEST)
            .build()
    )
}

// Step 3: Worker processes the queue when online
@HiltWorker
class SyncWorker @AssistedInject constructor(...) : CoroutineWorker(...) {
    override suspend fun doWork(): Result {
        val pending = pendingDao.getAll()
        for (req in pending) {
            when (req.type) {
                "LIKE_POST" -> {
                    val params = Json.decodeFromString<LikeParams>(req.payload)
                    api.likePost(params.postId)
                    pendingDao.delete(req) // remove after success
                }
            }
        }
        return Result.success()
    }
}
- Outbox pattern: write to local DB immediately, sync to API when online
- Optimistic UI: update local state instantly for a snappy feel, sync in background
- WorkManager constraints: setRequiredNetworkType(NetworkType.CONNECTED) — worker only runs when connected
- Room as queue: survives process death — requests never lost even if app is killed
- setExpedited: runs the worker as soon as possible when network becomes available
"This is the offline-first pattern for write operations. Instagram does this for likes — you tap the heart, the UI updates immediately, and the API call queues. If you lose signal, the like syncs silently when you reconnect. The user never sees a failure."
Gzip compresses the response body before sending — a 100KB JSON response might compress to 15KB. OkHttp adds the Accept-Encoding header and decompresses responses automatically. You get this for free.
// OkHttp adds this header automatically to every request:
// "Accept-Encoding: gzip"
// This tells the server: "I can handle gzip compressed responses"

// If server supports gzip: "Content-Encoding: gzip" in response
// OkHttp automatically decompresses → you get plain JSON in responseBody.string()
// Zero code needed — it just works

// Typical compression ratios for API responses:
// JSON:   60-80% reduction ({"name":"Alice","email":"[email protected]"...} = very compressible)
// HTML:   70-80% reduction
// Images: 0-5% (already compressed — JPEG, PNG, WebP)
// Binary: 0-20%

// Verify gzip is working (check response headers):
// NOTE: register with addNetworkInterceptor() — by the time application interceptors
// run, OkHttp has already decompressed and stripped Content-Encoding
class CompressionLoggingInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val response = chain.proceed(chain.request())
        val encoding = response.header("Content-Encoding")
        val originalSize = response.header("X-Uncompressed-Content-Length") // if server sends it
        Log.d("Compression", "Encoding: $encoding, Original: $originalSize")
        return response
    }
}

// Request compression — compressing the request body (less common)
// Useful when POSTing large JSON payloads
class GzipRequestInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val original = chain.request()
        if (original.body == null) return chain.proceed(original)
        val gzipped = original.newBuilder()
            .header("Content-Encoding", "gzip")
            .method(original.method, gzip(original.body!!))
            .build()
        return chain.proceed(gzipped)
    }

    // gzip() is not built into OkHttp — this wrapper follows the official OkHttp recipe
    private fun gzip(body: RequestBody): RequestBody = object : RequestBody() {
        override fun contentType() = body.contentType()
        override fun contentLength() = -1L // compressed length not known in advance
        override fun writeTo(sink: BufferedSink) {
            val gzipSink = GzipSink(sink).buffer()
            body.writeTo(gzipSink)
            gzipSink.close()
        }
    }
}
- Automatic: OkHttp adds Accept-Encoding: gzip and decompresses responses — zero code
- JSON compresses extremely well: 70-80% size reduction typical for API responses
- Images: already compressed — gzip adds no benefit for JPEG/PNG/WebP
- Request compression: rare but useful for large POST bodies — add Content-Encoding: gzip interceptor
- Mobile impact: smaller responses = faster load time + less data usage for user
"Gzip is the biggest free performance win in networking — OkHttp handles it with zero effort. A product listing API returning 200 products as JSON might be 80KB uncompressed. With gzip it's 15KB. On a 3G connection that's the difference between 200ms and 50ms. Always ensure your server sends Content-Encoding: gzip."
async/await with coroutineScope runs all three requests simultaneously — total time equals the slowest, not the sum. This is one of the clearest wins of coroutines over callbacks.
// Sequential — BAD: 300ms + 250ms + 200ms = 750ms total
suspend fun loadDashboardSequential(): Dashboard {
    val user = api.getUser()         // 300ms
    val orders = api.getOrders()     // 250ms
    val products = api.getProducts() // 200ms
    return Dashboard(user, orders, products)
}

// Parallel — GOOD: max(300, 250, 200) = 300ms total
suspend fun loadDashboardParallel(): Dashboard = coroutineScope {
    val userDeferred = async { api.getUser() }
    val ordersDeferred = async { api.getOrders() }
    val productsDeferred = async { api.getProducts() }
    // All three requests are in flight right now
    Dashboard(
        user = userDeferred.await(),
        orders = ordersDeferred.await(),
        products = productsDeferred.await()
    )
}
// If any one fails → coroutineScope cancels the others → exception propagates

// Parallel with independent error handling (one failure doesn't kill others)
suspend fun loadDashboardSafe(): Dashboard = coroutineScope {
    val userD = async { runCatching { api.getUser() } }
    val ordersD = async { runCatching { api.getOrders() } }
    val productsD = async { runCatching { api.getProducts() } }
    Dashboard(
        user = userD.await().getOrNull(),     // null on error
        orders = ordersD.await().getOrNull(),
        products = productsD.await().getOrNull()
    )
    // Partial dashboard shown even if some calls fail
}

// In ViewModel
fun loadDashboard() {
    viewModelScope.launch {
        _state.value = UiState.Loading
        runCatching { repo.loadDashboardParallel() }
            .onSuccess { _state.value = UiState.Success(it) }
            .onFailure { _state.value = UiState.Error(it.message ?: "Unknown error") }
    }
}
- async/await: launches coroutines in parallel — all requests in flight simultaneously
- coroutineScope: structured — if one async fails, all others are cancelled automatically
- Time benefit: 300ms instead of 750ms — real user-facing improvement
- runCatching per call: partial success — show what loaded even if one API fails
- SupervisorScope alternative: use supervisorScope if you want independent failure handling
"This is the most impactful coroutine pattern for UX. A dashboard with 3 serial API calls at 300ms each = 900ms load. With async/await they run in parallel = 300ms. 600ms faster, zero extra complexity. I use this pattern on every screen that needs data from multiple endpoints."
An idempotency key is a unique ID sent with non-idempotent requests (like payment) so the server can recognise and deduplicate retries. Without it, a network timeout could cause the user to be charged twice.
// The problem: payment request + network timeout
// 1. App sends POST /payments {amount: 1000}
// 2. Server processes payment → charges card ✅
// 3. Network drops → app never receives 200 OK
// 4. App retries → POST /payments {amount: 1000} again
// 5. Server processes again → charges card AGAIN ❌ double charge!

// Solution: idempotency key
suspend fun makePayment(amount: Int, orderId: String): PaymentResult {
    // Derive the key from the orderId so retries for the
    // SAME order always send the SAME key
    val idempotencyKey = "payment-$orderId"
    return api.charge(
        request = PaymentRequest(amount = amount),
        idempotencyKey = idempotencyKey // sent as header
    )
}

// API interface
@POST("payments")
suspend fun charge(
    @Body request: PaymentRequest,
    @Header("Idempotency-Key") idempotencyKey: String
): PaymentResult

// Server behaviour:
// First request with key "payment-order-123" → process + store key + return result
// Second request with SAME key "payment-order-123" → return SAME result, don't charge again
// Third request → same result again (usually cached 24 hours)

// UUID as idempotency key (for when there's no natural unique ID):
val idempotencyKey = UUID.randomUUID().toString()
// Generate ONCE, persist to SharedPreferences
// Reuse on retry, generate new one for a fresh attempt
- Problem: non-idempotent POST retry after timeout = duplicate side effect (double payment)
- Idempotency key: unique ID per logical operation — server uses it to deduplicate
- Use orderId-based key: same retry for same order always sends the same key
- Server caches result: same key → same response, no duplicate processing
- Required for: payments, order placement, email sending — any operation with real-world side effects
"Stripe, Razorpay, and PayPal all support idempotency keys. Without it, mobile payment is fundamentally unsafe — the user always has a brief window where a network drop could cause a double charge. The key rule: generate the idempotency key BEFORE making the request, persist it, and reuse on retry."
MockWebServer from OkHttp runs a real HTTP server locally during tests. You pre-program exactly what responses it returns — making network tests deterministic, fast, and offline-capable.
// testImplementation("com.squareup.okhttp3:mockwebserver:4.12.0")
class UserRepositoryTest {

    private val server = MockWebServer()
    private lateinit var repo: UserRepository

    @Before
    fun setUp() {
        server.start()
        val retrofit = Retrofit.Builder()
            .baseUrl(server.url("/")) // point to local server
            .addConverterFactory(GsonConverterFactory.create())
            .build()
        repo = UserRepositoryImpl(retrofit.create(UserApi::class.java))
    }

    @After
    fun tearDown() {
        server.shutdown()
    }

    // Test: successful response
    @Test
    fun getUser_success() = runTest {
        server.enqueue(MockResponse()
            .setResponseCode(200)
            .setBody("""{"id":"1","name":"Alice"}""")
            .setHeader("Content-Type", "application/json"))

        val user = repo.getUser("1")

        assertEquals("Alice", user.name)
    }

    // Test: 401 Unauthorized
    @Test
    fun getUser_401_throwsAuthException() = runTest {
        server.enqueue(MockResponse().setResponseCode(401))

        // assertFailsWith (kotlin.test) is inline, so it can wrap suspend calls
        assertFailsWith<AuthException> { repo.getUser("1") }
    }

    // Test: simulate slow response (timeout)
    @Test
    fun getUser_timeout_throwsIOException() = runTest {
        server.enqueue(MockResponse()
            .setBodyDelay(5, TimeUnit.SECONDS) // longer than readTimeout
            .setResponseCode(200)
            .setBody("""{}"""))

        assertFailsWith<SocketTimeoutException> { repo.getUser("1") }
    }

    // Test: verify request was sent correctly
    @Test
    fun getUser_sendsCorrectHeaders() = runTest {
        server.enqueue(MockResponse().setResponseCode(200).setBody("""{"id":"1","name":"Alice"}"""))

        repo.getUser("1")

        val request = server.takeRequest() // inspect what was sent
        assertEquals("/users/1", request.path)
        assertNotNull(request.getHeader("Authorization"))
    }
}
- MockWebServer: real HTTP server in tests — tests go through the full Retrofit + OkHttp stack
- enqueue(): pre-program responses in order — each request consumes the next queued response
- setBodyDelay(): simulate slow servers — test timeout handling without real sleeps
- takeRequest(): inspect what the app actually sent — verify headers, path, body
- Runs offline: no real network — tests fast, deterministic, CI-safe
"MockWebServer tests are integration tests — they go through the actual Retrofit parsing, OkHttp interceptors, and response mapping. I use them specifically to test: does my repository correctly parse this JSON? Does it handle a 401 by throwing the right exception? Does my auth interceptor add the right header? Things a FakeApi can't verify."
PUT replaces the entire resource. PATCH updates only the fields you send. For most mobile apps, PATCH is preferable — you only send what changed, saving bandwidth and avoiding accidental data loss.
// User resource: { id, name, email, phone, address, avatar, bio, ... }

// PUT — replace the ENTIRE resource
// Must send ALL fields, even unchanged ones
@PUT("users/{id}")
suspend fun replaceUser(
    @Path("id") id: String,
    @Body user: UserRequest // ALL fields required
): User
// PUT /users/123 with { name, email, phone, address, avatar, bio }
// If you omit 'phone' → server sets phone = null (data loss!)
// Use when: you want to replace the entire resource semantically

// PATCH — update ONLY the fields you send
// Unmentioned fields remain unchanged on the server
@PATCH("users/{id}")
suspend fun updateUser(
    @Path("id") id: String,
    @Body update: UserPatch // only fields to change
): User

@Serializable
data class UserPatch(
    val name: String? = null, // null = "don't change this"
    val bio: String? = null
)
// PATCH /users/123 with { "name": "Alice Updated" }
// → only name changes, email/phone/address untouched

// Serialization: only include non-null fields in JSON
val json = Json { encodeDefaults = false } // default (null) fields omitted from JSON
// UserPatch(name = "Alice") → {"name":"Alice"} — bio omitted

// Real-world usage:
// Profile edit screen → PATCH (user changed only name, don't touch others)
// Account creation → PUT (replacing the empty record with full data)
// Toggle feature flag → PATCH { "featureEnabled": true }
- PUT: idempotent full replacement — send all fields or risk data loss for omitted ones
- PATCH: partial update — only send changed fields, server merges with existing data
- encodeDefaults=false: null fields omitted from JSON — PATCH body contains only changed fields
- Mobile preference: PATCH — smaller payloads, safer, handles concurrent edits better
- Idempotency: PUT is idempotent by spec; a simple field-set PATCH is idempotent in practice (same PATCH twice = same result), though the spec doesn't guarantee it
"The classic PUT mistake on mobile: user edits their name, app sends PUT with only the name field, server sets all other fields to null. Data wipe. PATCH exists specifically for partial updates — always use it for profile editing and settings. Set encodeDefaults=false in Kotlin Serialization so null fields are excluded from the request body."
gRPC is Google's RPC framework using Protocol Buffers (binary format) over HTTP/2. It's significantly faster and more efficient than REST+JSON, but more complex to set up. Best for internal microservices or high-performance data-heavy apps.
// REST+JSON vs gRPC+Protobuf comparison
// Same user object:
// JSON:     {"id":"123","name":"Alice","email":"[email protected]"} = 47 bytes
// Protobuf: binary encoded = ~15 bytes (3x smaller)
// Protobuf also parses 5-10x faster than JSON

// Protocol Buffer definition (.proto file)
// syntax = "proto3";
// message User { string id = 1; string name = 2; string email = 3; }
// service UserService { rpc GetUser (GetUserRequest) returns (User); }

// Android gRPC setup (generated code from .proto)
// implementation("io.grpc:grpc-android:1.64.0")
// implementation("io.grpc:grpc-kotlin-stub:1.4.0")
val channel = ManagedChannelBuilder
    .forAddress("api.example.com", 443)
    .useTransportSecurity()
    .build()

val stub = UserServiceGrpcKt.UserServiceCoroutineStub(channel)
val user = stub.getUser(getUserRequest { id = "123" }) // type-safe, binary

// gRPC streaming — server streams responses
val userFlow: Flow<User> = stub.watchUser(watchRequest { id = "123" })
userFlow.collect { update -> println(update.name) }

// When gRPC makes sense on Android:
// ✅ Internal microservices backend (not public API)
// ✅ High-frequency data (stock prices, real-time metrics)
// ✅ Large data transfers (ML model weights, binary data)
// ✅ Bidirectional streaming
// ❌ Public API used by 3rd parties (REST is standard)
// ❌ Small team / prototype (setup overhead not worth it)
- Protobuf: binary format — 3x smaller than JSON, 5-10x faster to parse
- HTTP/2 native: gRPC uses HTTP/2 multiplexing by design — efficient connection usage
- Streaming: server streaming, client streaming, bidirectional — built into the protocol
- Type-safe: .proto file generates client code — no manual JSON parsing
- Trade-off: harder to debug (binary, not human-readable), complex setup, less tooling than REST
"I'd choose gRPC for internal high-throughput services — like a real-time stock feed or ML inference. REST+JSON for anything public-facing or consumer-oriented. The debuggability difference matters: with REST I can test any endpoint in a browser. With gRPC I need specific tooling. That friction adds up in a small team."
CORS (Cross-Origin Resource Sharing) is a browser security policy that restricts web pages from making requests to a different domain than the one that served the page. Android apps are not browsers -- they have no CORS enforcement. An Android OkHttp call to any domain always works regardless of CORS headers.
// CORS does NOT affect Android -- OkHttp is not a browser, no origin concept
val response = okHttpClient.newCall(
    Request.Builder()
        .url("https://api.otherdomain.com/data") // works fine, no CORS check
        .build()
).execute()

// CORS only matters in WebView -- the embedded browser DOES enforce it
val settings = webView.settings
settings.javaScriptEnabled = true
// JavaScript inside WebView making cross-origin XHR → CORS applies
// Server must return: Access-Control-Allow-Origin: https://yourapp.com
- Android native code: no CORS -- OkHttp has no concept of origin, all HTTP requests succeed regardless of server CORS headers
- WebView: CORS applies -- JavaScript running in a WebView is subject to the same browser origin policy
- Why CORS exists: prevents malicious websites from making authenticated requests to other domains using the user's cookies
- Server-side CORS: the Access-Control-Allow-Origin header is for browsers -- Android apps don't send an Origin header
- Interview gotcha: CORS is frequently mistaken for a server or Android issue -- it is purely a browser security mechanism
"Junior devs sometimes spend hours trying to 'fix CORS' in their Android app. CORS only applies to browsers. Your Retrofit calls will never have a CORS error — the server accepts them regardless. If you're seeing CORS errors, it's either in a WebView or you're testing in a browser's fetch API."
Offset pagination (page=1&limit=20) is simple but has problems with real-time data. Cursor-based pagination uses a pointer to the last seen item — more reliable for feeds that change while the user scrolls.
// OFFSET PAGINATION — page numbers
// GET /posts?page=3&limit=20
// Server: SELECT * FROM posts ORDER BY date DESC LIMIT 20 OFFSET 60
// ✅ Simple to implement, easy to understand
// ❌ New posts shift items — page 3 may have duplicates from page 2
// ❌ OFFSET is slow on large tables (DB scans all previous rows)
interface PostApi {
    @GET("posts")
    suspend fun getPosts(@Query("page") page: Int, @Query("limit") limit: Int = 20): PagedPosts

    // Cursor variant, used by the PagingSource below
    @GET("posts")
    suspend fun getPosts(@Query("after") after: String?, @Query("limit") limit: Int = 20): CursorPagedPosts
}

// CURSOR PAGINATION — pointer to last seen item
// GET /posts?after=post_id_xyz&limit=20
// Server: SELECT * FROM posts WHERE id > 'post_id_xyz' ORDER BY id LIMIT 20
// ✅ No duplicates — new posts don't shift the cursor
// ✅ O(log n) with index — fast even on billions of rows
// ❌ Can't jump to page 5 directly
// ❌ Slightly more complex to implement
@Serializable
data class CursorPagedPosts(
    val posts: List<Post>,
    val nextCursor: String? = null // null = no more pages
)

// PagingSource with cursor
class PostPagingSource(private val api: PostApi) : PagingSource<String, Post>() {
    override suspend fun load(params: LoadParams<String>): LoadResult<String, Post> {
        return try {
            val cursor = params.key // null on first load
            val response = api.getPosts(after = cursor, limit = params.loadSize)
            LoadResult.Page(
                data = response.posts,
                prevKey = null, // no going back in cursor pagination
                nextKey = response.nextCursor // null = last page
            )
        } catch (e: Exception) {
            LoadResult.Error(e)
        }
    }

    override fun getRefreshKey(state: PagingState<String, Post>) = null
}
- Offset: page number based — simple but shows duplicates in real-time feeds and slows with data size
- Cursor: pointer to last item — no duplicates, O(log n) on indexed column, ideal for infinite scroll
- Instagram/Twitter/LinkedIn: all use cursor pagination for feeds — consistency matters more than random access
- PagingSource key type: String for cursor (the cursor ID), Int for offset (page number)
- nextCursor=null: signals no more data — Paging 3 stops loading automatically
"For a social feed with continuous new content, cursor pagination is essential. With offset: user scrolls to page 3, a new post is added, all subsequent pages shift by one — user sees duplicate posts. With cursor: 'give me posts after ID xyz' — new posts don't affect existing cursors at all."
Observing network connectivity lets your app adapt in real time — showing offline banners, pausing sync, resuming operations when connectivity returns. The modern approach uses ConnectivityManager with a Flow.
// Observe network connectivity as a Flow
class NetworkMonitor @Inject constructor(
    @ApplicationContext private val context: Context
) {
    val isOnline: Flow<Boolean> = callbackFlow {
        val cm = context.getSystemService(ConnectivityManager::class.java)

        val callback = object : NetworkCallback() {
            override fun onAvailable(network: Network) { trySend(true) }
            override fun onLost(network: Network) { trySend(false) }
            override fun onUnavailable() { trySend(false) }
        }

        val request = NetworkRequest.Builder()
            .addCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET)
            .build()
        cm.registerNetworkCallback(request, callback)

        // Emit current state immediately (check capabilities, not just activeNetwork != null)
        val caps = cm.getNetworkCapabilities(cm.activeNetwork)
        trySend(caps?.hasCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET) == true)

        awaitClose { cm.unregisterNetworkCallback(callback) }
    }.distinctUntilChanged() // don't re-emit same state
}

// In ViewModel — react to connectivity changes
@HiltViewModel
class HomeViewModel @Inject constructor(
    private val networkMonitor: NetworkMonitor,
    private val repo: HomeRepository
) : ViewModel() {

    val isOffline = networkMonitor.isOnline
        .map { !it }
        .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), false)

    init {
        viewModelScope.launch {
            networkMonitor.isOnline.filter { it }.collect {
                repo.sync() // auto-sync when connection restored
            }
        }
    }
}
- ConnectivityManager + NetworkCallback: modern API for connectivity monitoring — replaces deprecated NetworkInfo
- callbackFlow: bridges the callback-based ConnectivityManager API into a Kotlin Flow
- distinctUntilChanged: prevents repeated online/online events — only emit on actual changes
- Auto-sync on reconnect: filter { it } + collect = trigger sync exactly when network comes back
- isOffline StateFlow: collect in Compose with collectAsStateWithLifecycle to show/hide offline banner
"The key pattern: combine offline-first data (Room as source of truth) with NetworkMonitor for sync. When isOnline emits true, trigger a repo.sync(). Users see cached data immediately, and fresh data flows in when connectivity is restored. They never see a loading spinner just because of network state."
Tools like Charles Proxy work by installing a custom CA on the device. SSL pinning defeats this. For maximum security, combine SSL pinning, a strict Network Security Config, and detection of proxy/rooted environments.
// Layer 1: SSL Pinning (blocks most proxy tools)
val pinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .add("api.example.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=") // backup
    .build()

// Layer 2: Detect proxy (Charles, Fiddler, mitmproxy)
fun isProxySet(): Boolean {
    val proxyHost = System.getProperty("http.proxyHost")
    val proxyPort = System.getProperty("http.proxyPort")
    return !proxyHost.isNullOrBlank() && proxyPort != null
}

// Layer 3: Detect rooted device (rooted = custom CAs possible)
fun isRooted(): Boolean {
    return File("/system/app/Superuser.apk").exists() ||
        File("/sbin/su").exists() ||
        Build.TAGS?.contains("test-keys") == true
}

// Layer 4: Network Security Config — block cleartext and user-added CAs
<network-security-config>
    <base-config cleartextTrafficPermitted="false">
        <trust-anchors>
            <certificates src="system" /> <!-- only system CAs, not user-installed -->
            <!-- deliberately no src="user" entry: Charles requires a user-installed CA! -->
        </trust-anchors>
    </base-config>
</network-security-config>
// This alone blocks Charles on non-rooted devices without needing cert pinning

// Combine in security check:
fun performSecurityCheck(): SecurityStatus = when {
    isRooted() -> SecurityStatus.ROOTED_DEVICE
    isProxySet() -> SecurityStatus.PROXY_DETECTED
    else -> SecurityStatus.SECURE
}
- SSL pinning: rejects CAs that proxy tools install — the strongest protection
- Network Security Config without user CAs: blocks Charles on non-rooted devices without pinning
- Proxy detection: check system properties for http.proxyHost — skip sensitive operations
- Root detection: rooted devices can bypass most protections — warn or limit functionality
- Defense in depth: no single layer is perfect — combine all three for maximum coverage
"The simplest defense against Charles Proxy on non-rooted devices: don't add src='user' to your Network Security Config trust anchors. Charles requires installing a user CA — without trusting user CAs, Charles can't intercept. Add SSL pinning on top for rooted device protection. Neither is 100% — a determined attacker on a rooted device can always find a way."
EventListener gives you detailed timing callbacks for every phase of an HTTP request — DNS lookup, TCP connect, TLS handshake, request write, response read. Perfect for performance monitoring without third-party APM tools.
class NetworkEventListener : EventListener() {

    private val callStart = AtomicLong()

    override fun callStart(call: Call) {
        callStart.set(System.currentTimeMillis())
    }

    override fun dnsStart(call: Call, domainName: String) {
        Log.d("Net", "DNS lookup started: $domainName")
    }

    override fun dnsEnd(call: Call, domainName: String, inetAddressList: List<InetAddress>) {
        Log.d("Net", "DNS resolved to: ${inetAddressList.first()}")
    }

    override fun connectStart(call: Call, inetSocketAddress: InetSocketAddress, proxy: Proxy) {
        Log.d("Net", "TCP connect started")
    }

    override fun secureConnectEnd(call: Call, handshake: Handshake?) {
        Log.d("Net", "TLS handshake complete: ${handshake?.tlsVersion}")
    }

    override fun responseHeadersEnd(call: Call, response: Response) {
        val ttfb = System.currentTimeMillis() - callStart.get()
        Log.d("Net", "Time to first byte: ${ttfb}ms")
    }

    override fun callEnd(call: Call) {
        val total = System.currentTimeMillis() - callStart.get()
        val url = call.request().url.toString()
        Log.d("Net", "[$url] Total: ${total}ms")
        analytics.recordNetworkCall(url, total) // report to your analytics
    }
}

// Wire to OkHttp — a factory creates a FRESH listener per call (per-request state)
val client = OkHttpClient.Builder()
    .eventListenerFactory(object : EventListener.Factory {
        override fun create(call: Call): EventListener = NetworkEventListener()
    })
    .build()

// Events fired (in order for a fresh connection — the TLS handshake happens
// inside connection establishment):
// callStart → dnsStart → dnsEnd → connectStart
// → secureConnectStart → secureConnectEnd → connectEnd
// → requestHeadersStart → requestHeadersEnd → requestBodyStart → requestBodyEnd
// → responseHeadersStart → responseHeadersEnd → responseBodyStart → responseBodyEnd → callEnd
- EventListener: lifecycle hooks for every phase of an HTTP call — DNS, TCP, TLS, headers, body
- TTFB (Time to First Byte): responseHeadersEnd - callStart — key performance metric
- Connection reuse: connectStart/End not called for pooled connections — shows pool hit
- Production monitoring: record timing per URL to spot slow endpoints in your analytics
- eventListenerFactory: supply an EventListener.Factory so each call gets its own listener — thread-safe, per-request state
"EventListener is how you build your own lightweight APM for networking. I use it to track TTFB and total request time per endpoint, then report to Firebase Analytics. When a specific API endpoint becomes slow, it shows up in our dashboards before users file a complaint."
Main-thread network calls cause ANR (Application Not Responding) errors. They're detectable at runtime with StrictMode, preventable with architecture (suspend functions), and findable in code review with linting.
// DETECT: StrictMode in debug builds
class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        if (BuildConfig.DEBUG) {
            StrictMode.setThreadPolicy(
                StrictMode.ThreadPolicy.Builder()
                    .detectNetwork()   // crash on network in main thread
                    .detectDiskReads() // crash on disk IO in main thread
                    .penaltyDeath()    // crash immediately (don't just log)
                    .build()
            )
        }
    }
}

// PREVENT: Retrofit suspend functions run on background thread automatically
interface UserApi {
    @GET("users")
    suspend fun getUsers(): List<User> // suspend = background thread ✅

    @GET("users")
    fun getUsersBlocking(): Call<List<User>> // if you call .execute() = main thread ❌
}

// In Repository — withContext ensures IO thread even if called from main
suspend fun getUsers(): List<User> = withContext(Dispatchers.IO) {
    api.getUsers() // redundant for Retrofit suspend funs, but explicit is safe
}

// FIND IN CODE REVIEW: look for .execute() on main thread
// ❌ Bad patterns to grep for:
// api.getUsersBlocking().execute() — blocks current thread
// runBlocking { api.getUsers() } — in Activity.onCreate()? main thread block!

// Custom lint rule to catch blocking calls:
// class NetworkOnMainThreadDetector : Detector() — checks for .execute() usage
- StrictMode.penaltyDeath(): crashes the debug build immediately on main-thread network — impossible to miss
- Retrofit suspend: automatically executes on OkHttp's dispatcher thread — safe by design
- call.execute(): the dangerous one — synchronous, blocks whatever thread calls it
- runBlocking in UI: blocks the main thread if it calls a network suspend function
- Code review grep: search for .execute() and runBlocking() in UI layer files
"I add StrictMode.penaltyDeath() in the debug Application class on day one of every project. It's the most effective tool against accidental main-thread IO — the app crashes immediately in development, so the mistake is caught before code review. penaltyLog() is too easy to miss."
User-Agent identifies your app to the server. OkHttp sends a default one, but a custom User-Agent lets your server analytics distinguish app versions, platforms, and even feature flags — without changing API contracts.
// OkHttp default User-Agent: // "okhttp/4.12.0" — not very informative // Custom User-Agent via interceptor — added to ALL requests automatically class UserAgentInterceptor(@ApplicationContext private val ctx: Context) : Interceptor { private val userAgent: String by lazy { val appName = ctx.getString(R.string.app_name) val versionName = BuildConfig.VERSION_NAME val versionCode = BuildConfig.VERSION_CODE val platform = "Android ${Build.VERSION.RELEASE}" val device = "${Build.MANUFACTURER} ${Build.MODEL}" "$appName/$versionName ($versionCode) $platform; $device" // "MyApp/2.1.0 (210) Android 14; Google Pixel 8" } override fun intercept(chain: Interceptor.Chain): Response = chain.proceed(chain.request().newBuilder() .header("User-Agent", userAgent) .build()) } // What the server can now do with this information: // ✅ Analytics: "70% of API calls from version 2.1.0, 30% from 2.0.3" // ✅ Deprecation: return 410 Gone for versions < 1.5.0 with forced update // ✅ Feature flags: enable new features only for version >= 2.0.0 // ✅ Bug tracking: correlate server errors with specific app versions // ✅ A/B testing: different responses for different app builds // Combine with X-App-Build header for additional context: // X-App-Build: debug / release / staging // X-Request-Id: UUID per request (for distributed tracing)
- User-Agent: identifies your app, version, and platform to the server
- Interceptor approach: added to all requests automatically — no per-endpoint annotation needed
- Version-based routing: server can return different data or enforce minimum app versions
- Distributed tracing: add X-Request-Id UUID to correlate client requests with server logs
- lazy evaluation: UserAgent string built once, cached — no runtime overhead per request
"A good User-Agent is a free analytics dimension. When a bug is reported on version 2.0.3, I can filter server logs by that version to see exactly what requests those users were making. Also useful for gradual API migration: server returns v2 response format only for app versions >= 2.1.0."
Apollo Kotlin is the official GraphQL client for Android — it generates type-safe Kotlin code from your .graphql query files. It has built-in normalized caching and coroutine support.
// build.gradle.kts // plugins { id("com.apollographql.apollo3") version "3.8.0" } // implementation("com.apollographql.apollo3:apollo-runtime:3.8.0") // implementation("com.apollographql.apollo3:apollo-normalized-cache:3.8.0") // Step 1: Define query in src/main/graphql/GetUser.graphql // query GetUser($id: ID!) { // user(id: $id) { id name email avatar { url } } // } // Apollo generates: GetUserQuery + GetUserQuery.Data types // Step 2: Configure Apollo client with caching val store = SqlNormalizedCacheFactory(context, "apollo.db") val apolloClient = ApolloClient.Builder() .serverUrl("https://api.example.com/graphql") .normalizedCache(store) // persistent normalized cache .addHttpHeader("Authorization", "Bearer $token") .build() // Step 3: Execute query with cache policy suspend fun getUser(id: String): User? { val response = apolloClient .query(GetUserQuery(id)) .fetchPolicy(FetchPolicy.CacheFirst) // cache → network .execute() if (response.hasErrors()) { val error = response.errors?.first() throw GraphQLException(error?.message ?: "GraphQL error") } return response.data?.user?.toDomain() } // FetchPolicy options: // CacheFirst: serve from cache if present, otherwise hit the network // NetworkFirst: network, fall back to cache on failure // CacheOnly: never hit network (offline mode) // NetworkOnly: always fresh (like Retrofit default) // CacheAndNetwork: emit cache immediately, then network (two emissions) // Mutation (writing data) val result = apolloClient.mutation(UpdateUserMutation(name = "Alice")).execute() // Apollo automatically updates the normalized cache after mutations
- Apollo generates types from .graphql files: GetUserQuery, GetUserQuery.Data — fully type-safe
- Normalized cache: stores by ID, not by query — updating one user updates it everywhere automatically
- FetchPolicy: cache-first for fast loads, network-only for always-fresh critical data
- Error handling: GraphQL errors are in response.errors (HTTP 200 with errors inside)
- CacheAndNetwork: emit cached version first, then network — great for snappy feeling UX
"Apollo's normalized cache is its biggest advantage over raw REST+Room. When a mutation updates a user, Apollo automatically updates every query that includes that user — no manual cache invalidation code. The cache stores data by type+ID, not by query string."
A CallAdapter transforms Retrofit's Call into another type — Flow, Result, or a custom wrapper. Instead of writing try-catch in every repository, a custom adapter wraps every response automatically.
// Without custom adapter: try-catch in every repository function suspend fun getUser(id: String): Result<User> { return try { Result.success(api.getUser(id)) } catch (e: Exception) { Result.failure(e) } } // Same boilerplate in every single repository function — repeated 50 times // With custom CallAdapter: automatic wrapping interface UserApi { @GET("users/{id}") suspend fun getUser(@Path("id") id: String): Result<User> // Returns Result<User> directly — no try-catch needed anywhere } // Custom CallAdapter factory class ResultCallAdapterFactory : CallAdapter.Factory() { override fun get(returnType: Type, annotations: Array<Annotation>, retrofit: Retrofit): CallAdapter<*, *>? { if (getRawType(returnType) != Call::class.java) return null val upperType = getParameterUpperBound(0, returnType as ParameterizedType) if (getRawType(upperType) != Result::class.java) return null val resultType = getParameterUpperBound(0, upperType as ParameterizedType) return ResultCallAdapter<Any>(resultType) } } // Register on Retrofit val retrofit = Retrofit.Builder() .addCallAdapterFactory(ResultCallAdapterFactory()) .build() // Now in repository — clean, no try-catch suspend fun getUser(id: String): Result<User> = api.getUser(id).map { it.toDomain() } // HttpException and IOException automatically become Result.failure()
- CallAdapter: transforms Retrofit's internal Call into any return type
- Result<T> adapter: wraps success in Result.success(), exceptions in Result.failure()
- DRY: write error wrapping once in the adapter — zero try-catch in repositories
- addCallAdapterFactory(): register on Retrofit builder — applies to all API interfaces
- RxJava equivalent: RxJava2CallAdapterFactory, shipped with Retrofit's rx adapter artifacts, is the same CallAdapter concept applied to RxJava types
"A ResultCallAdapterFactory is one of the highest-ROI architecture patterns in Android networking. Write it once, and every suspend fun returning Result<T> in your API interfaces gets automatic error handling. Your repositories never need try-catch — they just call the API and map the result."
A systematic networking code review checklist prevents the most common production networking bugs — from memory leaks to security vulnerabilities. Each item maps to a real production issue.
// 1. ❌ Multiple OkHttpClient instances class UserRepo { val client = OkHttpClient() } // each creates a new pool class OrderRepo { val client = OkHttpClient() } // ❌ no pooling! // ✅ One @Singleton OkHttpClient shared via DI // 2. ❌ Gson instead of Kotlin Serialization // Can silently set non-null Kotlin fields to null → runtime NPE // 3. ❌ Missing ignoreUnknownKeys val json = Json { } // ❌ new API field = crash on old app versions val json = Json { ignoreUnknownKeys = true } // ✅ // 4. ❌ Auth token in query param @GET("users?api_key=secret123") // ❌ in logs, history, proxy // ✅ Authorization header via interceptor // 5. ❌ No error body parsing throw Exception("HTTP ${e.code()}") // ❌ ignores server error message // ✅ Parse e.response()?.errorBody()?.string() // 6. ❌ No timeout configured val client = OkHttpClient() // default timeout = 10 seconds — often wrong // ✅ Explicit connectTimeout, readTimeout, writeTimeout // 7. ❌ Token refresh without Mutex // Multiple 401s → multiple simultaneous refresh calls // ✅ Mutex in TokenRefreshInterceptor // 8. ❌ Network call in ViewModel constructor class HomeViewModel : ViewModel() { val user = runBlocking { repo.getUser() } // ❌ blocks during VM creation } // ✅ launch in init or call loadData() from UI // 9. ❌ HttpLoggingInterceptor.Level.BODY in release // Logs auth tokens and user data to Logcat in production // ✅ if (BuildConfig.DEBUG) addInterceptor(logging) // 10. ❌ No SSL pinning for sensitive endpoints // ✅ CertificatePinner for payment/auth endpoints at minimum
- Single OkHttpClient: one @Singleton — connection pool shared across all requests
- Kotlin Serialization + ignoreUnknownKeys: null-safe parsing + forward compatibility
- Auth in headers not query params: prevents tokens appearing in server logs
- Error body parsing: server error messages are in errorBody(), not the exception
- Mutex in token refresh: prevents stampede of simultaneous refresh calls on 401
"In a networking code review I check these 10 items in order of severity: (1) OkHttpClient not singleton, (2) logging in release, (3) no ignoreUnknownKeys, (4) auth token in URL, (5) no timeouts, (6) error body ignored, (7) no Mutex in token refresh, (8) no SSL pinning for auth, (9) network on main thread, (10) missing Gson→KotlinX migration. Finding 3+ is common in codebases that grew without a networking specialist."
25 questions on Room, DataStore, SharedPreferences, encrypted storage, and offline-first strategies for 2025-26 Android interviews.
Room is Jetpack's database library — a thin, type-safe layer on top of SQLite. It generates boilerplate SQL code at compile time, validates your queries, and integrates with coroutines and Flow out of the box. Think of it as SQLite with all the hard parts handled for you.
// Raw SQLite — manual, error-prone, no type safety val db = openOrCreateDatabase("users.db", Context.MODE_PRIVATE, null) db.execSQL("CREATE TABLE users (id TEXT, name TEXT)") val cursor = db.rawQuery("SELECT * FROM users WHERE id = ?", arrayOf(id)) // Must manually close cursor, no compile-time query validation // Room — type-safe, compile-validated, coroutine-friendly // 1. Entity — maps to a database table @Entity(tableName = "users") data class UserEntity( @PrimaryKey val id: String, val name: String, val email: String ) // 2. DAO — your query interface @Dao interface UserDao { @Query("SELECT * FROM users WHERE id = :id") suspend fun getUser(id: String): UserEntity? // SQL validated at BUILD time @Insert(onConflict = OnConflictStrategy.REPLACE) suspend fun upsert(user: UserEntity) @Query("SELECT * FROM users ORDER BY name") fun observeAll(): Flow<List<UserEntity>> // live updates via Flow } // 3. Database — ties it together @Database(entities = [UserEntity::class], version = 1) abstract class AppDatabase : RoomDatabase() { abstract fun userDao(): UserDao }
- Compile-time SQL validation: typos and wrong column names are build errors, not runtime crashes
- Type safety: DAO methods return real Kotlin types, not raw Cursors
- Flow integration: observeAll() emits a new list whenever data changes — reactive by default
- No boilerplate: Room generates all the Cursor parsing and ContentValues code for you
- Testable: run against an in-memory database in tests — no device needed
"The single biggest advantage of Room over raw SQLite: @Query validation at compile time. A typo in a column name is a build error in Room. In raw SQLite, it's a NullPointerException at 2 AM in production. That alone makes Room worth it."
DAO (Data Access Object) is the interface that defines how you interact with the database. Room generates the implementation — you just declare what you want to do, and Room figures out the SQL.
@Dao interface ProductDao { // INSERT — add a new row @Insert(onConflict = OnConflictStrategy.REPLACE) suspend fun insert(product: ProductEntity) @Insert suspend fun insertAll(products: List<ProductEntity>) // UPDATE — modify existing row (matches by @PrimaryKey) @Update suspend fun update(product: ProductEntity) // DELETE — remove a row @Delete suspend fun delete(product: ProductEntity) // @Query — full SQL power for reads and writes @Query("SELECT * FROM products WHERE id = :id") suspend fun getById(id: String): ProductEntity? @Query("SELECT * FROM products WHERE category = :cat ORDER BY price ASC") fun observeByCategory(cat: String): Flow<List<ProductEntity>> @Query("DELETE FROM products WHERE lastUpdated < :cutoff") suspend fun deleteOlderThan(cutoff: Long) // Upsert pattern (Room 2.5+) @Upsert suspend fun upsert(product: ProductEntity) // OnConflict strategies: // REPLACE → delete old row, insert new (most common for cache) // IGNORE → skip if already exists (idempotent inserts) // ABORT → throw exception (default) }
- @Insert: add a row — onConflict=REPLACE is the most useful for cache/sync patterns
- @Update: updates by matching the primary key in the entity
- @Delete: removes by matching primary key — pass the entity, not an ID
- @Query: full SQL for anything @Insert/@Update/@Delete can't express
- @Upsert (Room 2.5+): insert or update — cleaner than OnConflictStrategy.REPLACE
"The most common mistake: using @Delete but passing only an ID. @Delete requires the full entity object — Room matches by primary key. If you only have an ID, use @Query('DELETE FROM table WHERE id = :id') instead."
A DAO that returns Flow automatically emits a new value whenever the underlying table changes. This creates a reactive pipeline from database → ViewModel → UI with no manual refresh needed.
// DAO — return Flow for reactive observation @Dao interface OrderDao { @Query("SELECT * FROM orders ORDER BY createdAt DESC") fun observeOrders(): Flow<List<OrderEntity>> // NOT suspend — Flow is cold, emits on every table change } // Repository — exposes domain models, not entities class OrderRepository @Inject constructor(private val dao: OrderDao) { fun observeOrders(): Flow<List<Order>> = dao.observeOrders().map { entities -> entities.map { it.toDomain() } } } // ViewModel — converts Flow to StateFlow for UI @HiltViewModel class OrderViewModel @Inject constructor(repo: OrderRepository) : ViewModel() { val orders = repo.observeOrders() .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), emptyList()) } // Compose — collects with lifecycle awareness val orders by vm.orders.collectAsStateWithLifecycle() // Flow vs LiveData for Room: // Flow: // ✅ Works in non-Android modules (pure Kotlin) // ✅ Powerful operators: map, filter, flatMapLatest, combine // ✅ Coroutine-native — backpressure, cancellation built in // ✅ No lifecycle coupling in Room — ViewModel handles that // LiveData: // ⚠️ Android-only — can't use in domain layer // ⚠️ Limited operators — no flatMapLatest etc. // ✅ Auto lifecycle-aware out of the box (but Compose doesn't need this)
- Flow DAO: no suspend — Room handles emissions; a new list emits on every table change
- Reactive pipeline: DB change → Flow emits → ViewModel → UI updates automatically
- stateIn(): converts cold Flow to hot StateFlow — ViewModel holds latest value
- WhileSubscribed(5000): stops collecting 5s after UI goes to background — saves resources
- Flow over LiveData: works in pure Kotlin modules, richer operators, coroutine-native
"The magic of Flow + Room: when a background sync writes new orders to the database, the UI automatically updates — no polling, no callbacks, no manual notify. The DAO Flow is the single source of truth. Every write triggers an automatic emit to all collectors."
When you change your database schema (add a column, rename a table), you must provide a migration path. Without one, Room throws an exception — or worse, destroys all user data.
// Bump version number when schema changes @Database(entities = [UserEntity::class], version = 2) // was 1 abstract class AppDatabase : RoomDatabase() // Migration 1 → 2: add a 'phone' column to users table val MIGRATION_1_2 = object : Migration(1, 2) { override fun migrate(db: SupportSQLiteDatabase) { db.execSQL("ALTER TABLE users ADD COLUMN phone TEXT") } } // Migration 2 → 3: add an index for faster queries val MIGRATION_2_3 = object : Migration(2, 3) { override fun migrate(db: SupportSQLiteDatabase) { db.execSQL("CREATE INDEX IF NOT EXISTS idx_users_email ON users(email)") } } // Register migrations on the builder val db = Room.databaseBuilder(context, AppDatabase::class.java, "app.db") .addMigrations(MIGRATION_1_2, MIGRATION_2_3) // chain of migrations .build() // Room auto-selects the right path: v1→v3 runs MIGRATION_1_2 then MIGRATION_2_3 // FALLBACK — destroys all data (dev builds only!) Room.databaseBuilder(...) .fallbackToDestructiveMigration() // ❌ NEVER in production .build() // AutoMigration (Room 2.4+) — for simple schema changes @Database(version = 4, autoMigrations = [ AutoMigration(from = 3, to = 4) // Room generates it for simple adds ])
- Version bump: every schema change requires incrementing the @Database version
- Migration object: write the SQL to transform the old schema into the new one
- Chain works: user on v1 installing v3 runs both migrations automatically
- fallbackToDestructiveMigration: deletes all user data — only for dev, never production
- AutoMigration (Room 2.4+): for simple additions Room can generate the migration SQL automatically
"forgetting to add a migration is a production incident — Room throws IllegalStateException on launch and users lose all their cached data. I always write a test that runs the full migration chain using MigrationTestHelper before shipping. Room's exportSchemas=true generates schema JSON files you can use for these tests."
DataStore is the modern replacement for SharedPreferences. It solves SharedPreferences' biggest problems: blocking the main thread and inconsistent behaviour across threads. DataStore is fully async and coroutine-based.
// SharedPreferences — old, synchronous, dangerous on main thread val prefs = context.getSharedPreferences("settings", Context.MODE_PRIVATE) prefs.edit().putString("theme", "dark").apply() // async write (OK) val theme = prefs.getString("theme", "light") // synchronous read (blocks!) // ❌ Not type-safe. ❌ Can throw on main thread. ❌ No Flow support. // DataStore (Preferences) — async, Flow-based, type-safe keys // implementation("androidx.datastore:datastore-preferences:1.1.0") val THEME_KEY = stringPreferencesKey("theme") val Context.dataStore: DataStore<Preferences> by preferencesDataStore("settings") // Read — returns Flow, never blocks val themeFlow: Flow<String> = context.dataStore.data .map { prefs -> prefs[THEME_KEY] ?: "light" } // Write — suspend function, runs on IO thread automatically suspend fun setTheme(theme: String) { context.dataStore.edit { prefs -> prefs[THEME_KEY] = theme } } // Comparison: // SharedPreferences DataStore // Async Partial (apply) Full (suspend + Flow) // Type-safe keys ❌ ✅ // Flow support ❌ ✅ // Error handling Silent failures Throws IOException // Thread safety ❌ ✅ // Proto variant ❌ ✅ (typed objects)
- DataStore is fully async: reads return Flow, writes are suspend — never blocks main thread
- Type-safe keys: stringPreferencesKey(), intPreferencesKey() — typos caught at compile time
- Error handling: DataStore throws IOException on errors — SharedPreferences fails silently
- Thread safety: DataStore uses coroutines internally — safe from any thread
- Two flavours: Preferences DataStore (key-value pairs) and Proto DataStore (typed objects via Protobuf)
"SharedPreferences.commit() blocks the main thread and can cause ANR. SharedPreferences.apply() is async but swallows failures silently. DataStore fixes both: writes are suspend functions on IO, reads are Flow. For any new project I use DataStore exclusively — it's not just a nicer API, it's genuinely safer."
Proto DataStore stores a typed Protobuf object instead of key-value pairs. Use it when your stored data has structure — multiple related fields, nested types, or enums — rather than scattered individual keys.
// Preferences DataStore — good for simple, unrelated settings val THEME_KEY = stringPreferencesKey("theme") val FONT_SIZE = intPreferencesKey("font_size") val NOTIFICATIONS = booleanPreferencesKey("notifications") // Fine for 3 unrelated keys. Gets messy with 20+ keys. // Proto DataStore — good for structured, related data // Define schema in user_preferences.proto: // syntax = "proto3"; // enum Theme { LIGHT = 0; DARK = 1; SYSTEM = 2; } // message UserPreferences { // Theme theme = 1; // int32 font_size = 2; // bool notifications_enabled = 3; // } // Serializer (boilerplate, generated once) object UserPreferencesSerializer : Serializer<UserPreferences> { override val defaultValue: UserPreferences = UserPreferences.getDefaultInstance() override suspend fun readFrom(input: InputStream) = UserPreferences.parseFrom(input) override suspend fun writeTo(t: UserPreferences, output: OutputStream) = t.writeTo(output) } // Usage — type-safe, no string keys val Context.protoStore by dataStore("user_prefs.pb", UserPreferencesSerializer) val prefsFlow: Flow<UserPreferences> = context.protoStore.data suspend fun setDarkMode() { context.protoStore.updateData { current -> current.toBuilder().setTheme(Theme.DARK).build() } }
- Proto DataStore: strongly typed Protobuf schema — no string keys, full type validation
- Use Preferences DataStore: a handful of unrelated simple settings (theme toggle, user ID)
- Use Proto DataStore: complex structured data, nested objects, enums, or many related fields
- Binary format: Protobuf is compact and fast to parse — better than JSON for local storage
- Schema evolution: Protobuf handles adding new fields gracefully — backward compatible
"I choose between them based on data shape. 3 unrelated toggles? Preferences DataStore. A UserSettings object with 10+ related fields including nested types? Proto DataStore — the Protobuf schema makes the structure self-documenting and prevents the 'bag of strings' anti-pattern."
Android provides EncryptedSharedPreferences and EncryptedFile (via Jetpack Security) backed by the AndroidKeyStore. Keys never leave the secure hardware — even a memory dump can't expose them.
// implementation("androidx.security:security-crypto:1.1.0-alpha06") // EncryptedSharedPreferences — for auth tokens, session data val masterKey = MasterKey.Builder(context) .setKeyScheme(MasterKey.KeyScheme.AES256_GCM) .build() val encryptedPrefs = EncryptedSharedPreferences.create( context, "secure_prefs", masterKey, EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV, EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM ) // Keys AND values are encrypted — even if device is rooted encryptedPrefs.edit() .putString("access_token", token) .apply() val token = encryptedPrefs.getString("access_token", null) // EncryptedFile — for sensitive documents, private keys val encryptedFile = EncryptedFile.Builder( context, File(context.filesDir, "private_data.enc"), masterKey, EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB ).build() encryptedFile.openFileOutput().use { it.write(sensitiveBytes) } encryptedFile.openFileInput().use { it.readBytes() } // AndroidKeyStore — hardware-backed, key never in memory // AES256_GCM: authenticated encryption, tamper-evident // setUserAuthenticationRequired(true): biometric-gated key usage // What NOT to store in EncryptedSharedPreferences: // ❌ Encryption keys themselves (use KeyStore directly) // ❌ Bank card numbers (use backend tokenisation) // ✅ Auth tokens, session IDs, user preferences
- EncryptedSharedPreferences: drop-in replacement — same API as SharedPreferences but AES-256 encrypted
- MasterKey backed by AndroidKeyStore: encryption key in secure hardware, never exposed
- Both keys AND values encrypted: even knowing the key name doesn't help an attacker
- EncryptedFile: for arbitrary binary data — documents, certificates, private keys
- setUserAuthenticationRequired: biometric authentication required before key can be used
"EncryptedSharedPreferences uses AES-256 with keys stored in AndroidKeyStore — hardware-backed on modern devices. On a rooted device, an attacker can read regular SharedPreferences in plain text. With EncryptedSharedPreferences they'd need to extract the key from the hardware security module, which is computationally infeasible."
Offline-first means Room is always the single source of truth. The UI observes Room via Flow — it never reads from the network directly. The network only writes to Room; Room then triggers a UI update automatically.
// Data flow: API → Room → Flow → ViewModel → UI // UI never calls API. API only writes to Room. Room notifies UI. class ProductRepositoryImpl @Inject constructor( private val api: ProductApi, private val dao: ProductDao, @IoDispatcher private val io: CoroutineDispatcher ) : ProductRepository { // 1. UI observes this — always from Room, never direct from API override fun observeProducts(): Flow<List<Product>> = dao.observeAll().map { it.map { e -> e.toDomain() } } // 2. Refresh fetches from API and writes to Room → Flow auto-emits override suspend fun refresh(): Result<Unit> = withContext(io) { runCatching { val fresh = api.getProducts() dao.upsertAll(fresh.map { it.toEntity() }) // Room emits updated list to all Flow collectors automatically } } } // ViewModel — UI reads from observeProducts(), triggers refresh @HiltViewModel class ProductViewModel @Inject constructor(private val repo: ProductRepository) : ViewModel() { val products = repo.observeProducts() .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), emptyList()) init { viewModelScope.launch { repo.refresh() } } // kick off sync on load } // What the user experiences: // 1. Screen opens → cached data shows immediately (Room) // 2. Refresh runs in background (API call) // 3. New data written to Room → Flow emits → UI updates seamlessly // 4. No internet? Still shows cached data, no crash
- Single source of truth: Room is always what the UI reads — API only feeds Room
- UI observes Flow: automatically updates when Room is written to — no manual refresh
- Immediate display: cached data shows instantly, fresh data replaces it after network call
- No internet: app still works — shows stale data, not an error screen
- upsertAll: replaces old cache with fresh data atomically
"The key mental model: the UI is a pure function of Room's state. The network is an input that feeds Room, not something the UI directly consumes. When a sync writes to Room, every screen observing that data updates instantly — no callbacks, no notifyDataSetChanged, no setState. It just works."
SQLite only stores primitives (text, integer, real, blob). TypeConverters tell Room how to convert complex types — like Date, List, or custom enums — to and from a storable primitive.
// Problem: SQLite can't store a Date or List directly @Entity data class OrderEntity( @PrimaryKey val id: String, val createdAt: Date, // ❌ Room doesn't know how to store Date val tags: List<String> // ❌ Room doesn't know how to store List ) // Solution: TypeConverters class Converters { // Date ↔ Long (timestamp in milliseconds) @TypeConverter fun dateToLong(date: Date): Long = date.time @TypeConverter fun longToDate(value: Long): Date = Date(value) // List<String> ↔ JSON String @TypeConverter fun listToJson(list: List<String>): String = Json.encodeToString(list) @TypeConverter fun jsonToList(json: String): List<String> = Json.decodeFromString(json) } // Register on the Database class @Database(entities = [OrderEntity::class], version = 1) @TypeConverters(Converters::class) abstract class AppDatabase : RoomDatabase() // Important: don't overuse TypeConverters for large nested objects // ❌ Storing an entire User JSON blob in an Order entity // ✅ Use a foreign key relationship instead (normalised DB) // ✅ TypeConverters for primitives: Date, Enum, List<String>, URI
- SQLite primitives only: TEXT, INTEGER, REAL, BLOB — everything else needs a TypeConverter
- Paired converters: always write both directions — toStorage() and fromStorage()
- Register at @Database level: applies to all entities and DAOs in that database
- Common uses: Date↔Long, Enum↔String, List↔JSON, URI↔String
- Don't overuse: large nested objects should be separate entities with foreign keys, not TypeConverted blobs
"TypeConverters are a double-edged sword. Date↔Long is perfect — small, fast, queryable. List<String>↔JSON is fine for small lists you don't query by individual items. But if you store an entire complex object as JSON, you've lost the ability to filter or sort by its fields in SQL. That's a sign you need a proper relationship."
Room uses @ForeignKey for referential integrity and special result classes (one-to-many with @Relation, many-to-many with a junction table) to query related data efficiently.
// ONE-TO-MANY: One User has many Orders @Entity data class UserEntity(@PrimaryKey val id: String, val name: String) @Entity(foreignKeys = [ForeignKey( entity = UserEntity::class, parentColumns = ["id"], childColumns = ["userId"], onDelete = ForeignKey.CASCADE // delete orders when user deleted )]) data class OrderEntity(@PrimaryKey val id: String, val userId: String, val total: Double) // Result class for the relationship data class UserWithOrders( @Embedded val user: UserEntity, @Relation(parentColumn = "id", entityColumn = "userId") val orders: List<OrderEntity> ) @Transaction // ← required for @Relation queries to be atomic @Query("SELECT * FROM users") fun observeUsersWithOrders(): Flow<List<UserWithOrders>> // MANY-TO-MANY: Products ↔ Tags (via junction table) @Entity(primaryKeys = ["productId", "tagId"]) data class ProductTagCrossRef(val productId: String, val tagId: String) data class ProductWithTags( @Embedded val product: ProductEntity, @Relation( parentColumn = "id", entityColumn = "id", associateBy = Junction(ProductTagCrossRef::class, parentColumn = "productId", entityColumn = "tagId") ) val tags: List<TagEntity> )
- @ForeignKey: enforces referential integrity — Room prevents orphaned child rows
- CASCADE: when parent is deleted, all children auto-delete
- @Relation: Room executes a separate query and joins the results — not a SQL JOIN
- @Transaction on @Relation: makes the multi-query read atomic — prevents partial reads
- Junction table: many-to-many via a cross-reference entity with composite primary key
"Always put @Transaction on any DAO method that has @Relation. Without it, Room runs two separate SQL queries — if a write happens between them, you get inconsistent data. @Transaction wraps both reads in a single database transaction, guaranteeing consistency."
Room supports in-memory databases; paired with Robolectric, most DAO tests can run on the JVM with no emulator or device. MigrationTestHelper validates your migration scripts against exported schemas.
// androidTestImplementation("androidx.room:room-testing:2.6.1") // Basic DAO test — in-memory database @RunWith(AndroidJUnit4::class) class UserDaoTest { private lateinit var db: AppDatabase private lateinit var dao: UserDao @Before fun setUp() { db = Room.inMemoryDatabaseBuilder( ApplicationProvider.getApplicationContext(), AppDatabase::class.java ) .allowMainThreadQueries() // allowed in tests only .build() dao = db.userDao() } @After fun tearDown() { db.close() } @Test fun insertAndRetrieve() = runTest { val user = UserEntity("1", "Alice", "[email protected]") dao.insert(user) val result = dao.getUser("1") assertEquals("Alice", result?.name) } @Test fun observeEmitsOnInsert() = runTest { val collected = mutableListOf<List<UserEntity>>() val job = launch { dao.observeAll().collect { collected.add(it) } } dao.insert(UserEntity("1", "Alice", "[email protected]")) advanceUntilIdle() assertTrue(collected.any { it.size == 1 }) job.cancel() } } // Migration test — validates your Migration objects @RunWith(AndroidJUnit4::class) class MigrationTest { @get:Rule val helper = MigrationTestHelper(InstrumentationRegistry.getInstrumentation(), AppDatabase::class.java) @Test fun migrate1To2() { helper.createDatabase("test.db", 1) helper.runMigrationsAndValidate("test.db", 2, true, MIGRATION_1_2) } }
- inMemoryDatabaseBuilder: creates a fresh DB in memory — isolated, fast, no file cleanup needed
- allowMainThreadQueries(): only use in tests — never in production code
- Flow testing with Turbine: use app.cash.turbine for clean Flow assertion in DAO tests
- MigrationTestHelper: runs your migrations against exported schemas — catches SQL errors before users hit them
- exportSchema = true in @Database (plus the room.schemaLocation build argument): required so MigrationTestHelper can load the exported schema JSON
"I run MigrationTestHelper on every PR that touches a Migration. It loads the exported schema JSON for version N, runs the migration, and validates the resulting schema matches version N+1. This has caught countless 'the column name is slightly wrong' bugs before they shipped."
SharedPreferences is synchronous key-value storage backed by an XML file -- simple, but the first read loads the entire file and can block the main thread. DataStore (Preferences or Proto) is the modern async replacement, exposing data as a Flow. Room is for relational, structured data. Raw files are for binary blobs like images. Choose based on what you're storing and whether you need queries.
// SharedPreferences -- synchronous, legacy, avoid for new code
val prefs = context.getSharedPreferences("settings", Context.MODE_PRIVATE)
prefs.edit().putString("theme", "dark").apply()

// DataStore -- async, type-safe, Flow-based (use this for new code)
val themeFlow: Flow<String> = dataStore.data.map { prefs -> prefs[THEME_KEY] ?: "light" }
suspend fun saveTheme(theme: String) = dataStore.edit { it[THEME_KEY] = theme }

// Room -- structured relational data with SQL queries
@Dao
interface ProductDao {
    @Query("SELECT * FROM products WHERE category = :cat")
    fun getByCategory(cat: String): Flow<List<Product>>
}
- SharedPreferences: synchronous XML-backed key-value store -- blocks main thread on first read, use only for legacy compatibility
- Preferences DataStore: async, Flow-based -- the recommended replacement for SharedPreferences, with a built-in SharedPreferencesMigration helper for moving existing data over
- Proto DataStore: type-safe, schema-defined -- use when you need guaranteed data types and structured prefs (not just String/Boolean)
- Room: relational SQL database -- use when you need queries, joins, migrations, and structured data with relationships
- Decision rule: user preferences (theme, language) → DataStore. Structured data with queries → Room. Binary files → internal/external storage
"The most common mistake: storing a list of items in DataStore as JSON, when Room is the right tool. DataStore loads its entire contents into memory on every read — fine for 3 settings, terrible for 500 products. The moment you think 'I'll store this as a JSON array', that's your signal to use Room instead."
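For the Proto DataStore bullet: a typed DataStore is built around a custom Serializer. This sketch swaps protobuf for kotlinx.serialization (a documented alternative); the Settings fields and file name are illustrative, not from the original.

```kotlin
import android.content.Context
import androidx.datastore.core.CorruptionException
import androidx.datastore.core.DataStore
import androidx.datastore.core.Serializer
import androidx.datastore.dataStore
import kotlinx.serialization.SerializationException
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json
import java.io.InputStream
import java.io.OutputStream

@Serializable
data class Settings(val theme: String = "system", val fontScale: Float = 1.0f)

object SettingsSerializer : Serializer<Settings> {
    override val defaultValue = Settings()

    override suspend fun readFrom(input: InputStream): Settings = try {
        Json.decodeFromString(Settings.serializer(), input.readBytes().decodeToString())
    } catch (e: SerializationException) {
        // Corrupt file → surface as CorruptionException so DataStore can recover
        throw CorruptionException("Cannot read settings", e)
    }

    override suspend fun writeTo(t: Settings, output: OutputStream) {
        output.write(Json.encodeToString(Settings.serializer(), t).encodeToByteArray())
    }
}

// One instance per file, application-scoped — same rule as Preferences DataStore
val Context.settingsStore: DataStore<Settings> by dataStore(
    fileName = "settings.json",
    serializer = SettingsSerializer
)
```

The payoff over Preferences DataStore: reads return a `Settings` object with guaranteed types and defaults, not a bag of nullable keys.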
Cache-then-network shows cached data immediately while fetching fresh data in the background. Stale-while-revalidate is the formal name for this pattern — serve stale data while revalidating with the network.
// Pattern: emit cache first, then network, then updated cache
fun getProducts(): Flow<Resource<List<Product>>> = flow {
    // Step 1: emit cached data immediately
    emit(Resource.Loading())
    val cached = dao.getAll()
    if (cached.isNotEmpty()) emit(Resource.Success(cached.map { it.toDomain() }))

    // Step 2: fetch fresh data from network
    runCatching { api.getProducts() }
        .onSuccess { fresh ->
            dao.upsertAll(fresh.map { it.toEntity() })
            // Step 3: emit fresh data (Room Flow would auto-emit, but explicit here)
            emit(Resource.Success(fresh.map { it.toDomain() }))
        }
        .onFailure { error ->
            // Network failed but cache available — not a fatal error
            emit(Resource.Error("Showing cached data", cached.map { it.toDomain() }))
        }
}.flowOn(Dispatchers.IO)

// Cache invalidation — when to discard stale cache
@Entity
data class ProductEntity(
    @PrimaryKey val id: String,
    val name: String,
    val cachedAt: Long = System.currentTimeMillis() // timestamp every cache write
)

fun isCacheStale(cachedAt: Long, ttlMinutes: Int = 30): Boolean =
    System.currentTimeMillis() - cachedAt > ttlMinutes * 60_000L

// Strategy: always show cache; refresh if stale
if (isCacheStale(cached.first().cachedAt)) refresh()
- Stale-while-revalidate: show cached data immediately, fetch fresh in background — best UX
- Resource wrapper: Loading/Success/Error sealed class communicates state to UI
- Timestamp-based TTL: store cachedAt timestamp, check age before deciding to refresh
- Network failure graceful: if refresh fails, show cached data with a subtle "may be stale" indicator
- Room + offline-first: the Room Flow observer means cache update auto-propagates to UI
"The UX difference is huge: with cache-then-network, the screen shows content in ~50ms (Room read). The network fetch takes 300ms. Users see instant content that silently updates. Without cache: 300ms of blank/loading screen every time. Stale-while-revalidate is the standard pattern in any well-architected Android app."
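The Resource wrapper referenced above isn't defined in the snippet; a minimal version consistent with that usage (Loading with no payload, Success with data, Error carrying the stale data) might look like:

```kotlin
// Minimal Resource wrapper matching the flow's usage — names are illustrative
sealed class Resource<T> {
    class Loading<T> : Resource<T>()
    data class Success<T>(val data: T) : Resource<T>()
    data class Error<T>(val message: String, val data: T? = null) : Resource<T>()
}
```

The UI can then exhaustively `when` over the three states, with `Error.data` allowing "show stale content plus an error banner".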
Room has first-class Paging 3 support — a DAO method returning PagingSource gives you pagination with almost no boilerplate. Room generates the PagingSource implementation; the Paging 3 library handles page fetching, loading states, and retries.
// Room DAO — return PagingSource instead of List
@Dao
interface MessageDao {
    @Query("SELECT * FROM messages ORDER BY timestamp DESC")
    fun paginate(): PagingSource<Int, MessageEntity>
    // Room generates the PagingSource implementation — no manual paging logic
}

// Repository — create Pager from DAO PagingSource
fun observeMessages(): Flow<PagingData<Message>> = Pager(
    config = PagingConfig(
        pageSize = 20,
        enablePlaceholders = false,
        prefetchDistance = 5
    ),
    pagingSourceFactory = { dao.paginate() }
).flow.map { pagingData -> pagingData.map { it.toDomain() } }

// ViewModel
val messages = repo.observeMessages().cachedIn(viewModelScope)

// Compose UI — collectAsLazyPagingItems handles everything
val messages = vm.messages.collectAsLazyPagingItems()
LazyColumn {
    items(count = messages.itemCount, key = messages.itemKey { it.id }) { index ->
        messages[index]?.let { msg -> MessageRow(msg) }
    }
    // Append loading indicator
    if (messages.loadState.append is LoadState.Loading) {
        item { CircularProgressIndicator() }
    }
}

// RemoteMediator — combine Room + API for full offline paging
// Paging 3 fetches pages from Room; when Room runs out, RemoteMediator
// fetches more from the API and inserts into Room, then Paging continues from Room
- Room PagingSource: return PagingSource from DAO — Room generates full implementation
- Pager: wraps the PagingSource and emits PagingData — the page controller
- cachedIn(viewModelScope): caches pages in ViewModel — survives recompositions
- collectAsLazyPagingItems(): Compose integration — drives LazyColumn with loading states
- RemoteMediator: combines Room cache + API — Room is the local page source, API fills gaps
"Room's PagingSource is the cleanest paging implementation available — the DAO just declares what to query, Room generates the paging logic. Paging 3 handles loading states, retries, and headers/footers. The combination of Room + Paging 3 + RemoteMediator is the recommended pattern for any large dataset that has both local and remote data."
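A minimal RemoteMediator skeleton, to make the Room + API combination concrete. The API and DAO method names (`getMessages`, `clearAll`, `upsertAll`) are assumptions; real implementations usually also persist remote page keys.

```kotlin
import androidx.paging.*
import androidx.room.withTransaction

@OptIn(ExperimentalPagingApi::class)
class MessageRemoteMediator(
    private val api: MessageApi,   // hypothetical Retrofit service
    private val db: AppDatabase
) : RemoteMediator<Int, MessageEntity>() {

    override suspend fun load(
        loadType: LoadType,
        state: PagingState<Int, MessageEntity>
    ): MediatorResult = try {
        val page = when (loadType) {
            LoadType.REFRESH -> 0
            // Chat-style lists usually only append — nothing to prepend
            LoadType.PREPEND -> return MediatorResult.Success(endOfPaginationReached = true)
            LoadType.APPEND -> state.pages.sumOf { it.data.size } / state.config.pageSize
        }
        val response = api.getMessages(page = page, size = state.config.pageSize)

        // Write the network page into Room atomically; Paging reads from Room
        db.withTransaction {
            if (loadType == LoadType.REFRESH) db.messageDao().clearAll()
            db.messageDao().upsertAll(response.map { it.toEntity() })
        }
        MediatorResult.Success(endOfPaginationReached = response.isEmpty())
    } catch (e: Exception) {
        MediatorResult.Error(e)
    }
}
```

Wire it in via `Pager(config, remoteMediator = MessageRemoteMediator(api, db)) { dao.paginate() }` — Room stays the single source the UI pages from.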
WAL (Write-Ahead Logging) is a journaling mode that dramatically improves concurrent read/write performance. Room enables WAL by default since Room 2.2 — you usually don't need to configure it manually.
// Journal modes in SQLite:
// Default (DELETE): writes lock the whole database
// WAL: writes and reads can happen simultaneously

// Without WAL (default mode):
// Thread 1 writes to DB → Thread 2 blocked, waiting
// Thread 1 finishes writing → Thread 2 can now read
// Result: reads and writes serialised → slower with concurrent access

// With WAL:
// Thread 1 writes to WAL file → Thread 2 can read main DB concurrently
// WAL checkpointed to main DB → periodically, in background
// Result: reads and writes run simultaneously → much faster

// Room 2.2+ enables WAL by default — you get this for free
val db = Room.databaseBuilder(context, AppDatabase::class.java, "app.db")
    .build() // WAL already enabled — no code needed

// Manual control (if needed)
val db = Room.databaseBuilder(context, AppDatabase::class.java, "app.db")
    .setJournalMode(JournalMode.WRITE_AHEAD_LOGGING) // explicit WAL
    .build()

val db = Room.databaseBuilder(...)
    .setJournalMode(JournalMode.TRUNCATE) // disable WAL (rare — e.g. shared DBs)
    .build()

// When NOT to use WAL:
// - Database shared with other processes (WAL doesn't support multi-process)
// - Very low memory devices (WAL uses slightly more memory)
// For standard single-process apps: always use WAL (Room default)
- WAL: writes go to a separate log file, reads continue from main DB — true concurrency
- Room 2.2+ default: you get WAL automatically — no configuration needed
- Performance: significantly better throughput when reads and writes happen simultaneously
- Multi-process exception: disable WAL if multiple processes share the same database
- Practical impact: DAOs called from multiple coroutines won't block each other
"WAL is why Room can handle concurrent reads and writes from multiple coroutines without serialising everything. A background sync writing products while the UI reads them runs without locking. In practice, Room 2.2+ enables this automatically — but knowing what WAL is and why it matters shows database depth."
Sync conflicts happen when the same record is modified both locally (offline) and remotely. The three main strategies are last-write-wins, server-wins, and three-way merge — each with different trade-offs.
// Track what needs syncing with a status field
@Entity
data class NoteEntity(
    @PrimaryKey val id: String,
    val content: String,
    val updatedAt: Long, // timestamp for last-write-wins
    val syncStatus: SyncStatus = SyncStatus.SYNCED
)

enum class SyncStatus {
    SYNCED,   // matches server
    PENDING,  // local change not yet synced
    CONFLICT  // local and server both changed
}

// Strategy 1: Last Write Wins — compare timestamps
suspend fun syncNote(local: NoteEntity, remote: NoteDto) {
    when {
        remote.updatedAt > local.updatedAt -> {
            // Server newer — overwrite local
            dao.upsert(remote.toEntity().copy(syncStatus = SyncStatus.SYNCED))
        }
        local.updatedAt > remote.updatedAt && local.syncStatus == SyncStatus.PENDING -> {
            // Local newer — push to server
            api.updateNote(local.toDto())
            dao.upsert(local.copy(syncStatus = SyncStatus.SYNCED))
        }
        else -> {
            // Both modified — mark conflict for user resolution
            dao.upsert(local.copy(syncStatus = SyncStatus.CONFLICT))
        }
    }
}

// Strategy 2: Server always wins (simplest)
// On sync: replace all local data with server data
// ✅ No conflict logic needed
// ❌ Local changes discarded on conflict
// Good for: product catalogues, config, anything user doesn't edit

// Strategy 3: Show conflict UI (best for user-owned data)
// Mark conflicted rows, show "your version vs server version" picker
// Good for: notes, todos, documents
- SyncStatus field: track whether each record is SYNCED, PENDING, or CONFLICT in Room
- Timestamp comparison: updatedAt on both local and remote — the newer timestamp wins
- Server-wins: simplest strategy — good for read-only caches and server-managed data
- Last-write-wins: good for non-collaborative data — notes, settings, preferences
- Conflict UI: for collaborative or important data — let the user choose which version to keep
"The sync strategy must match the data ownership model. Server-wins for a product catalogue — the server owns it. Last-write-wins for a notes app — the user owns each note. Conflict UI for shared documents — multiple people own it. Choosing wrong means silent data loss, which is always worse than a conflict dialog."
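A full sync pass tying the SyncStatus field to the resolver could be sketched like this; the DAO and API method names (`getByStatus`, `getNote`, `getNotesUpdatedSince`, `lastSyncAt`) are assumptions for illustration.

```kotlin
// Sketch: push local edits, then pull server changes, resolving each pair
suspend fun syncAll() {
    // 1. Push: every PENDING row gets resolved against its server counterpart
    dao.getByStatus(SyncStatus.PENDING).forEach { local ->
        val remote = api.getNote(local.id)      // hypothetical fetch-by-id
        syncNote(local, remote)                 // last-write-wins resolver
    }

    // 2. Pull: anything the server changed since our last successful sync
    api.getNotesUpdatedSince(lastSyncAt()).forEach { remote ->
        val local = dao.getById(remote.id)
        if (local == null) {
            dao.upsert(remote.toEntity())       // new on server — just insert
        } else {
            syncNote(local, remote)             // existing — resolve
        }
    }
}
```

Running this from a WorkManager job with network constraints gives offline edits an automatic retry path.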
Android has multiple storage locations, each with different visibility, persistence, and permission requirements. Choosing the right location avoids permission issues and data leaks.
// INTERNAL STORAGE — private to your app, always available
val filesDir = context.filesDir             // persistent, deleted on uninstall
val cacheDir = context.cacheDir             // deleted by OS when storage is low
val noBackupDir = context.noBackupFilesDir  // excluded from auto-backup
// No permissions needed. Not visible to other apps or user.
// Room and DataStore use internal storage automatically

// EXTERNAL STORAGE — shared or app-specific
val externalFiles = context.getExternalFilesDir(Environment.DIRECTORY_PICTURES)
// App-specific external — no permission needed on Android 10+
// Visible to user in file manager. Deleted on uninstall.
// Use for: user-generated files they'd want to keep

// MediaStore — shared media (photos, music, videos)
// READ_MEDIA_IMAGES permission (Android 13+) needed to read other apps' media
// Writing YOUR app's media: no permission needed
val contentValues = ContentValues().apply {
    put(MediaStore.Images.Media.DISPLAY_NAME, "photo.jpg")
    put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
}
val uri = context.contentResolver.insert(
    MediaStore.Images.Media.EXTERNAL_CONTENT_URI, contentValues
)

// Summary:
// Database (Room) → filesDir (auto)
// App settings → DataStore → filesDir (auto)
// Downloaded files → cacheDir (or externalFilesDir for user-kept)
// User photos/videos → MediaStore (shared) or externalFilesDir (private)
// Sensitive data → filesDir + EncryptedFile
- filesDir: permanent internal storage — survives low storage, deleted on uninstall only
- cacheDir: temporary — Android can delete this anytime if storage is low
- externalFilesDir: app-specific external — visible to user, no permission on API 29+
- MediaStore: for shared photos/videos/music — survives uninstall, accessible to other apps
- cacheDir vs filesDir: use cacheDir for downloaded content you can re-fetch; filesDir for data you can't easily replace
"The Android storage permissions maze: on API 29+, you need zero permissions for app-specific external storage (getExternalFilesDir). For MediaStore writes (saving a photo to gallery), also no permission needed — just use MediaStore API. READ_MEDIA_IMAGES is only needed to read OTHER apps' photos. Most apps use far more permissions than they need."
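For the "filesDir + EncryptedFile" line in the summary, a sketch using Jetpack Security (requires the androidx.security:security-crypto artifact; the file name is illustrative):

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedFile
import androidx.security.crypto.MasterKey
import java.io.File

fun writeAndReadSecret(context: Context): String {
    // Keystore-backed master key — never leaves secure hardware where available
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build()

    val encryptedFile = EncryptedFile.Builder(
        context,
        File(context.filesDir, "secrets.bin"),  // illustrative file name
        masterKey,
        EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
    ).build()

    // Streams behave like normal file I/O; encryption is transparent
    encryptedFile.openFileOutput().use { it.write("sensitive".encodeToByteArray()) }
    return encryptedFile.openFileInput().use { it.readBytes().decodeToString() }
}
```

The file on disk is ciphertext, so even on a rooted device the contents aren't readable without the Keystore key.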
DataStore should live in a dedicated data module with an interface, not accessed directly from feature modules. This prevents tight coupling and makes it mockable in tests.
// ❌ BAD: feature module directly accesses DataStore
class ThemeViewModel(private val context: Context) : ViewModel() {
    val theme = context.dataStore.data.map { it[THEME_KEY] }
    // ❌ Context in ViewModel. ❌ Feature knows about DataStore internals.
}

// ✅ GOOD: wrap DataStore in an interface

// :core:data — the repository interface (no DataStore import here)
interface UserSettingsRepository {
    val theme: Flow<Theme>
    suspend fun setTheme(theme: Theme)
    val notificationsEnabled: Flow<Boolean>
    suspend fun setNotifications(enabled: Boolean)
}

// :core:datastore — the DataStore implementation
class UserSettingsRepositoryImpl @Inject constructor(
    private val dataStore: DataStore<Preferences>
) : UserSettingsRepository {

    private val THEME_KEY = stringPreferencesKey("theme")
    private val NOTIFICATIONS_KEY = booleanPreferencesKey("notifications")

    override val theme = dataStore.data
        .map { prefs -> Theme.valueOf(prefs[THEME_KEY] ?: Theme.SYSTEM.name) }

    override suspend fun setTheme(theme: Theme) {
        dataStore.edit { it[THEME_KEY] = theme.name }
    }

    override val notificationsEnabled = dataStore.data
        .map { prefs -> prefs[NOTIFICATIONS_KEY] ?: true }

    override suspend fun setNotifications(enabled: Boolean) {
        dataStore.edit { it[NOTIFICATIONS_KEY] = enabled }
    }
}

// Hilt wiring — @Singleton, one DataStore instance for the whole app
@Provides
@Singleton
fun provideDataStore(@ApplicationContext ctx: Context): DataStore<Preferences> =
    ctx.dataStore

// Feature module — injects interface, never knows about DataStore
class ThemeViewModel @Inject constructor(
    private val settings: UserSettingsRepository
) : ViewModel()
- Repository interface: feature modules depend on the interface, not DataStore directly
- Single DataStore instance: @Singleton — never create multiple DataStore for the same file
- No Context in ViewModel: the repository wraps DataStore, ViewModel stays clean
- Testable: swap UserSettingsRepositoryImpl with FakeUserSettingsRepository in tests
- Separate module: :core:datastore for implementation, :core:data for interfaces
"The DataStore @Singleton rule: creating two DataStore instances pointing to the same file causes data corruption — both try to write simultaneously. One instance, application-scoped, injected everywhere via Hilt. The repository wrapper ensures feature modules are completely decoupled from the DataStore API."
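The fake mentioned in the testability bullet is just a hand-written class backed by StateFlow — a sketch against the UserSettingsRepository interface above:

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.MutableStateFlow

// In-memory test double — no DataStore, no Context, no disk
class FakeUserSettingsRepository : UserSettingsRepository {

    private val _theme = MutableStateFlow(Theme.SYSTEM)
    override val theme: Flow<Theme> = _theme
    override suspend fun setTheme(theme: Theme) {
        _theme.value = theme
    }

    private val _notifications = MutableStateFlow(true)
    override val notificationsEnabled: Flow<Boolean> = _notifications
    override suspend fun setNotifications(enabled: Boolean) {
        _notifications.value = enabled
    }
}
```

A ViewModel test constructs the ViewModel with this fake directly — no Hilt, no instrumentation, millisecond-fast.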
Android Auto Backup (API 23+) automatically backs up app data to Google Drive -- up to 25MB, daily when the device is idle, charging, and on WiFi. Users restore their backup when they install the app on a new device. You configure what's included or excluded via a backup_rules XML file.
// AndroidManifest.xml -- enable and configure backup
// <application
//     android:allowBackup="true"
//     android:dataExtractionRules="@xml/data_extraction_rules"   (API 31+)
//     android:fullBackupContent="@xml/backup_rules">             (API 23-30)

// res/xml/backup_rules.xml -- include/exclude specific files
// <full-backup-content>
//     <include domain="database" path="app_database.db"/>
//     <exclude domain="sharedpref" path="session_prefs.xml"/>  <!-- do not back up auth tokens -->
// </full-backup-content>

class MyBackupAgent : BackupAgentHelper() {
    override fun onCreate() {
        addHelper("prefs", SharedPreferencesBackupHelper(this, "settings"))
        addHelper("db", FileBackupHelper(this, "../databases/app.db"))
    }
}
- Auto Backup: free, zero-code backup to Google Drive -- up to 25MB, happens automatically when idle + charging + WiFi
- Exclude sensitive data: always exclude auth tokens, session data, and cached content from backup -- these should not restore to new devices
- API 31+ uses dataExtractionRules: separate rules for cloud backup and device-to-device transfer
- BackupAgentHelper: legacy key-value backup agent -- extend it only when you need custom backup logic (encrypt data before backup, transform schema on restore); plain Auto Backup needs no agent at all
- Test with adb: 'adb shell bmgr run' triggers a backup, 'adb shell bmgr restore' triggers restore -- verify your app handles it correctly
"Auto Backup is a double-edged sword. It's great for restoring a user's notes when they get a new phone. But if you backup EncryptedSharedPreferences or session tokens, they might restore to a different device and cause security issues. Rule: backup domain data (notes, settings, drafts), never backup credentials or device-specific keys."
Scoped Storage (introduced in Android 10, enforced in Android 11) restricts apps to their own directories and MediaStore. You can no longer browse the entire filesystem — each app is sandboxed.
// Before Scoped Storage (Android 9 and below):
// READ_EXTERNAL_STORAGE + WRITE_EXTERNAL_STORAGE → access entire SD card

// After Scoped Storage (Android 10+):
// App-specific external → no permission needed
// Shared media (photos, music) → READ_MEDIA_IMAGES, READ_MEDIA_VIDEO, READ_MEDIA_AUDIO
// Other files → Storage Access Framework (SAF) / file picker

// 1. Reading/writing YOUR app's files — no permission
val myFile = File(context.getExternalFilesDir(null), "data.json")
myFile.writeText("hello") // no permission needed on API 29+

// 2. Saving a photo to the gallery — no permission, use MediaStore
val values = ContentValues().apply {
    put(MediaStore.Images.Media.DISPLAY_NAME, "screenshot.jpg")
    put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
    put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/MyApp")
}
val uri = contentResolver.insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values)

// 3. Opening a user-picked file — Storage Access Framework
val launcher = registerForActivityResult(ActivityResultContracts.GetContent()) { uri ->
    // uri is a content URI — read via contentResolver.openInputStream(uri)
    val input = uri?.let { contentResolver.openInputStream(it) } // null if user cancelled
}
// No READ permission needed — user explicitly chose the file

// MANAGE_EXTERNAL_STORAGE — full access (requires Play Store approval)
// Only for: file managers, antivirus apps, backup tools
- Scoped Storage: apps sandbox to their own directories — no free-range filesystem access
- App-specific files: getExternalFilesDir() — no permission needed on Android 10+
- MediaStore: save photos/videos to shared media without permission
- SAF (Storage Access Framework): user opens a file picker — app gets a URI without needing permissions
- MANAGE_EXTERNAL_STORAGE: full access only for legitimate file manager apps — needs Play Store approval
"Scoped Storage interview insight: most apps don't need storage permissions anymore. No permission to save a photo to gallery (use MediaStore). No permission to read a user-selected file (use SAF). READ_EXTERNAL_STORAGE and WRITE_EXTERNAL_STORAGE are legacy — if you're still requesting them in a modern app, you're probably doing it wrong."
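One SAF detail worth adding: GetContent grants one-shot access that expires with the Activity. To keep access to a user-picked file across app restarts, use OpenDocument plus takePersistableUriPermission — a sketch:

```kotlin
import android.content.Intent
import androidx.activity.result.contract.ActivityResultContracts

// Inside an Activity/Fragment: persist long-term access to a picked document
val openDoc = registerForActivityResult(ActivityResultContracts.OpenDocument()) { uri ->
    uri ?: return@registerForActivityResult // user cancelled

    // Persist the grant so the URI stays readable after process death
    contentResolver.takePersistableUriPermission(
        uri,
        Intent.FLAG_GRANT_READ_URI_PERMISSION
    )
    // Store uri.toString() (e.g. in DataStore) and reopen later via contentResolver
}

openDoc.launch(arrayOf("application/pdf")) // MIME filter — illustrative
```

Without the persistable grant, a saved URI string throws SecurityException on the next launch — a classic SAF bug.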
An e-commerce app's data layer needs offline capability, optimistic UI updates, and clear cache invalidation. The architecture: Room as the single source of truth for all displayed data, Retrofit to fetch from the network, and a Repository that orchestrates the cache-then-network pattern. The ViewModel only talks to the Repository.
// Entities -- Room tables
@Entity
data class ProductEntity(@PrimaryKey val id: String, val name: String, val price: Double)

@Entity
data class CartItemEntity(@PrimaryKey val id: String, val productId: String, val qty: Int)

// Repository -- cache-then-network pattern
class ProductRepository @Inject constructor(
    private val dao: ProductDao,
    private val api: ProductApi
) {
    // Always serve from Room, mapped to domain models
    val products: Flow<List<Product>> =
        dao.getAll().map { list -> list.map { it.toDomain() } }

    suspend fun refresh() {
        val fresh = api.getProducts()
        dao.upsertAll(fresh.map { it.toEntity() }) // upsert -- insert or update
    }
}
- Room as source of truth: ViewModel observes a Flow from Room -- data is always served from local cache, never directly from the network
- Refresh pattern: Repository.refresh() fetches from network and upserts into Room -- Flow updates automatically, ViewModel doesn't change
- Cart with transactions: add/remove cart items in a single Room transaction -- prevents partial updates that leave the cart in an inconsistent state
- Optimistic UI: write to Room immediately on user action, sync to network in background -- user sees instant feedback
- Cache invalidation: store a lastUpdated timestamp in Room -- if stale, trigger refresh in the background while showing cached data
"The guiding question for each piece of data: 'What happens if this is lost?' Auth token lost → user logs in again (EncryptedPrefs, acceptable). Cart lost → user loses their items (Room, persist). Product image lost → re-downloaded (cacheDir, disposable). Order history lost → serious problem (Room + backup, protect). Map the recovery cost to the storage choice."
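The optimistic-UI bullet can be sketched as write-local-first with rollback; the DAO and API method names here are assumptions.

```kotlin
import java.util.UUID

// Sketch: cart add feels instant because Room is written before the network call
suspend fun addToCart(productId: String) {
    val item = CartItemEntity(
        id = UUID.randomUUID().toString(), // client-generated ID — safe offline
        productId = productId,
        qty = 1
    )
    cartDao.upsert(item) // UI observing the Room Flow updates immediately

    runCatching { api.addToCart(item.productId, item.qty) }
        .onFailure {
            // Roll back the optimistic write (or mark PENDING for a retry queue)
            cartDao.delete(item)
        }
}
```

The alternative to rollback is marking the row PENDING and letting a background sync retry — better UX on flaky networks, at the cost of conflict handling.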
Room automatically runs on a background thread when you use suspend functions or return Flow. You don't need withContext(Dispatchers.IO) for Room calls — Room handles threading internally.
// Room threading rules:

// 1. Suspend DAO functions — Room switches to IO thread automatically
@Dao
interface UserDao {
    @Insert suspend fun insert(user: UserEntity)   // runs on Room's IO thread
    @Query(...) suspend fun getUser(id: String): UserEntity? // same
}

// You DON'T need this — Room handles it:
suspend fun getUser(id: String) = withContext(Dispatchers.IO) {
    dao.getUser(id) // redundant withContext — Room already does this
}

// ✅ Just call directly:
suspend fun getUser(id: String) = dao.getUser(id) // clean, same result

// 2. Flow DAO functions — Room emits on its IO thread
@Query(...)
fun observeUsers(): Flow<List<UserEntity>>
// NOT suspend — Room observes table changes on a background thread
// No withContext needed — Room handles it

// 3. allowMainThreadQueries() — ONLY in tests
Room.inMemoryDatabaseBuilder(...)
    .allowMainThreadQueries() // lets tests call DAO without coroutines
    .build()

// 4. When DO you need withContext(IO)?
// File I/O: reading/writing raw files
// Non-suspend library calls: blocking APIs
// NOT needed for: Room, Retrofit (suspend funs), DataStore

// Rule: Room suspend functions are safe to call from viewModelScope.launch {}
// without any Dispatcher specification — they don't run on Main thread
- Suspend DAO: Room automatically executes on its internal IO thread — no withContext needed
- Flow DAO: emissions come from a background thread — safe to collect on any scope
- allowMainThreadQueries: test-only escape hatch — without it, Room throws if a blocking query runs on the main thread
- withContext(IO) is redundant for Room: adding it doesn't hurt but adds noise
- Where you DO need Dispatchers.IO: raw File reads/writes, blocking third-party SDKs
"Common junior mistake: wrapping every Room call in withContext(Dispatchers.IO). Room 2.1+ handles this automatically for suspend functions. The pattern 'viewModelScope.launch { dao.getUser(id) }' is perfectly fine — Room switches threads internally. Only use withContext(IO) for raw file operations or truly blocking calls."
Search in Room uses LIKE queries with indexes for performance, and FTS (Full-Text Search) for multi-word search. FTS gives you tokenised search — searching for "android kotlin" finds records containing both words.
// Basic LIKE search — works but slow on large tables without index
@Query("SELECT * FROM products WHERE name LIKE '%' || :query || '%'")
fun search(query: String): Flow<List<ProductEntity>>
// ❌ '%query%' can't use an index — full table scan every time

// Add index for prefix search (query%) — fast
@Entity(indices = [Index(value = ["name"])])
data class ProductEntity(@PrimaryKey val id: String, val name: String)

@Query("SELECT * FROM products WHERE name LIKE :query || '%'")
// 'query%' uses the index — fast for prefix search

// FTS (Full Text Search) — best for multi-word search
@Entity(tableName = "products")
data class ProductEntity(@PrimaryKey val id: String, val name: String, val description: String)

@Fts4(contentEntity = ProductEntity::class) // creates FTS virtual table
@Entity(tableName = "products_fts")
data class ProductFts(val name: String, val description: String)

@Query("""
    SELECT products.* FROM products
    JOIN products_fts ON products.rowid = products_fts.rowid
    WHERE products_fts MATCH :query
""")
fun ftsSearch(query: String): Flow<List<ProductEntity>>

// With debounce for search-as-you-type
val searchResults = searchQuery
    .debounce(300)            // wait 300ms after last keystroke
    .distinctUntilChanged()   // don't search same query twice
    .flatMapLatest { query -> repo.search(query) }
- LIKE '%query%': full table scan — avoid for large tables, only use with prefix search
- Index: speeds up prefix search (query%) — doesn't help with contains search (%query%)
- FTS4/FTS5: tokenised full-text search — matches whole tokens and prefixes (query*) across multiple columns, without raw substring scans
- debounce(300ms): prevents DB query on every keystroke — only searches after typing pauses
- flatMapLatest: cancels previous search when a new query arrives — no stale results
"FTS is the answer for any real search feature. Regular LIKE with '%query%' is a full table scan — 100ms on 1000 rows, 10 seconds on 100,000 rows. FTS with @Fts4 uses a tokenised inverted index — milliseconds even on huge tables. Combined with debounce(300) + flatMapLatest, you get instant search that doesn't hammer the database."
A bloated database slows down queries and wastes device storage. Regular pruning, smart data modelling, and avoiding storing unnecessary data keeps Room lean and fast.
// 1. TTL-based cleanup — delete old cached data
@Query("DELETE FROM products WHERE cachedAt < :cutoff")
suspend fun deleteOlderThan(cutoff: Long)

// Schedule cleanup with WorkManager (daily, in background)
val cutoff = System.currentTimeMillis() - 7 * 24 * 60 * 60_000L // 7 days
dao.deleteOlderThan(cutoff)

// 2. Don't store binary blobs in Room
// ❌ Storing image bytes in a column bloats the database massively
@Entity data class ProductEntity(val image: ByteArray) // ❌
// ✅ Store the URL, let Coil/Glide cache the image file
@Entity data class ProductEntity(val imageUrl: String) // ✅

// 3. Limit list sizes — paginate instead of storing everything
@Query("SELECT * FROM products ORDER BY cachedAt DESC LIMIT :limit")
fun getRecent(limit: Int = 100): Flow<List<ProductEntity>>

// 4. VACUUM — defragment the database file
// After many deletes, SQLite file doesn't shrink — pages marked free
@Query("VACUUM")
suspend fun vacuum()
// Run after large batch deletions to reclaim disk space
// Expensive — run rarely (monthly), never on main thread

// 5. Use appropriate data types
// Store Long (8 bytes) not String for timestamps ("2024-01-15T10:30:00Z" = 22 bytes)
// Store Int status codes not String status names ("PENDING" = 7 bytes vs 1 byte)

// 6. Monitor database size
val dbFile = context.getDatabasePath("app.db")
Log.d("DB", "Size: ${dbFile.length() / 1024} KB")
- TTL cleanup: delete cached data older than N days — scheduled via WorkManager
- No blob storage: store image URLs, not bytes — Coil/Glide disk cache handles image files
- LIMIT queries: cap cache size — keep only the 100 most recent items, not all 10,000
- VACUUM: reclaims space after mass deletes — SQLite doesn't shrink the file automatically
- Right types: Long for timestamps, Int for status codes — saves bytes per row, adds up at scale
"The most common Room bloat cause: storing images. One product image as Base64 can be 100KB. A catalogue of 1000 products = 100MB database. Store imageUrl (30 bytes). Let Coil cache the actual image in the file cache. Your Room database should almost never contain binary data — that's what filesDir is for."
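The WorkManager scheduling mentioned in the TTL bullet could look like this sketch; the worker class, `AppDatabase.get`, and the unique-work name are illustrative.

```kotlin
import android.content.Context
import androidx.work.*
import java.util.concurrent.TimeUnit

// CoroutineWorker so the suspend DAO call runs naturally off the main thread
class CacheCleanupWorker(
    ctx: Context,
    params: WorkerParameters
) : CoroutineWorker(ctx, params) {

    override suspend fun doWork(): Result {
        val cutoff = System.currentTimeMillis() - 7 * 24 * 60 * 60_000L // 7 days
        AppDatabase.get(applicationContext).productDao().deleteOlderThan(cutoff)
        return Result.success()
    }
}

// Enqueue once (e.g. in Application.onCreate) — KEEP avoids re-scheduling on every launch
fun scheduleCleanup(context: Context) {
    val request = PeriodicWorkRequestBuilder<CacheCleanupWorker>(1, TimeUnit.DAYS)
        .setConstraints(
            Constraints.Builder().setRequiresDeviceIdle(true).build()
        )
        .build()
    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "cache-cleanup", ExistingPeriodicWorkPolicy.KEEP, request
    )
}
```

The idle constraint keeps the delete (and any follow-up VACUUM) away from moments the user is interacting with the database.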
A systematic storage code review catches the most expensive mistakes — data loss, security vulnerabilities, and performance issues — before they reach production.
// 1. ❌ Auth tokens in plain SharedPreferences
prefs.putString("token", accessToken) // ❌ readable on rooted device
// ✅ EncryptedSharedPreferences

// 2. ❌ Missing database migration
@Database(version = 2) // ❌ version bumped but no addMigrations() call
// ✅ .addMigrations(MIGRATION_1_2) + MigrationTestHelper test

// 3. ❌ DataStore accessed via multiple instances
val store1 = context.dataStore // in SettingsViewModel
val store2 = context.dataStore // in ThemeRepository — TWO instances? ❌ corruption risk
// ✅ @Singleton DataStore injected via Hilt

// 4. ❌ Room query on main thread
fun onCreate(...) {
    val user = db.userDao().getUserSync(id) // ❌ blocking main thread
}
// ✅ suspend DAO + coroutine

// 5. ❌ Storing images as ByteArray in Room
@Entity data class ProductEntity(val thumbnail: ByteArray) // ❌
// ✅ Store URL, let image loading library cache files

// 6. ❌ @Relation without @Transaction
@Query("SELECT * FROM users")
fun getUsersWithOrders(): Flow<List<UserWithOrders>> // ❌ missing @Transaction
// ✅ @Transaction @Query(...)

// 7. ❌ Sensitive DB not excluded from backup
// AndroidManifest: android:allowBackup="true" with no backup_rules.xml
// ✅ backup_rules.xml excluding secure_prefs and sensitive databases

// 8. ❌ No cache TTL — database grows forever
@Entity data class SearchHistoryEntity(val query: String) // no cachedAt ❌
// ✅ Add cachedAt: Long, schedule cleanup WorkManager
- Auth tokens in plain prefs: security vulnerability — always EncryptedSharedPreferences
- Missing migration: crashes on upgrade — the most common, most damaging storage bug
- Multiple DataStore instances: data corruption — enforce @Singleton via DI
- Room on main thread: ANR risk — all DAO calls must be suspend or return Flow
- ByteArray images in Room: database bloat — store URL, let Coil handle disk caching
"In storage code reviews I always check these in priority order: (1) auth tokens in plain prefs — security. (2) missing Room migrations — data loss. (3) multiple DataStore instances — corruption. (4) Room on main thread — ANR. The rest are performance issues. Security and data loss first, performance second."
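The ✅ fix for item 1, sketched with androidx.security (requires the security-crypto artifact; the prefs file name is illustrative):

```kotlin
import android.content.Context
import android.content.SharedPreferences
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

fun createSecurePrefs(context: Context): SharedPreferences {
    // Keystore-backed master key
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build()

    // Same SharedPreferences API — keys and values encrypted at rest
    return EncryptedSharedPreferences.create(
        context,
        "secure_prefs",
        masterKey,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )
}

// Usage — identical to plain prefs, but the XML on disk is ciphertext
// createSecurePrefs(context).edit().putString("token", accessToken).apply()
```

Because the return type is plain SharedPreferences, swapping it in requires no call-site changes — only the creation path differs.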
The Repository is the single access point to all data sources — it decides whether to read from Room, the network, or DataStore. The ViewModel never touches a DAO or API directly; it always goes through the Repository.
// Without Repository — ViewModel knows too much
class ProductViewModel @Inject constructor(
    private val dao: ProductDao, // ❌ ViewModel depends on DB layer
    private val api: ProductApi  // ❌ ViewModel depends on network layer
) : ViewModel()

// With Repository — clean separation
interface ProductRepository {
    fun observeProducts(): Flow<List<Product>>
    suspend fun refresh(): Result<Unit>
    suspend fun getProduct(id: String): Product?
}

class ProductRepositoryImpl @Inject constructor(
    private val dao: ProductDao,
    private val api: ProductApi
) : ProductRepository {

    // Room is the source of truth — UI always reads from here
    override fun observeProducts() =
        dao.observeAll().map { it.map { e -> e.toDomain() } }

    // Refresh fetches from API and writes to Room
    override suspend fun refresh() = runCatching {
        api.getProducts().also { dao.upsertAll(it.map { p -> p.toEntity() }) }
    }.map { }

    // Single-row read from Room (DAO method name illustrative)
    override suspend fun getProduct(id: String) = dao.getById(id)?.toDomain()
}

// ViewModel — only knows about the Repository interface
@HiltViewModel
class ProductViewModel @Inject constructor(
    private val repo: ProductRepository // ✅ depends on interface, not impl
) : ViewModel() {
    val products = repo.observeProducts()
        .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), emptyList())
}
- Repository hides data sources: ViewModel doesn't know if data comes from Room, network, or cache
- Interface for testability: inject FakeProductRepository in tests — no DB or network needed
- Single responsibility: Repository decides fetch strategy; ViewModel decides what to show
- Domain models only: Repository maps entities→domain before returning — ViewModel never sees DTOs or entities
- Clean Architecture: Repository sits at the data/domain boundary — depends inward only
"The test case for Repository: in a ViewModel unit test, I inject FakeProductRepository that returns hardcoded data. No Room, no network, no Hilt. The ViewModel test runs in milliseconds. This is only possible because the ViewModel depends on the interface, not the Room implementation."
@PrimaryKey marks the unique identifier for each row. Room supports auto-generated integer IDs, manual String UUIDs, and composite primary keys — each with different trade-offs for offline sync and performance.
// Option 1: Auto-generated Int — simple, local only
@Entity
data class NoteEntity(
    @PrimaryKey(autoGenerate = true) val id: Int = 0,
    val content: String
)
// ✅ Simple. ❌ IDs are local — clash if syncing with server.

// Option 2: UUID String — safe for sync
@Entity
data class ProductEntity(
    @PrimaryKey val id: String = UUID.randomUUID().toString(),
    val name: String
)
// ✅ Globally unique — safe to create offline and sync later
// ❌ Slightly larger storage, slower index than Int

// Option 3: Server-assigned ID — wait for server to confirm
@Entity
data class OrderEntity(
    @PrimaryKey val id: String,  // ID comes from server after creation
    val status: String
)
// ✅ Matches server ID exactly — no mapping needed
// ❌ Can't persist locally before server responds

// Option 4: Composite primary key — for relationship tables
@Entity(primaryKeys = ["userId", "productId"])
data class FavoriteEntity(
    val userId: String,
    val productId: String
)
// ✅ Enforces uniqueness of the pair — no separate ID needed

// Insert with autoGenerate — use 0 as placeholder, Room assigns real ID
val note = NoteEntity(id = 0, content = "Hello")  // 0 = "generate for me"
val newId = dao.insert(note)  // returns the assigned rowId
- autoGenerate=true: SQLite assigns sequential integers — simple but only safe for local-only data
- UUID String: globally unique, safe to create offline — best choice for synced data
- Server ID: use when the server controls identity — can't persist before API responds
- Composite key: for join/relationship tables — the combination of columns is the identifier
- insert() return value: returns the rowId of the inserted row — useful to retrieve the generated ID
"UUID vs autoGenerate: if your data ever syncs with a server, use UUID. AutoGenerate gives sequential IDs (1, 2, 3) — if user A and user B both create records offline and then sync, they'll have conflicting IDs 1 and 1. UUID.randomUUID() is statistically collision-proof globally."
@Embedded flattens a nested object's fields into the parent table — one table, multiple logical groups of columns. @Relation links two separate tables via a foreign key and runs a second query to fetch the related rows.
// @Embedded — nested object stored in SAME table
data class Address(val street: String, val city: String, val pinCode: String)

@Entity
data class UserEntity(
    @PrimaryKey val id: String,
    val name: String,
    @Embedded val address: Address  // street, city, pinCode as columns in users table
)
// DB columns: id | name | street | city | pinCode
// ✅ Single table, single query  ❌ Can't query across users by address easily

// Column prefix if same type embedded twice
@Embedded(prefix = "billing_") val billingAddress: Address
@Embedded(prefix = "shipping_") val shippingAddress: Address
// billing_street | billing_city | shipping_street | shipping_city

// @Relation — linked data in SEPARATE table (two queries)
data class UserWithOrders(
    @Embedded val user: UserEntity,
    @Relation(parentColumn = "id", entityColumn = "userId")
    val orders: List<OrderEntity>
)
// Query 1: SELECT * FROM users
// Query 2: SELECT * FROM orders WHERE userId IN (ids from query 1)
// ✅ Proper normalisation  ✅ Query orders independently  ❌ Two queries

// When to use each:
// @Embedded: value objects always stored/retrieved with parent (Address, LatLng, Price)
// @Relation: separate entities with own identity and lifecycle (User's Orders, Post's Comments)
- @Embedded: flattens fields into the parent table — one SELECT, one table, no foreign key
- Use @Embedded for value objects: Address, GeoPoint, Money — always travel with the parent
- @Relation: separate tables joined at query time — proper database normalisation
- prefix attribute: disambiguates columns when embedding the same type twice
- @Transaction required with @Relation: makes the two-query read atomic
"Rule of thumb: @Embedded for things that have no independent identity (an Address doesn't exist without a User), @Relation for things with their own lifecycle (an Order can exist and be queried independently of a User). The database normalisation test: would you ever query the embedded object alone? If yes, it should be @Relation."
runTest is the coroutine test builder from kotlinx-coroutines-test. It runs coroutines in a controlled test environment — skipping real delays via virtual time and scheduling coroutines deterministically, so async code can be verified as if it were synchronous.
// testImplementation("org.jetbrains.kotlinx:kotlinx-coroutines-test:1.9.0") @RunWith(AndroidJUnit4::class) class UserDaoTest { private lateinit var db: AppDatabase private lateinit var dao: UserDao @Before fun setUp() { db = Room.inMemoryDatabaseBuilder( ApplicationProvider. getApplicationContext (), AppDatabase::class.java ).build () dao = db.userDao () } @After fun tearDown() { db.close () } // runTest — wraps the test in a coroutine scope @Test fun insertAndRead() = runTest { val user = UserEntity("1", "Alice") dao.insert (user) // suspend — works inside runTest val result = dao.getUser("1") // suspend — works inside runTest assertEquals ("Alice", result?.name) } // Testing Flow with Turbine library // testImplementation("app.cash.turbine:turbine:1.1.0") @Test fun flowEmitsOnInsert() = runTest { dao.observeAll ().test { // .test {} is Turbine's Flow test builderawaitItem ().also {assertTrue (it.isEmpty ()) } // initial empty emission dao.insert (UserEntity("1", "Alice"))awaitItem ().also {assertEquals (1, it.size) } // emission after insertcancelAndIgnoreRemainingEvents () } } // Testing with delay — runTest skips real time @Test fun debounceSearch() = runTest {delay (1000) // virtual — no real 1 second wait // advanceTimeBy(500) / advanceUntilIdle() for precise control } }
- runTest: replaces runBlocking in tests — handles coroutines, virtual time, proper cleanup
- Virtual time: delay(1000) inside runTest completes instantly — no real waiting
- Turbine: the best Flow testing library — awaitItem(), awaitError(), cancelAndIgnoreRemainingEvents()
- advanceUntilIdle(): runs all pending coroutines to completion — useful for testing async operations
- advanceTimeBy(ms): fast-forward the virtual clock — test time-dependent logic without real sleeps
"Turbine is the missing piece for Flow testing. Without it, testing Room Flows requires complex coroutine gymnastics. With it: dao.observeAll().test { awaitItem(); dao.insert(user); assertEquals(1, awaitItem().size) }. Three lines to verify that inserting a row triggers a Flow emission. That's readable, maintainable test code."
Internal storage file I/O in modern Android uses standard Kotlin file APIs combined with coroutines. Always read/write on a background thread — file operations can block for hundreds of milliseconds.
// Writing a file — always on IO dispatcher
suspend fun saveJson(context: Context, filename: String, json: String) =
    withContext(Dispatchers.IO) {
        File(context.filesDir, filename).writeText(json)
    }

// Reading a file
suspend fun readJson(context: Context, filename: String): String? =
    withContext(Dispatchers.IO) {
        val file = File(context.filesDir, filename)
        if (file.exists()) file.readText() else null
    }

// Subdirectory — create if missing
suspend fun saveInvoice(context: Context, id: String, bytes: ByteArray) =
    withContext(Dispatchers.IO) {
        val dir = File(context.filesDir, "invoices").also { it.mkdirs() }
        File(dir, "$id.pdf").writeBytes(bytes)
    }

// Listing files
fun listInvoices(context: Context): List<String> =
    File(context.filesDir, "invoices")
        .listFiles()
        ?.map { it.name }
        ?: emptyList()

// Delete old files
suspend fun deleteOldInvoices(context: Context, cutoffMs: Long) =
    withContext(Dispatchers.IO) {
        File(context.filesDir, "invoices")
            .listFiles()
            ?.filter { it.lastModified() < cutoffMs }
            ?.forEach { it.delete() }
    }

// Cache files — use cacheDir for re-downloadable content
val cacheFile = File(context.cacheDir, "thumbnail_$id.jpg")
// OS may delete cacheDir contents when storage is low — always handle missing files
- filesDir: permanent internal storage — use for user documents, data that can't be re-fetched
- cacheDir: OS-managed temporary storage — for downloadable content that can be re-fetched
- withContext(Dispatchers.IO): always needed for file operations — unlike Room which handles its own threading
- mkdirs(): creates the full directory path — call before writing to a subdirectory
- exists() check: always verify before reading — missing files are normal, not exceptional
"The key difference from Room threading: Room's suspend functions switch threads internally. File I/O does NOT — you must wrap in withContext(Dispatchers.IO) yourself. Reading a 5MB file on the main thread can freeze the UI for 200-500ms. Always IO thread for file operations."
A database transaction groups multiple operations so they either all succeed or all fail together. Without @Transaction, a crash between two related writes leaves the database in a corrupt, inconsistent state.
// Problem without transaction:
suspend fun placeOrder(order: OrderEntity, items: List<OrderItemEntity>) {
    orderDao.insert(order)        // succeeds
    // 💥 app crashes here
    orderItemDao.insertAll(items) // never runs → order with no items in DB!
}

// With @Transaction — atomic: all or nothing
@Dao
abstract class OrderDao {
    @Insert abstract suspend fun insertOrder(order: OrderEntity)
    @Insert abstract suspend fun insertItems(items: List<OrderItemEntity>)

    @Transaction
    open suspend fun placeOrder(order: OrderEntity, items: List<OrderItemEntity>) {
        insertOrder(order)
        insertItems(items)
        // If insertItems fails → insertOrder is automatically rolled back
        // Database remains consistent — no orphaned orders
    }
}

// db.withTransaction {} — lower-level transaction control
suspend fun transferCredits(fromId: String, toId: String, amount: Int) {
    db.withTransaction {
        val from = dao.getUser(fromId) ?: throw Exception("User not found")
        val to = dao.getUser(toId) ?: throw Exception("User not found")
        if (from.credits < amount) throw Exception("Insufficient credits")
        dao.update(from.copy(credits = from.credits - amount))
        dao.update(to.copy(credits = to.credits + amount))
        // Both updates succeed or both are rolled back — credits never lost
    }
}

// @Transaction also needed for @Relation queries — makes multi-query read atomic
@Transaction
@Query("SELECT * FROM orders")
abstract fun observeOrdersWithItems(): Flow<List<OrderWithItems>>
- Atomicity: @Transaction guarantees all-or-nothing — partial writes never left in the database
- withTransaction: Room's coroutine-friendly API for manual transaction control
- Rollback on exception: any exception inside a transaction cancels all changes automatically
- @Transaction + @Relation: prevents reading inconsistent data during concurrent writes
- DAO with a method body: @Transaction methods that call other DAO methods need a concrete body — an abstract class (as above) or a Kotlin interface with a default implementation
"The classic interview scenario: 'User places an order — insert order row, insert order items, deduct inventory. What happens if the app crashes after inserting the order but before inserting items?' Without @Transaction: orphaned order with no items. With @Transaction: the order insertion is rolled back too. The database is always consistent."
LIKE does a sequential scan — it checks every row. FTS (Full-Text Search) uses a pre-built inverted index — like a book's index vs reading every page. FTS is orders of magnitude faster on large tables.
// LIKE — full table scan, O(n)
@Query("SELECT * FROM notes WHERE content LIKE '%' || :q || '%'")
fun searchLike(q: String): Flow<List<NoteEntity>>
// 100 rows: ~1ms | 10,000 rows: ~100ms | 1,000,000 rows: ~10 seconds
// ❌ '%query%' never uses an index — SQLite must read every row

// FTS4 — inverted index, near-constant lookup
@Entity(tableName = "notes")
data class NoteEntity(@PrimaryKey val id: String, val title: String, val content: String)

@Fts4(contentEntity = NoteEntity::class)
@Entity(tableName = "notes_fts")
data class NoteFts(val title: String, val content: String)

@Transaction
@Query("""SELECT notes.* FROM notes
          INNER JOIN notes_fts ON notes.rowid = notes_fts.rowid
          WHERE notes_fts MATCH :q""")
fun searchFts(q: String): Flow<List<NoteEntity>>
// 1,000,000 rows: still ~1ms (uses tokenised inverted index)

// FTS special syntax:
// "android kotlin"    → must contain BOTH words
// "android OR kotlin" → contains either
// "android*"          → prefix match (android, androidy, androidx)
// "title:android"     → search only in title column

// FTS5 — more features (Room supports @Fts4 and @Fts5)
@Fts5(contentEntity = NoteEntity::class)
@Entity(tableName = "notes_fts5")
data class NoteFts5(val title: String, val content: String)
// FTS5 adds relevance ranking via its 'rank' column — ORDER BY rank to sort by relevance
- LIKE '%query%': full table scan — fine under 1000 rows, painful beyond 10,000
- FTS index: tokenises text at insert time, builds inverted index — query time is near-constant
- contentEntity: links FTS to the real entity — Room keeps them in sync automatically
- MATCH operator: FTS query language — supports AND, OR, prefix wildcards (*), column filters
- FTS5 vs FTS4: FTS5 adds relevance ranking (ORDER BY rank) and better performance
"Use LIKE for simple prefix search on small tables (< 1000 rows). Use FTS for any real search feature. The FTS table is automatically kept in sync with the content entity — you insert into notes, Room updates notes_fts automatically. The extra setup is worth it: FTS search on a million notes takes the same time as LIKE on 10 notes."
SQLite itself supports multi-process access, but Room's per-process state (most importantly the invalidation tracker) breaks when two processes open the same database. Room provides enableMultiInstanceInvalidation as a partial solution — but it comes with significant trade-offs.
// Problem: Two processes, one SQLite file
// Process 1 (app): reads from Room, caches query results
// Process 2 (service): writes new data to SQLite
// Result: Process 1's Flow NEVER updates — its invalidation tracker is separate!

// Solution 1: enableMultiInstanceInvalidation (Room 2.3+)
val db = Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db")
    .enableMultiInstanceInvalidation()  // uses IPC to notify other processes of changes
    .build()
// ✅ Flow in Process 1 now invalidates when Process 2 writes
// ❌ Higher overhead — IPC calls on every write
// ❌ WAL mode must be disabled (WAL doesn't support multi-process)
// ❌ More complex debugging — race conditions across processes

val db = Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db")
    .enableMultiInstanceInvalidation()
    .setJournalMode(JournalMode.TRUNCATE)  // WAL not supported with multi-instance
    .build()

// Solution 2: Avoid multi-process entirely (preferred)
// - Keep the background service in the same process (no ':remote' in the Manifest)
// - Use a foreground service or WorkManager (same process)
// - Only use separate processes for true isolation (crash containment)

// ContentProvider bridge — for legitimate multi-process DB access
// - Wrap Room behind a ContentProvider
// - Process 2 queries the ContentProvider → ContentProvider reads Room → returns cursor
// - Oldest pattern, most compatible, but lots of boilerplate
- Room's invalidation tracker is per-process: a write from Process 2 never notifies Process 1's Flow
- enableMultiInstanceInvalidation: uses IPC to broadcast invalidation — partial fix with overhead
- WAL incompatible: multi-instance invalidation requires journal mode TRUNCATE, not WAL
- Best solution: keep services in the same process — no :remote in AndroidManifest
- ContentProvider: the traditional bridge for legitimate cross-process data access
"The interview insight: multi-process in Android is rare and should be intentional. If you're using android:process=':remote' for a service, ask yourself why. 99% of the time, a bound service or foreground service in the same process is correct. Multi-process is for crash isolation (camera, media codecs) — not for background tasks."
apply() writes asynchronously — it returns immediately and queues the write. commit() writes synchronously — it blocks until the write is flushed to disk. In modern Android you should use DataStore instead of both, but this distinction is still a common interview question.
// apply() — asynchronous, fire-and-forget
prefs.edit()
    .putString("username", "alice")
    .apply()
// ✅ Returns immediately — doesn't block
// ✅ Writes are queued and batched
// ❌ No confirmation — you can't know if the write succeeded
// ❌ If process killed immediately after apply(), write may be lost

// commit() — synchronous, blocking
val success = prefs.edit()
    .putString("username", "alice")
    .commit()  // blocks current thread until written to disk
// ✅ Returns Boolean — true if successful
// ❌ Blocks the thread — NEVER call on main thread (ANR risk)
// ❌ On main thread, even 5ms disk write can cause jank

// When to use which (for legacy code maintaining SharedPreferences):
// apply()  → almost always — async is fine for preferences
// commit() → if you MUST confirm the write before proceeding
//            (e.g., writing before the process intentionally terminates)
//            Always on a background thread if using commit()

// Modern answer: use DataStore instead
suspend fun saveUsername(name: String) {
    dataStore.edit { it[USERNAME_KEY] = name }
    // ✅ Async (suspend), ✅ Returns after write, ✅ Throws on failure
}
- apply(): asynchronous — queues the write, returns immediately, no success feedback
- commit(): synchronous — blocks thread until disk write completes, returns success boolean
- apply() on main thread: safe (non-blocking), but failures are silent — you never learn if the write was lost
- commit() on main thread: dangerous — disk I/O blocks UI thread, causes ANR on slow devices
- DataStore: the modern replacement — suspend function that doesn't block, throws on failure
"The correct answer for new code: use DataStore, not SharedPreferences. But for the interview question: apply() is always preferred over commit() because commit() on the main thread is one of the most common causes of ANR in Android apps. If you must verify the write, commit() on a background thread, or use DataStore's suspend edit {}."
Drafts need to survive process death, so SavedStateHandle alone isn't enough for large data. The complete solution combines Room (persistent storage) with SavedStateHandle (lightweight session state) and auto-save with debounce.
// Room entity for drafts
@Entity(tableName = "drafts")
data class DraftEntity(
    @PrimaryKey val id: String,
    val content: String,
    val lastModified: Long = System.currentTimeMillis()
)

// ViewModel — auto-saves with debounce
@HiltViewModel
class DraftViewModel @Inject constructor(
    private val dao: DraftDao,
    private val saved: SavedStateHandle
) : ViewModel() {
    // Write the generated ID back into SavedStateHandle so it survives process death
    private val draftId: String = saved.get<String>("draftId")
        ?: UUID.randomUUID().toString().also { saved["draftId"] = it }

    private val _content = MutableStateFlow("")
    val content: StateFlow<String> = _content

    init {
        // Restore draft from Room on launch
        viewModelScope.launch {
            dao.getDraft(draftId)?.let { _content.value = it.content }
        }
        // Auto-save with 500ms debounce — saves while typing
        viewModelScope.launch {
            _content
                .debounce(500)
                .distinctUntilChanged()
                .collect { text ->
                    if (text.isNotBlank()) dao.upsert(DraftEntity(draftId, text))
                }
        }
    }

    fun onTextChanged(text: String) { _content.value = text }

    fun submit() {
        viewModelScope.launch {
            // Submit logic...
            dao.delete(draftId)  // clean up after successful submit
        }
    }
}

// Compose UI — collect the StateFlow as state, push changes back
val text by vm.content.collectAsStateWithLifecycle()
TextField(value = text, onValueChange = vm::onTextChanged)
- Room for persistence: drafts survive process death and device restart (though not app uninstall)
- debounce(500ms): saves 500ms after typing stops — doesn't write on every keystroke
- draftId in SavedStateHandle: survives configuration changes and process death within session
- Restore on init: load draft from Room when ViewModel initialises — seamless continuation
- Delete on submit: clean up after successful submission — prevent stale draft confusion
"Google Docs, Gmail drafts, WhatsApp unsent messages — all use this pattern. The debounce is critical: without it you write to the database on every character typed, which is wasteful. 500ms debounce means at most one write per pause in typing. The user never loses more than 500ms of work."
Room can export a JSON snapshot of your database schema to a file. These schema files are version-controlled alongside your code and enable MigrationTestHelper to validate your migrations without running them on a real device.
// Enable schema export in build.gradle.kts
android {
    defaultConfig {
        javaCompileOptions {
            annotationProcessorOptions {
                arguments += mapOf("room.schemaLocation" to "$projectDir/schemas")
            }
        }
    }
}

// For KSP (modern, recommended):
ksp {
    arg("room.schemaLocation", "$projectDir/schemas")
}

// After build, Room generates: schemas/com.example.AppDatabase/1.json
// Content (simplified):
// {
//   "formatVersion": 1,
//   "database": {
//     "version": 1,
//     "entities": [{ "tableName": "users", "columns": [...] }]
//   }
// }

// Commit schemas to source control — git add schemas/
// Every version bump creates a new JSON file
// schemas/1.json, schemas/2.json, schemas/3.json

// Using schemas in migration tests
@RunWith(AndroidJUnit4::class)
class MigrationTest {
    @get:Rule
    val helper = MigrationTestHelper(
        InstrumentationRegistry.getInstrumentation(),
        AppDatabase::class.java
    )

    @Test fun migrate1To3() {
        helper.createDatabase("test.db", 1)  // creates v1 from schemas/1.json
        helper.runMigrationsAndValidate(     // runs migrations, validates against schemas/3.json
            "test.db", 3, true, MIGRATION_1_2, MIGRATION_2_3
        )
    }
}
- exportSchema = true (the @Database default) plus room.schemaLocation: Room generates a JSON file per database version — commit these to Git
- Version history: schemas/ folder shows exactly how the database evolved over time
- MigrationTestHelper depends on schemas: can't test migrations without exported schemas
- KSP configuration: use ksp { arg() } instead of annotationProcessorOptions for modern setups
- CI validation: schema export fails if schema changes without version bump — catches mistakes automatically
"Exported schemas are your database's changelog — commit them. When a colleague opens a PR changing the database, the diff in schemas/ instantly shows what changed: new column, renamed table, new index. Without schemas, you need to read through the entity classes carefully. With schemas, it's one JSON diff."
A read-through cache is transparent to callers — they just call getProduct(id) and get data. The repository internally checks Room first, fetches from the network only on a cache miss, and saves back to Room for next time.
// Read-through cache — callers don't know where data comes from
class ProductRepositoryImpl @Inject constructor(
    private val dao: ProductDao,
    private val api: ProductApi,
    @IoDispatcher private val io: CoroutineDispatcher
) {
    suspend fun getProduct(id: String): Product = withContext(io) {
        // Step 1: Check Room cache
        val cached = dao.getProduct(id)
        if (cached != null) {
            return@withContext cached.toDomain()  // Cache hit — return immediately
        }
        // Step 2: Cache miss — fetch from network
        val remote = api.getProduct(id)
        // Step 3: Save to Room for next time
        dao.insert(remote.toEntity())
        remote.toDomain()  // Return fresh data
    }

    // With staleness check — refresh if cache is old
    suspend fun getProductFresh(id: String, maxAgeMs: Long = 300_000): Product {
        val cached = dao.getProduct(id)
        val isStale = cached == null ||
            System.currentTimeMillis() - cached.cachedAt > maxAgeMs
        return if (!isStale) {
            cached!!.toDomain()
        } else {
            api.getProduct(id).also { dto ->
                dao.insert(dto.toEntity().copy(cachedAt = System.currentTimeMillis()))
            }.toDomain()
        }
    }
}

// ViewModel — single call, cache is transparent
fun loadProduct(id: String) {
    viewModelScope.launch {
        val product = repo.getProduct(id)  // doesn't know if from cache or network
        _state.value = UiState.Success(product)
    }
}
- Read-through: caller asks for data, cache is checked invisibly — single API surface
- Cache hit path: Room has data → return immediately, no network call
- Cache miss path: Room empty → fetch from API → save to Room → return data
- Staleness check: maxAge parameter — refresh automatically when cache is old
- cachedAt timestamp: stored with entity — enables TTL-based staleness decisions
"Read-through vs cache-aside: cache-aside means the caller manually checks cache then calls API. Read-through means the Repository handles the check — the caller just calls getProduct(id) always. Read-through is cleaner — the caching logic is encapsulated, callers are simpler, and you can change the cache strategy without touching any ViewModel."
@RawQuery lets you build a query string at runtime instead of at compile time. Useful for dynamic filters, sorting, and search combinations that can't be expressed with static @Query annotations.
// @Query — static, validated at compile time
@Query("SELECT * FROM products WHERE category = :cat ORDER BY price ASC")
fun getByCategory(cat: String): Flow<List<ProductEntity>>
// ❌ Can't dynamically change 'ORDER BY price' to 'ORDER BY name'

// @RawQuery — dynamic, built at runtime
@Dao
interface ProductDao {
    @RawQuery(observedEntities = [ProductEntity::class])  // needed for Flow support
    fun rawQuery(query: SupportSQLiteQuery): Flow<List<ProductEntity>>
}

// Build dynamic query safely (prevents SQL injection)
fun buildProductQuery(
    category: String? = null,
    sortBy: String = "name",
    ascending: Boolean = true
): SupportSQLiteQuery {
    val args = mutableListOf<Any>()
    var sql = "SELECT * FROM products"
    category?.let { sql += " WHERE category = ?"; args.add(it) }
    val col = if (sortBy in listOf("name", "price")) sortBy else "name"  // whitelist!
    val order = if (ascending) "ASC" else "DESC"
    sql += " ORDER BY $col $order"
    return SimpleSQLiteQuery(sql, args.toTypedArray())
}

// Usage in Repository
fun observeProducts(category: String?, sort: String, asc: Boolean) =
    dao.rawQuery(buildProductQuery(category, sort, asc))

// ⚠️ Security: NEVER interpolate user input directly into the SQL string
// ✅ Use ? placeholders for values
// ✅ Whitelist column names before using in ORDER BY (can't use ? for column names)
- @RawQuery: accepts SupportSQLiteQuery at runtime — full SQL flexibility
- observedEntities: required for Flow support — tells Room which table to watch for changes
- SimpleSQLiteQuery: wraps SQL string + args array — Room handles parameterisation
- Whitelist column names: SQL injection risk for ORDER BY — validate against an allowed list
- Use sparingly: @RawQuery loses compile-time validation — prefer @Query where possible
"@RawQuery is the escape hatch for when @Query isn't flexible enough. The SQL injection risk: column names can't be parameterised with ?, so always whitelist them. 'sortBy in listOf(name, price)' prevents 'name; DROP TABLE products; --' being injected as a sort column. Values are always safe via ? parameters."
Room performance issues usually come from missing indexes, loading too much data at once, or doing heavy work on the main thread. The fix is a combination of indexing, pagination, and profiling.
// Step 1: Diagnose — enable query logging
val db = Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db")
    .setQueryCallback(RoomDatabase.QueryCallback { sql, args ->
        Log.d("RoomQuery", "SQL: $sql, Args: $args")
    }, Executors.newSingleThreadExecutor())
    .build()

// Step 2: Check for missing indexes — EXPLAIN QUERY PLAN
// Run in DB Browser for SQLite or adb shell:
// EXPLAIN QUERY PLAN SELECT * FROM products WHERE category = 'shoes' ORDER BY price
// If output shows "SCAN TABLE products" → full scan, needs index
// If output shows "SEARCH TABLE products USING INDEX" → good

// Step 3: Add indexes for frequently queried columns
@Entity(indices = [
    Index(value = ["category"]),          // WHERE category = ?
    Index(value = ["category", "price"]), // WHERE category = ? ORDER BY price
    Index(value = ["userId"])             // WHERE userId = ? (foreign key)
])
data class ProductEntity(...)

// Step 4: Paginate — never load 50,000 rows at once
@Query("SELECT * FROM products ORDER BY name LIMIT :pageSize OFFSET :offset")
suspend fun getPage(pageSize: Int, offset: Int): List<ProductEntity>
// Or use @PagingSource — let Paging 3 manage pagination

// Step 5: Select only needed columns
data class ProductSummary(val id: String, val name: String, val price: Double)

@Query("SELECT id, name, price FROM products ORDER BY name")
fun observeSummaries(): Flow<List<ProductSummary>>
// Don't load 50 columns for a list that shows 3
- setQueryCallback: logs every SQL query with timing — find slow queries in development
- EXPLAIN QUERY PLAN: SQLite built-in — shows if a query uses an index or does a full scan
- Indexes for WHERE + ORDER BY: add compound index matching the query's filter + sort
- Pagination: never load all 50,000 rows — load 20-50 at a time with Paging 3
- Projection (SELECT specific columns): only fetch columns you display — avoids transferring unused data
"The performance checklist for Room: (1) EXPLAIN QUERY PLAN to find full scans, (2) add compound index matching WHERE + ORDER BY columns, (3) use Paging 3 instead of loading all rows, (4) select only needed columns for list screens. In practice, missing indexes cause 90% of Room performance issues on large datasets."
AutoMigration (Room 2.4+) generates migration SQL automatically for simple schema changes — adding columns, adding tables, renaming with @RenameColumn. Complex changes like splitting a table or changing column types still require manual migrations.
// AutoMigration — Room generates the SQL for you
@Database(
    entities = [UserEntity::class],
    version = 3,
    autoMigrations = [
        AutoMigration(from = 1, to = 2),  // simple add column
        AutoMigration(from = 2, to = 3, spec = AppDatabase.Migration2to3::class)
    ]
)
abstract class AppDatabase : RoomDatabase() {
    // Spec needed when Room can't infer intent (rename vs delete+add)
    @RenameColumn(tableName = "users", fromColumnName = "user_name", toColumnName = "name")
    class Migration2to3 : AutoMigrationSpec
}

// ✅ AutoMigration handles:
// - Adding a new column with default value
// - Adding a new table
// - Renaming a column (@RenameColumn spec)
// - Renaming a table (@RenameTable spec)
// - Deleting a column (@DeleteColumn spec)

// ❌ Manual Migration required for:
// - Changing a column's type (TEXT → INTEGER)
// - Splitting one table into two
// - Merging two tables into one
// - Complex data transformations during migration
// - Adding a NOT NULL column without a default value

// Manual Migration for complex changes
val MIGRATION_3_4 = object : Migration(3, 4) {
    override fun migrate(db: SupportSQLiteDatabase) {
        // Complex: split users into users + user_profiles
        db.execSQL("CREATE TABLE user_profiles AS SELECT id, bio, avatar FROM users")
        // Note: DROP COLUMN needs a recent SQLite (3.35+) — on older API levels,
        // recreate the table without the column and copy the data instead
        db.execSQL("ALTER TABLE users DROP COLUMN bio")
    }
}
- AutoMigration: zero-code migrations for simple schema changes — Room generates SQL from schema diffs
- AutoMigrationSpec: required when Room can't distinguish rename from delete+add — be explicit
- Requires exportSchema = true (plus a configured schema export directory): AutoMigration reads the previous version's schema JSON to compute the diff
- Column type changes: SQLite ALTER TABLE can't change types — need CREATE TABLE + INSERT + DROP
- Mix and match: use AutoMigration for simple, manual Migration for complex — they coexist
"AutoMigration is the answer to 'I added a new column, do I need to write a migration?' — yes you bump the version, but no you don't need to write SQL. Room reads the old schema JSON, sees the new column, and generates 'ALTER TABLE users ADD COLUMN phone TEXT' automatically. The catch: it only works if you've been exporting schemas from the start."
Offline-first favourites need optimistic UI (instant heart toggle), local persistence (Room), and background sync (WorkManager). The user sees instant feedback — the sync is invisible.
// Room entity
@Entity(
    tableName = "favourites",
    foreignKeys = [ForeignKey(
        entity = ProductEntity::class,
        parentColumns = ["id"],
        childColumns = ["productId"],
        onDelete = ForeignKey.CASCADE
    )]
)
data class FavouriteEntity(
    @PrimaryKey val productId: String,
    val syncStatus: String = "PENDING" // PENDING | SYNCED
)

// Repository — optimistic toggle
suspend fun toggleFavourite(productId: String) {
    val exists = dao.isFavourite(productId)
    if (exists) {
        dao.delete(productId)                  // immediate local delete
    } else {
        dao.insert(FavouriteEntity(productId)) // immediate local insert
    }
    scheduleSync() // queue background sync
}

fun observeIsFavourite(productId: String): Flow<Boolean> =
    dao.observeIsFavourite(productId)

fun scheduleSync() {
    WorkManager.getInstance(context).enqueueUniqueWork(
        "fav-sync",
        ExistingWorkPolicy.REPLACE, // debounce rapid toggles
        OneTimeWorkRequestBuilder<FavSyncWorker>()
            .setConstraints(
                Constraints.Builder()
                    .setRequiredNetworkType(NetworkType.CONNECTED)
                    .build()
            )
            .setInitialDelay(2, TimeUnit.SECONDS) // wait for more toggles
            .build()
    )
}

// ViewModel — heart icon reacts instantly to Room Flow
val isFav = repo.observeIsFavourite(productId)
    .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), false)

fun onHeartTap() {
    viewModelScope.launch { repo.toggleFavourite(productId) }
}
- Optimistic UI: write to Room first, sync later — heart toggles instantly with no network wait
- ExistingWorkPolicy.REPLACE: rapid heart taps cancel the previous sync work — debounce effect
- setInitialDelay: waits 2s before syncing — batches rapid toggles into one network call
- Flow heart state: observeIsFavourite() makes the heart icon react to Room changes automatically
- Cascade delete: when a product is deleted from Room, its favourite is automatically removed
"ExistingWorkPolicy.REPLACE with setInitialDelay is the debounce pattern for WorkManager. If the user taps the heart 5 times in 2 seconds, only one sync request is sent. Without this, you'd fire 5 separate API calls and potentially get a race condition between add and remove."
RemoteMediator bridges your network API and Room. When Paging 3 runs out of data in Room, RemoteMediator fetches the next page from the API and saves it to Room — Paging then reads from Room seamlessly.
// RemoteMediator — fetches from API, writes to Room
@OptIn(ExperimentalPagingApi::class)
class ProductRemoteMediator @Inject constructor(
    private val api: ProductApi,
    private val db: AppDatabase
) : RemoteMediator<Int, ProductEntity>() {

    override suspend fun load(
        loadType: LoadType,
        state: PagingState<Int, ProductEntity>
    ): MediatorResult {
        val page = when (loadType) {
            LoadType.REFRESH -> 1 // start from beginning
            LoadType.PREPEND -> return MediatorResult.Success(endOfPaginationReached = true)
            LoadType.APPEND -> {
                val lastItem = state.lastItemOrNull()
                    ?: return MediatorResult.Success(endOfPaginationReached = true)
                // Calculate next page from last loaded item
                db.remoteKeyDao().getPage(lastItem.id) + 1
            }
        }
        return try {
            val response = api.getProducts(page = page, pageSize = state.config.pageSize)
            db.withTransaction {
                if (loadType == LoadType.REFRESH) db.productDao().clearAll()
                db.productDao().insertAll(response.items.map { it.toEntity() })
            }
            MediatorResult.Success(endOfPaginationReached = response.items.isEmpty())
        } catch (e: Exception) {
            MediatorResult.Error(e)
        }
    }
}

// Wire up with Pager
val products = Pager(
    config = PagingConfig(pageSize = 20),
    remoteMediator = productRemoteMediator,
    pagingSourceFactory = { db.productDao().paginate() } // always reads from Room
).flow.cachedIn(viewModelScope)
- RemoteMediator: triggered by Paging 3 when Room runs out of data — fetches next page from API
- LoadType.REFRESH: pull-to-refresh — clear Room and reload from page 1
- LoadType.APPEND: load more — fetch next page, append to Room
- db.withTransaction: clear + insert atomically on refresh — no partial states
- Room always the source: pagingSourceFactory reads from Room — RemoteMediator feeds Room
"RemoteMediator gives you the complete offline-first paging experience: first app launch fetches from API → writes to Room. Subsequent launches read from Room instantly. Scroll to the bottom → RemoteMediator fetches the next page. Pull to refresh → clears Room + refetches. The UI only ever observes Room — it never calls the API directly."
Room maps Kotlin nullable types to SQLite NULL values. A String? column can be NULL in the database; String cannot. This aligns with Kotlin's null safety — Room enforces the contract at the database boundary.
// Nullable columns — SQLite NULL maps to Kotlin null
@Entity(tableName = "users")
data class UserEntity(
    @PrimaryKey val id: String,
    val name: String,          // NOT NULL in SQLite — Room enforces this
    val phone: String? = null, // NULL allowed — optional field
    val avatar: String? = null // NULL allowed — user may not have avatar
)

// DAO queries with nullable results
@Dao
interface UserDao {
    // Return type nullable — row may not exist
    @Query("SELECT * FROM users WHERE id = :id")
    suspend fun getUser(id: String): UserEntity? // null if not found

    // Nullable column in query
    @Query("SELECT * FROM users WHERE phone IS NOT NULL")
    fun observeUsersWithPhone(): Flow<List<UserEntity>>

    // COALESCE — provide default for null in query
    @Query("SELECT id, COALESCE(phone, 'N/A') AS phone FROM users")
    suspend fun getUsersWithDefaultPhone(): List<UserProjection> // projection class: id + phone
}

// Using nullable results safely in Kotlin
suspend fun loadUser(id: String) {
    val user = dao.getUser(id) ?: return   // Elvis operator — handle null
    // user is UserEntity (non-null) here
    val phone = user.phone ?: "No phone"   // nullable column handled safely
}

// Kotlin default values in entity ≠ nullable:
// val count: Int = 0      — stored as 0 in SQLite, NOT NULL with default 0
// val count: Int? = null  — stored as NULL in SQLite
- String vs String?: Room maps Kotlin nullability directly to SQLite NOT NULL / NULL columns
- Return type nullable: getUser() returns UserEntity? — null means the row doesn't exist
- IS NOT NULL in SQL: filter out null column values in queries
- COALESCE: SQL function to provide a default when a column is NULL
- Default values ≠ nullable: val count: Int = 0 stores 0, not NULL — different SQLite semantics
"The Room nullability contract is clean: if your Kotlin type is non-nullable (String), Room enforces NOT NULL in the schema — inserting null throws an exception at runtime. If it's nullable (String?), Room allows NULL. This means you can trust Room's query results to match your Kotlin types exactly — no surprise nulls from the database."
Notification history is a classic append-heavy workload. Design it with efficient indexes for the most common queries (unread count, recent list), and automatic cleanup to prevent unbounded growth.
@Entity(
    tableName = "notifications",
    indices = [
        Index(value = ["isRead"]),     // fast unread count query
        Index(value = ["receivedAt"]), // fast ORDER BY receivedAt
        Index(value = ["type"])        // fast filter by type
    ]
)
data class NotificationEntity(
    @PrimaryKey val id: String,
    val title: String,
    val body: String,
    val type: String, // "ORDER_UPDATE" | "PROMO" | "CHAT"
    val isRead: Boolean = false,
    val receivedAt: Long = System.currentTimeMillis(),
    val deepLink: String? = null
)

@Dao
interface NotificationDao {
    @Insert(onConflict = OnConflictStrategy.IGNORE) // idempotent — FCM may deliver twice
    suspend fun insert(n: NotificationEntity)

    @Query("SELECT * FROM notifications ORDER BY receivedAt DESC LIMIT 50")
    fun observeRecent(): Flow<List<NotificationEntity>>

    @Query("SELECT COUNT(*) FROM notifications WHERE isRead = 0")
    fun observeUnreadCount(): Flow<Int> // drives badge on app icon

    @Query("UPDATE notifications SET isRead = 1 WHERE id = :id")
    suspend fun markRead(id: String)

    @Query("UPDATE notifications SET isRead = 1")
    suspend fun markAllRead()

    // Keep only last 100 notifications — cleanup old ones
    @Query("""DELETE FROM notifications WHERE id NOT IN
              (SELECT id FROM notifications ORDER BY receivedAt DESC LIMIT 100)""")
    suspend fun pruneOld()
}

// Schedule daily cleanup with WorkManager
val prune = PeriodicWorkRequestBuilder<NotificationPruneWorker>(1, TimeUnit.DAYS).build()
- OnConflictStrategy.IGNORE: FCM can deliver the same notification twice — IGNORE makes inserts idempotent
- LIMIT 50: cap the list query — don't load all notifications ever received
- observeUnreadCount: Flow<Int> drives the notification badge reactively
- Prune query: keeps only the 100 most recent — prevents unlimited growth without user action
- Indexes on isRead and receivedAt: both are queried frequently — index them for fast counts and sorts
"OnConflictStrategy.IGNORE on notification insert is the idempotency fix. FCM guarantees at-least-once delivery — the same notification may arrive twice. With IGNORE, the second insert is silently dropped. Without it, you'd show the same notification twice in the list. Use the notification ID from FCM as the @PrimaryKey."
Production storage profiling uses Android Studio's Database Inspector for Room, StrictMode for main-thread I/O, and Firebase Performance Monitoring for real-world timing data across all users.
// TOOL 1: Room Query Callback — log slow queries in development
Room.databaseBuilder(...)
    .setQueryCallback({ sql, _ -> Log.d("SlowQuery", sql) },
        Executors.newSingleThreadExecutor())
    .build()

// TOOL 2: Android Studio Database Inspector
// View → Tool Windows → App Inspection → Database Inspector
// Run queries live on device/emulator, see table contents
// Track query execution time per query

// TOOL 3: StrictMode — catch main-thread disk access
StrictMode.setThreadPolicy(
    StrictMode.ThreadPolicy.Builder()
        .detectDiskReads().detectDiskWrites()
        .penaltyLog() // log instead of crash in production profiling
        .build()
)
// Prints stack trace whenever disk I/O happens on main thread

// TOOL 4: Firebase Performance — real-world timing
suspend fun searchProducts(query: String): List<Product> {
    val trace = Firebase.performance.newTrace("room_product_search")
    trace.start()
    val result = try {
        dao.search(query)
    } finally {
        trace.stop() // finally ensures the trace stops even if the query throws
        // captured in Firebase dashboard — p50, p95, p99 timing
    }
    return result.map { it.toDomain() }
}

// COMMON FIXES after profiling:
// 1. Slow SELECT   → add index (EXPLAIN QUERY PLAN shows full scan)
// 2. Slow INSERT   → batch inserts (insertAll vs 1000 individual inserts)
// 3. Main thread   → move to withContext(IO) or fix Room threading
// 4. Large results → paginate (LIMIT/OFFSET or Paging 3)
// 5. Bloated DB    → VACUUM after deletes, prune old data
// 6. Too many cols → project only needed columns in SELECT
- Database Inspector: real-time Room profiling in Android Studio — see query times and table data
- setQueryCallback: logs every SQL query — find which queries are slow in development
- StrictMode: surfaces main-thread disk access — penaltyLog for profiling, penaltyDeath for dev
- Firebase Performance Traces: real-world timing from all production users — p50/p95/p99 percentiles
- Fix order: indexes first (biggest impact), then pagination, then batching, then projection
"Production storage profiling is different from development profiling. In dev I use Database Inspector and StrictMode. In production I use Firebase Performance custom traces around slow DAO calls — this shows me that 5% of users (p95) experience 800ms for a search query that takes 50ms for the median user. Those outliers likely have large databases and missing indexes."
Migrating from SharedPreferences to DataStore requires reading existing prefs values and writing them to DataStore once — then permanently switching to DataStore. The SharedPreferencesMigration API handles this automatically.
// DataStore provides a built-in SharedPreferences migration helper.
// It reads from SharedPreferences on first DataStore access,
// then deletes the SharedPreferences file after successful migration.
val Context.dataStore: DataStore<Preferences> by preferencesDataStore(
    name = "settings",
    produceMigrations = { context ->
        listOf(
            SharedPreferencesMigration(
                context,
                sharedPreferencesName = "app_settings" // old prefs file name
            )
        )
    }
)

// What happens on first DataStore access:
// 1. DataStore checks if migration is needed
// 2. Reads ALL values from SharedPreferences "app_settings"
// 3. Writes them to DataStore with the same keys
// 4. Deletes the original SharedPreferences file
// 5. Subsequent accesses use DataStore only

// Migrate specific keys only (exclude sensitive data)
SharedPreferencesMigration(
    context,
    sharedPreferencesName = "app_settings",
    keysToMigrate = setOf("theme", "language", "font_size")
    // "auth_token" NOT in the list — stays in SharedPreferences or migrate separately
)

// Custom key mapping — rename keys during migration
SharedPreferencesMigration(context, "app_settings") {
        prefs: SharedPreferencesView, current: Preferences ->
    val mutable = current.toMutablePreferences()
    if (prefs.contains("dark_mode")) {
        // Old key was "dark_mode" — new key is "theme"
        val isDark = prefs.getBoolean("dark_mode", false)
        mutable[stringPreferencesKey("theme")] = if (isDark) "dark" else "light"
    }
    mutable
}
- SharedPreferencesMigration: reads old prefs on first DataStore access, migrates all keys, deletes old file
- One-time migration: runs exactly once — subsequent launches use DataStore directly
- keysToMigrate: selectively migrate — leave sensitive data (auth tokens) where it is
- Custom migration lambda: map old key names to new ones, transform values during migration
- Atomic: if migration fails, it retries next time — data is never lost mid-migration
"SharedPreferencesMigration is the answer to 'how do I migrate without a big bang?' It's completely transparent to users — they upgrade the app, first DataStore access triggers migration silently, old SharedPreferences file is deleted. No data lost, no user action required. The migration is idempotent — safe to ship even if some users already have partial migrations."
A shopping cart needs local persistence (works offline), atomic quantity updates (no race conditions), and reliable server sync. Room @Transaction + coroutines handles all three.
@Entity(tableName = "cart_items")
data class CartItemEntity(
    @PrimaryKey val productId: String,
    val quantity: Int,
    val price: Double,
    val name: String,
    val isDirty: Boolean = true // true = needs sync with server
)

@Dao
abstract class CartDao {
    @Query("SELECT * FROM cart_items")
    abstract fun observeCart(): Flow<List<CartItemEntity>>

    @Query("SELECT SUM(quantity * price) FROM cart_items")
    abstract fun observeTotal(): Flow<Double?> // SUM is NULL when the cart is empty

    @Insert(onConflict = OnConflictStrategy.REPLACE)
    abstract suspend fun upsert(item: CartItemEntity)

    @Query("UPDATE cart_items SET quantity = quantity + :delta, isDirty = 1 WHERE productId = :id")
    abstract suspend fun adjustQuantity(id: String, delta: Int) // atomic increment!

    @Query("DELETE FROM cart_items WHERE productId = :id")
    abstract suspend fun remove(id: String)

    // Atomic add: insert if absent, increment if present
    @Transaction
    open suspend fun addToCart(item: CartItemEntity) {
        val existing = getItem(item.productId)
        if (existing != null) {
            adjustQuantity(item.productId, 1)
        } else {
            upsert(item.copy(quantity = 1))
        }
    }

    @Query("SELECT * FROM cart_items WHERE productId = :id")
    abstract suspend fun getItem(id: String): CartItemEntity?

    @Query("UPDATE cart_items SET isDirty = 0")
    abstract suspend fun markSynced()
}
- quantity = quantity + delta: atomic SQL increment — prevents lost updates from concurrent taps
- isDirty flag: marks rows needing server sync — sync worker processes dirty items only
- @Transaction addToCart: check-then-insert/update is atomic — no duplicate items from rapid taps
- observeTotal() Flow: real-time cart total with SQLite SUM — updates on every quantity change
- OnConflictStrategy.REPLACE: ensures upsert semantics — safe to call multiple times
"The atomic quantity increment is the key insight: 'quantity = quantity + 1' in a single UPDATE is atomic in SQLite. The alternative — read quantity, increment in Kotlin, write back — has a race condition if two coroutines tap simultaneously. The SQL atomic update is always correct; the read-modify-write pattern is only correct with careful locking."
Room 2.7+ supports Kotlin Multiplatform — the same Room code can run on Android, iOS, and Desktop. This means your entire data layer can be shared across platforms, eliminating duplicate database code.
// Room 2.7+ with KMP — shared commonMain code
// build.gradle.kts (shared module):
// kotlin { sourceSets { commonMain.dependencies {
//     implementation("androidx.room:room-runtime:2.7.0")
// } } }

// Entity — in commonMain (shared across platforms)
@Entity(tableName = "products")
data class ProductEntity(
    @PrimaryKey val id: String,
    val name: String,
    val price: Double
)

// DAO — in commonMain
@Dao
interface ProductDao {
    @Query("SELECT * FROM products")
    fun observeAll(): Flow<List<ProductEntity>>

    @Upsert
    suspend fun upsertAll(products: List<ProductEntity>)
}

// Database — in commonMain
@Database(entities = [ProductEntity::class], version = 1)
abstract class AppDatabase : RoomDatabase() {
    abstract fun productDao(): ProductDao
}

// Platform-specific builder — in androidMain / iosMain

// androidMain:
fun createDatabase(context: Context) =
    Room.databaseBuilder(context, AppDatabase::class.java, "app.db").build()

// iosMain (needs a KMP SQLite driver, e.g. BundledSQLiteDriver):
fun createDatabase() =
    Room.databaseBuilder<AppDatabase>(
        name = NSHomeDirectory() + "/app.db",
        factory = { AppDatabase::class.instantiateImpl() }
    )
        .setDriver(BundledSQLiteDriver())
        .build()

// Same DAO used by both Android and iOS ViewModels
// No duplicate database code across platforms
- Room 2.7+ KMP: Entity, DAO, and Database in commonMain — shared across Android, iOS, Desktop
- Platform builders: createDatabase() is platform-specific — Android uses Context, iOS uses file path
- SQLite driver: Room KMP builds on androidx.sqlite drivers — Android can use platform SQLite, while iOS/Desktop use BundledSQLiteDriver (SQLite compiled into the app) — same DAO API everywhere
- Flow works cross-platform: KMP coroutines support Flow on all platforms
- Trade-off: Room KMP is newer — SQLDelight is more mature for KMP database needs
"Room KMP is a 2025 development — most production KMP projects still use SQLDelight for the database layer because it's been KMP-native for years. Room KMP is the right choice if your team knows Room well and wants to share the data layer without learning SQLDelight. For greenfield KMP: evaluate both; SQLDelight has a larger KMP production track record."
Room database corruption is rare but happens when the SQLite file is incomplete — usually from a process kill during a write. Detecting corruption, recovering gracefully, and preventing data loss requires a deliberate strategy.
// Common causes of corruption:
// 1. Process killed during a write that isn't WAL-protected
// 2. Low disk space during write
// 3. App sharing a DB file written by multiple processes
// 4. Manual file manipulation (backup/restore gone wrong)

// Detection: Room throws SQLiteDatabaseCorruptException
val db = Room.databaseBuilder(ctx, AppDatabase::class.java, "app.db").build()

// Catch in repository
suspend fun getProducts() = try {
    dao.getAll()
} catch (e: SQLiteDatabaseCorruptException) {
    handleCorruption(e)
    emptyList()
}

fun handleCorruption(e: SQLiteDatabaseCorruptException) {
    // Log to crash reporting (Crashlytics)
    FirebaseCrashlytics.getInstance().recordException(e)

    // Recovery option 1: delete and recreate (data loss)
    context.deleteDatabase("app.db") // app will recreate on next DB access
    // ❌ All local data lost — acceptable for cache-only data

    // Recovery option 2: restore from backup (if you maintain one)
    val backup = File(context.filesDir, "app.db.backup")
    if (backup.exists()) {
        context.getDatabasePath("app.db").delete()
        backup.copyTo(context.getDatabasePath("app.db"))
    }
}

// Prevention: WAL mode (default in Room 2.2+) greatly reduces corruption risk
// WAL checkpoints are atomic — partial writes are rolled back automatically

// Optional: periodic backup before risky operations
suspend fun createBackup() = withContext(Dispatchers.IO) {
    db.close() // flush WAL before backup
    context.getDatabasePath("app.db")
        .copyTo(File(context.filesDir, "app.db.backup"), overwrite = true)
}
- SQLiteDatabaseCorruptException: catch this specifically — regular IOException needs different handling
- WAL prevents most corruption: atomic checkpointing means partial writes are rolled back
- Crash reporting: always log corruption events — monitor frequency in production
- Delete and recreate: for cache-only data (can re-fetch from server) — simplest recovery
- Backup before risky ops: close DB first (flush WAL) then copy — never copy an open WAL database
"WAL mode (Room's default) makes corruption extremely rare — partial writes are journalled and rolled back atomically. If corruption does happen, check if the data is re-fetchable from your server. If yes: delete and recreate, re-sync from API. If the data is user-generated and not on the server: you need a backup strategy. Log all corruption events to Crashlytics — a spike means something is wrong with the write path."
A note-taking app is the canonical data storage design challenge — it touches every storage mechanism and requires careful decisions about what to persist where, how to sync, and what to encrypt.
// ROOM — main data store
@Entity(tableName = "notes", indices = [Index("updatedAt"), Index("isPinned")])
data class NoteEntity(
    @PrimaryKey val id: String = UUID.randomUUID().toString(),
    val title: String,
    val content: String,
    val isPinned: Boolean = false,
    val isEncrypted: Boolean = false,
    val syncStatus: SyncStatus = SyncStatus.PENDING,
    val createdAt: Long = System.currentTimeMillis(),
    val updatedAt: Long = System.currentTimeMillis()
)

// FTS for search
@Fts4(contentEntity = NoteEntity::class)
@Entity(tableName = "notes_fts")
data class NoteFts(val title: String, val content: String)

// DATASTORE — user preferences
// - sort order (MODIFIED_DATE | CREATED_DATE | TITLE)
// - default view (list | grid)
// - auto-lock timeout
// - sync frequency preference

// ENCRYPTED STORAGE — for locked notes
// - content of isEncrypted notes stored AES-256 encrypted
// - encryption key in AndroidKeyStore, unlocked by biometric
// - EncryptedFile for export/backup of encrypted notes

// SYNC DESIGN:
// - Write: Room first → schedule WorkManager sync (requiresNetwork)
// - Read: Room always → background sync refreshes when stale
// - Conflict: last-write-wins on updatedAt timestamp
// - Deleted: soft delete (deletedAt timestamp) → server hard-deletes after 30 days

// BACKUP:
// - backup_rules.xml: include "notes.db", exclude "secure_prefs"
// - Encrypted notes: backed up encrypted — key stays on device (per-device encryption)
// - noBackupFilesDir: for encryption key cache
- UUID primary key: safe for offline creation and sync — no ID collisions across devices
- FTS4 for search: fast multi-word search across title + content — essential for any note app
- syncStatus field: PENDING/SYNCING/SYNCED/CONFLICT — drives sync worker and UI indicators
- Soft delete: deletedAt timestamp instead of hard delete — server can propagate deletes to other devices
- Per-note encryption: isEncrypted flag + AndroidKeyStore — only biometric-unlocked content is readable
"The soft delete pattern is essential for sync: if you hard-delete from Room before the server is informed, the next sync would restore it from the server. Soft delete (deletedAt timestamp) lets you sync the deletion event to the server first, then clean up locally. This is how Google Keep, Notion, and every sync'd note app works."
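The soft-delete flow can be sketched as DAO queries. This assumes a nullable `deletedAt` column added to the `NoteEntity` above, and that `syncStatus` is stored as its enum name via a TypeConverter — both are assumptions for illustration, not part of the schema shown earlier:

```kotlin
import androidx.room.Dao
import androidx.room.Query
import kotlinx.coroutines.flow.Flow

// Sketch: soft delete + deferred cleanup (names illustrative)
@Dao
interface NoteSyncDao {
    // "Delete" just stamps deletedAt and marks the row for sync —
    // the row survives until the server has acknowledged the deletion
    @Query("UPDATE notes SET deletedAt = :now, syncStatus = 'PENDING' WHERE id = :id")
    suspend fun softDelete(id: String, now: Long)

    // Every UI query filters out soft-deleted rows
    @Query("SELECT * FROM notes WHERE deletedAt IS NULL ORDER BY updatedAt DESC")
    fun observeActiveNotes(): Flow<List<NoteEntity>>

    // Sync worker: push these deletion events to the server first
    @Query("SELECT * FROM notes WHERE deletedAt IS NOT NULL AND syncStatus = 'PENDING'")
    suspend fun pendingDeletions(): List<NoteEntity>

    // Only after the server confirms do we physically remove the rows
    @Query("DELETE FROM notes WHERE deletedAt IS NOT NULL AND syncStatus = 'SYNCED'")
    suspend fun purgeSyncedDeletions()
}
```

The ordering is the point: stamp locally, sync the event, then purge — deleting the row before the server knows would let the next sync resurrect it.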
25 questions on Gradle, R8, ProGuard, APK vs AAB, build variants, flavors, APK size reduction, KAPT, KSP, and build performance for 2025-26 interviews.
Gradle is the build automation tool for Android. It takes your source code, resources, and dependencies, then compiles, optimises, and packages them into an APK or AAB. Understanding the build pipeline helps you diagnose slow builds and misconfigured outputs.
// Build pipeline (simplified):
// Source code (.kt/.java)
//   → Kotlin/Java compiler → .class files (bytecode)
//   → D8 (dex compiler)    → .dex files (Dalvik bytecode)
//   → R8 (if enabled)      → shrink + obfuscate + optimise .dex
//   → Packager             → APK or AAB
//   → zipalign + sign      → release-ready artifact

// Two build scripts in every Android project:

// settings.gradle.kts — project structure
pluginManagement {
    repositories { google(); mavenCentral() }
}
include(":app", ":core:network", ":feature:home")

// app/build.gradle.kts — module configuration
plugins {
    alias(libs.plugins.android.application)
    alias(libs.plugins.kotlin.android)
}
android {
    compileSdk = 35
    defaultConfig {
        applicationId = "com.example.app"
        minSdk = 24
        targetSdk = 35
        versionCode = 1
        versionName = "1.0"
    }
    buildTypes {
        release { isMinifyEnabled = true }
    }
}
dependencies {
    implementation(libs.androidx.core.ktx)
}

// Gradle wrapper (gradlew) — pins Gradle version for the project
// gradle/wrapper/gradle-wrapper.properties defines the exact Gradle version
// Always commit the wrapper — ensures every developer uses the same build
- Gradle orchestrates: compiling Kotlin, merging resources, running R8, and packaging
- D8: the dex compiler — converts Java bytecode to Android's Dalvik bytecode
- R8: runs after D8 — shrinks, obfuscates, and optimises (replaces old ProGuard)
- settings.gradle.kts: declares project structure and module graph
- Gradle wrapper: pins Gradle version — everyone on the team uses the same build tool
"The pipeline in one sentence: Kotlin source → bytecode → dex → (R8 shrinks/obfuscates) → packaged into APK/AAB → signed. Knowing where in this pipeline a problem occurs tells you which tool to investigate — compile error = Kotlin compiler, missing class in release = R8 shrinking too aggressively."
APK (Android Package) is the traditional installable file -- it contains code, resources, and native libraries for every device configuration. AAB (Android App Bundle) is a publishing format: you upload it to Play Store, and Play generates device-specific APKs from it. Users only download the code and resources their specific device needs -- typically 20-40% smaller than a universal APK.
// Build APK -- for direct distribution or testing
// ./gradlew assembleRelease

// Build AAB -- for Play Store (mandatory since August 2021)
// ./gradlew bundleRelease

// bundletool -- test AAB locally before uploading to Play
// bundletool build-apks --bundle=app.aab --output=app.apks
// bundletool install-apks --apks=app.apks

// ABI splits for direct APK distribution (achieves same size benefit as AAB)
android {
    splits {
        abi {
            isEnable = true
            reset()
            include("arm64-v8a", "armeabi-v7a")
            isUniversalApk = false
        }
    }
}
- APK: self-contained installable -- includes all ABIs, all densities, all languages -- users download everything even if unused
- AAB: publishing format, not directly installable -- Play generates per-device split APKs automatically
- 20-40% smaller: Play strips wrong-ABI native libs, wrong-density images, and unused language strings
- Mandatory for Play: AAB required for new apps since August 2021
- bundletool: Google's CLI to simulate AAB→APK generation locally -- verify the output before uploading to Play
"The key insight: AAB is not the file users install — it's the file Google Play uses to generate user-specific APKs. A user on a Pixel 8 (arm64, xxhdpi, English) gets an APK containing only arm64 libraries, xxhdpi images, and English strings. That's why downloads shrink significantly — most users were downloading 3x the resources they needed with APK."
R8 is the modern replacement for ProGuard — it does code shrinking, obfuscation, and optimisation in a single pass. It's significantly faster than ProGuard and produces smaller output. Since AGP 3.4, R8 is the default.
// Enable R8 (default in release builds)
android {
    buildTypes {
        release {
            isMinifyEnabled = true   // enable R8 shrinking + obfuscation
            isShrinkResources = true // also remove unused resources
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}

// What R8 does (in one pass):
// 1. SHRINKING (tree-shaking)
//    Removes unused classes, methods, and fields
//    A library with 10,000 methods you use 50 of → 50 methods in output
// 2. OBFUSCATION
//    Renames: com.example.UserRepository → a.b
//    Makes reverse engineering much harder
//    Produces mapping.txt for crash de-obfuscation
// 3. OPTIMISATION
//    Inlines short methods
//    Removes dead code branches
//    Rewrites bytecode for smaller dex

// R8 vs ProGuard:
//                     ProGuard    R8
// Integrated          No          Yes (built into AGP)
// Speed               Slow        2-3x faster
// Dex output size     Larger      ~8% smaller
// Full mode           No          Yes (more aggressive)

// mapping.txt — generated alongside release build
// Upload to Play Console → crash stacktraces auto-deobfuscated
// WITHOUT mapping.txt: "at a.b.c(Unknown Source:4)"
// WITH mapping.txt:    "at com.example.UserViewModel.loadUser(UserViewModel.kt:42)"
- R8 = shrinking + obfuscation + optimisation in one pass — ProGuard did them separately
- Shrinking removes unused code: libraries rarely used fully — R8 strips what you don't call
- Obfuscation: renames classes and methods to single letters — harder to reverse engineer
- mapping.txt: crucial for crash debugging — always save it alongside every release build
- isShrinkResources: separate flag to also remove unused drawables, layouts, strings
"The most important production practice: always upload mapping.txt to Play Console for every release. Without it, crash reports from Firebase Crashlytics and Play Vitals show obfuscated stack traces — 'a.b.c:4' instead of real method names. You can't debug crashes without the mapping file from that exact build."
R8 removes code it thinks is unused — but it can't see code accessed via reflection, serialisation, or native JNI. Keep rules tell R8 "don't touch this" for classes that must survive shrinking.
# proguard-rules.pro — your custom keep rules (comments use #)

# Keep a class and all its members
-keep class com.example.api.UserDto { *; }

# Keep all classes in a package
-keep class com.example.api.** { *; }

# Keep only class name (not members) — for reflection
-keepnames class com.example.MyClass

# Keep Serializable classes intact (Gson/Moshi use reflection)
-keepclassmembers class * implements java.io.Serializable {
    private static final java.io.ObjectStreamField[] serialPersistentFields;
    private void writeObject(java.io.ObjectOutputStream);
    private void readObject(java.io.ObjectInputStream);
    java.lang.Object readResolve();
}

# Common situations where R8 breaks things:

# 1. Gson / reflection-based serialisation
#    R8 removes fields it thinks unused — Gson reads them via reflection
#    Fix: add -keep for your model classes, OR switch to Kotlin Serialization
-keepclassmembers class com.example.models.** { *; }

# 2. Retrofit interface methods
#    R8 may remove methods it thinks uncalled — Retrofit uses reflection
#    Fix: library consumer rules (Retrofit ships its own .pro rules — usually auto-applied)

# 3. Custom View constructors (needed by XML inflation)
-keepclasseswithmembers class * extends android.view.View {
    public <init>(android.content.Context);
    public <init>(android.content.Context, android.util.AttributeSet);
}

# Debug R8 issues:
# -printusage usage.txt → shows what was removed
# -printseeds seeds.txt → shows what was kept
# -verbose              → detailed output during R8 run
- -keep: preserve class + all members — use for reflection-accessed classes
- -keepnames: preserve name only, R8 can still remove unused members
- -keepclassmembers: keep specific members of matched classes
- Library rules: most libraries ship consumer ProGuard rules — auto-applied, check aar/META-INF
- Debug with -printusage: generates a file listing everything R8 removed — find missing classes
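Why reflection defeats R8 can be shown in plain JVM Kotlin: the field name arrives as a runtime string, so static reachability analysis sees no usage. A minimal sketch — `UserDto` and `readFieldByName` are made-up names for illustration:

```kotlin
// Made-up model class: nothing in the code reads `name` directly,
// so a shrinker without a keep rule could strip or rename the field.
class UserDto {
    val id: Int = 7
    val name: String = "demo"
}

// Gson-style access: the field is located by a runtime string —
// invisible to R8's static analysis, hence the need for -keep rules.
fun readFieldByName(target: Any, fieldName: String): Any? {
    val field = target.javaClass.getDeclaredField(fieldName)
    field.isAccessible = true
    return field.get(target)
}

fun main() {
    println(readFieldByName(UserDto(), "name"))  // prints "demo"
}
```

In a release build with R8 enabled and no keep rule, the same `getDeclaredField("name")` call would throw `NoSuchFieldException` — exactly the class of crash the keep rules above prevent.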
"The fastest way to debug an R8 crash: run the release build on a device, get the crash stack trace, check if the class names are obfuscated. If yes, use the mapping.txt. If the class is missing entirely (ClassNotFoundException), R8 stripped it — add a -keep rule. Add -printusage to the build to see exactly what got removed."
Build types define how your app is compiled and packaged for different purposes. Debug is for development — fast builds, debugging enabled, test signing. Release is for distribution — optimised, obfuscated, production-signed.
android {
    buildTypes {
        // DEBUG — automatic, for development
        debug {
            isDebuggable = true             // allows debugger attachment
            isMinifyEnabled = false         // R8 off — fast builds
            applicationIdSuffix = ".debug"  // install alongside release
            versionNameSuffix = "-debug"
            // Uses the auto-generated debug.keystore for signing
        }

        // RELEASE — for the Play Store
        release {
            isDebuggable = false
            isMinifyEnabled = true      // R8 enabled
            isShrinkResources = true    // remove unused resources
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
            signingConfig = signingConfigs.getByName("release")
        }

        // STAGING — custom build type (mirrors release config but hits the staging server)
        create("staging") {
            initWith(getByName("release"))  // inherits release settings
            applicationIdSuffix = ".staging"
            buildConfigField("String", "API_URL", "\"https://staging.api.example.com\"")
            signingConfig = signingConfigs.getByName("debug")  // easier to install
        }
    }

    // BuildConfig fields — accessible from code
    defaultConfig {
        buildConfigField("String", "API_URL", "\"https://api.example.com\"")
    }
}

// Access in code:
val url = BuildConfig.API_URL
if (BuildConfig.DEBUG) { /* dev-only code */ }
- debug: debuggable, no R8, debug signing — fast iteration during development
- release: non-debuggable, R8 enabled, production signing — what goes to users
- applicationIdSuffix: lets debug and release be installed simultaneously on one device
- initWith(): inherit another build type's settings — avoids repeating release configuration
- BuildConfig fields: compile-time constants that differ per build type — API URLs, flags
"The staging build type is underused. Create one that inherits release config (R8 enabled, same optimisations) but uses staging server URLs and debug signing. This catches R8-related issues before they reach production — a bug that only appears in release is often an R8 stripping issue that staging would catch first."
Product flavors let you create multiple versions of your app from the same codebase — free vs premium, different regions, white-label variants. They combine with build types to create build variants (e.g. freeDebug, premiumRelease).
android {
    flavorDimensions += listOf("tier", "region")  // must declare dimensions

    productFlavors {
        // TIER dimension
        create("free") {
            dimension = "tier"
            applicationIdSuffix = ".free"
            versionNameSuffix = "-free"
            buildConfigField("Boolean", "IS_PREMIUM", "false")
            resValue("string", "app_name", "\"MyApp Free\"")
        }
        create("premium") {
            dimension = "tier"
            applicationIdSuffix = ".premium"
            buildConfigField("Boolean", "IS_PREMIUM", "true")
            resValue("string", "app_name", "\"MyApp\"")
        }

        // REGION dimension
        create("india") {
            dimension = "region"
            buildConfigField("String", "CURRENCY", "\"INR\"")
        }
        create("global") {
            dimension = "region"
            buildConfigField("String", "CURRENCY", "\"USD\"")
        }
    }
}

// This creates 8 build variants (2 tiers × 2 regions × 2 build types):
// freeIndiaDebug,     freeIndiaRelease
// freeGlobalDebug,    freeGlobalRelease
// premiumIndiaDebug,  premiumIndiaRelease
// premiumGlobalDebug, premiumGlobalRelease

// Flavor-specific source sets — different code per flavor
// src/free/java/    — free-only code
// src/premium/java/ — premium-only code
// src/main/java/    — shared code

// Check the flavor at runtime:
if (BuildConfig.IS_PREMIUM) { showPremiumFeature() }
- Flavor dimensions: required group for each flavor — can have multiple orthogonal dimensions
- Build variants: every combination of flavor + build type — flavors × build types = variants
- applicationIdSuffix: each flavor can have a different app ID — multiple variants installed side-by-side
- Source sets: flavor-specific code/resources in src/flavorName/ — different implementations per variant
- resValue: override string resources per flavor — different app names, API endpoints
"The killer use case for flavors: white-label apps. Same codebase, different logo, different colors, different API endpoints — each configured in a different flavor's source set and buildConfigField. One build system, 10 branded apps. Without flavors you'd maintain 10 separate codebases."
APK size reduction is a systematic process — profile first with Android Size Analyzer, then attack the biggest contributors: native libraries (ABI splits), images (WebP), unused code (R8), and unused resources (isShrinkResources).
// Step 1: Profile — Android Studio → Build → Analyze APK
// Shows the breakdown: res/, classes.dex, lib/, assets/
// Identify the biggest contributors before optimizing

// Step 2: ABI splits — biggest win for native libraries
android {
    splits {
        abi {
            isEnable = true
            reset()
            include("arm64-v8a", "armeabi-v7a")  // 98%+ of devices
            isUniversalApk = false               // no universal APK
        }
    }
}
// arm64-v8a APK: 15MB vs universal (arm64+arm+x86): 45MB

// Step 3: Switch to AAB — Play generates per-device APKs
// ./gradlew bundleRelease (instead of assembleRelease)
// Automatic ABI + density + language splits — same effect as step 2

// Step 4: Enable R8 + resource shrinking
android {
    buildTypes {
        release {
            isMinifyEnabled = true
            isShrinkResources = true  // removes unused drawables, layouts, strings
        }
    }
}

// Step 5: Convert PNG/JPG to WebP
// Android Studio: right-click a drawable → Convert to WebP
// WebP lossy: 25-35% smaller than JPEG at near-identical quality
// WebP lossless: ~26% smaller than PNG

// Step 6: Remove unused language resources
android {
    defaultConfig {
        resourceConfigurations += setOf("en", "hi")  // only keep these languages
    }
}
// OkHttp ships 20+ language string files — this strips them down to your 2

// Step 7: Vector drawables instead of PNGs for multiple densities
android {
    defaultConfig {
        vectorDrawables.useSupportLibrary = true
    }
}
// One vector file replaces mdpi/hdpi/xhdpi/xxhdpi/xxxhdpi PNGs
- Analyze APK first: find the real culprits — native libs, images, or unused library code
- AAB or ABI splits: single biggest size win — arm64 APK is 3x smaller than universal
- isShrinkResources: removes unused drawables/layouts — safe and automatic with R8
- WebP: 25-35% smaller than JPEG, lossless WebP beats PNG — lossy supported since API 14, lossless since API 18
- resourceConfigurations: strip unused library language files — OkHttp alone has 20+ languages
"Priority order for APK reduction: (1) Switch to AAB — free 20-40% from Play optimisations. (2) Enable R8 + isShrinkResources — removes unused code and assets. (3) ABI filter to arm64+arm only. (4) WebP for large images. (5) resourceConfigurations for languages. Steps 1-3 alone typically get you from 80MB to 35MB without touching any assets."
KAPT (Kotlin Annotation Processing Tool) compiles Kotlin to Java stubs before running annotation processors — a slow extra step. KSP (Kotlin Symbol Processing) processes Kotlin source directly — up to 2x faster and supports incremental processing.
// KAPT — the old way (slow)
// Kotlin source → Kotlin compiler → Java stubs → KAPT → annotation processor
// Extra compilation step: generating Java stubs is slow (adds 30-60s to clean builds)
// No incremental processing for many processors

// build.gradle.kts with KAPT
plugins {
    alias(libs.plugins.kotlin.kapt)
}
dependencies {
    kapt(libs.hilt.compiler)  // ❌ KAPT — deprecated for Hilt
    kapt(libs.room.compiler)  // ❌ KAPT — should migrate to KSP
}

// KSP — the new way (fast)
// Kotlin source → KSP → annotation processor (reads the Kotlin AST directly)
// No Java stub generation step
// Incremental: only reprocesses changed files
// KMP compatible: works in Kotlin Multiplatform
plugins {
    alias(libs.plugins.ksp)
}
dependencies {
    ksp(libs.hilt.compiler)  // ✅ KSP — 2x faster
    ksp(libs.room.compiler)  // ✅ KSP — recommended for Room 2.6+
}

// Libraries supporting KSP (2025):
// ✅ Hilt — ksp("com.google.dagger:hilt-compiler")
// ✅ Room — ksp("androidx.room:room-compiler")
// ✅ Moshi — ksp("com.squareup.moshi:moshi-kotlin-codegen")
// ✅ Kotlin Serialization — uses a compiler plugin, no KAPT/KSP needed
// ⚠️ Dagger 2 (standalone) — KSP support available but check the version

// You can't mix KAPT and KSP for the same library
// Pick one processor per library — use KSP when supported

// Benchmark (typical project):
// KAPT clean build: 4 min
// KSP clean build: 2.5 min (37% faster)
// KSP incremental: 15 sec (only changed files are reprocessed)
- KAPT: generates Java stubs then runs Java annotation processors — extra slow compilation step
- KSP: reads Kotlin source directly — no stubs, 2x faster on clean builds
- Incremental: KSP only reprocesses files that changed — huge win on incremental builds
- KMP support: KSP works in Kotlin Multiplatform, KAPT doesn't
- Migrate now: Room 2.6+, Hilt — both fully support KSP with no functionality loss
"Migration from KAPT to KSP is one of the highest-ROI build improvements you can make — a 30-60 second saving on every clean build, and dramatically faster incremental builds. The migration is usually just changing kapt() to ksp() in dependencies and updating the plugin. Do it for Room and Hilt first — those two account for most annotation processing time."
Version Catalog centralises all dependency versions in a single TOML file — no more hunting across 10 build.gradle files when upgrading a library. It also enables type-safe accessors (libs.retrofit instead of string literals) and IDE autocomplete.
# gradle/libs.versions.toml — single source of truth (TOML uses '#' comments)
[versions]
kotlin = "2.1.0"
compose-bom = "2024.12.01"
hilt = "2.51.1"
room = "2.6.1"
retrofit = "2.11.0"

[libraries]
hilt-android = { module = "com.google.dagger:hilt-android", version.ref = "hilt" }
hilt-compiler = { module = "com.google.dagger:hilt-compiler", version.ref = "hilt" }
room-runtime = { module = "androidx.room:room-runtime", version.ref = "room" }
room-ktx = { module = "androidx.room:room-ktx", version.ref = "room" }
room-compiler = { module = "androidx.room:room-compiler", version.ref = "room" }
retrofit = { module = "com.squareup.retrofit2:retrofit", version.ref = "retrofit" }

[plugins]
android-application = { id = "com.android.application", version = "8.7.3" }
kotlin-android = { id = "org.jetbrains.kotlin.android", version.ref = "kotlin" }
hilt = { id = "com.google.dagger.hilt.android", version.ref = "hilt" }
ksp = { id = "com.google.devtools.ksp", version = "2.1.0-1.0.29" }

// In build.gradle.kts — type-safe, with IDE autocomplete
dependencies {
    implementation(libs.hilt.android)  // ✅ type-safe accessor
    ksp(libs.hilt.compiler)
    implementation(libs.retrofit)
}

// Before (string literals — typo-prone, no autocomplete):
// implementation("com.google.dagger:hilt-android:2.51.1")

// Upgrade a library: change ONE version in [versions], and all modules pick it up
// hilt = "2.51.1" → "2.52" → all 3 Hilt usages updated atomically
- Single source of truth: all versions in one file — no hunting across module build scripts
- Type-safe accessors: libs.hilt.android instead of string literals — IDE autocomplete, typos caught at build time
- Atomic upgrades: change version once → all modules using that library update together
- Bundles: group related dependencies — libs.bundles.room includes room-runtime + room-ktx
- Dependency updates: Renovate/Dependabot can auto-update the TOML file via PRs
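The bundles mentioned above live in the same TOML file. A sketch of what a Room bundle could look like — the bundle name and grouping are illustrative, not from an actual project:

```toml
# gradle/libs.versions.toml — hypothetical [bundles] section
[bundles]
# Each entry refers to an alias defined in [libraries]
room = ["room-runtime", "room-ktx"]
```

In a module, `implementation(libs.bundles.room)` then pulls in both artifacts at once. The compiler (`room-compiler`) stays out of the bundle because it is wired through `ksp(...)`, not `implementation(...)`.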
"Before Version Catalog, upgrading Retrofit meant finding every 'com.squareup.retrofit2:retrofit:2.x.x' string across 8 build files, updating each, hoping you got them all. With Version Catalog: change one line in libs.versions.toml, every module picks it up. The type-safe accessor also means you can't accidentally reference a non-existent library — it's a compile error."
Slow builds kill developer productivity. The fix is a combination of Gradle caching, configuration cache, parallel execution, and KSP — each targeting a different bottleneck in the build pipeline.
# gradle.properties — the most impactful config file
# (properties files use '#' comments; inline '//' comments would corrupt the values)
org.gradle.jvmargs=-Xmx4g -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8

# Reuse task outputs from previous builds
org.gradle.caching=true
# Build independent modules in parallel
org.gradle.parallel=true
# Only configure the modules needed for the requested task
org.gradle.configureondemand=true
# Cache the build configuration phase (Gradle 8+)
org.gradle.configuration-cache=true
# Only recompile changed Kotlin files
kotlin.incremental=true
# Only reprocess changed KSP inputs
ksp.incremental=true
# Faster, smaller R8 output in release builds
android.enableR8.fullMode=true

// Profile the build with --scan
// ./gradlew assembleDebug --scan
// Generates a Gradle Enterprise report — shows where time is actually spent

// Key optimizations:
// 1. KAPT → KSP (saves 30-60s per clean build)
// 2. Gradle build cache (saves 40-80% on CI — reuses unchanged module outputs)
// 3. Configuration cache (saves 30s+ — skips re-evaluating build scripts)
// 4. Parallel builds — multi-module projects build independent modules simultaneously
// 5. Modularisation — smaller modules = faster incremental builds
//    A change in :feature:profile only rebuilds :feature:profile, not :feature:home
// 6. Avoid transitive dependency leakage
//    implementation() — dependency not exposed to consumers (faster compilation)
//    api() — exposed to consumers (forces recompilation of dependents)
//    Rule: use implementation() everywhere — only api() when truly needed
// 7. Keep debug builds minimal — R8 off, resource shrinking off, no ABI splits
//    (note: splits are configured module-wide, not per build type — only enable
//    them when producing release artifacts)

// Typical results after all optimizations:
// Clean build: 20 min → 8 min
// Incremental build: 3 min → 30 sec
- Gradle build cache: reuses task outputs — unchanged modules never recompiled
- Configuration cache: caches the build graph — skips script evaluation on repeated builds
- Parallel execution: independent modules built concurrently — critical for multi-module apps
- KAPT→KSP: annotation processing 2x faster — single largest win for annotation-heavy projects
- implementation() over api(): limits recompilation cascade — change in a module doesn't force all dependents to recompile
"Build time ROI order: (1) Gradle build cache — free, enable it, huge CI win. (2) KAPT→KSP — one-time migration, saves time on every build forever. (3) Configuration cache — needs compatibility fixes but saves 30s+ per build. (4) implementation() over api() — simple discipline, prevents recompilation cascades. Profile first with --scan to know where your time actually goes."
These configurations control how dependencies are exposed to other modules and when they're included in the classpath. Choosing wrong causes compilation errors, build slowdowns, or bloated APKs.
// implementation — private dependency (the default choice)
implementation(libs.retrofit)
// ✅ Available at compile AND runtime in THIS module
// ❌ NOT visible to modules that depend on THIS module
// ✅ Faster builds — consumer modules don't recompile when this changes

// api — public dependency (use sparingly)
api(libs.retrofit)
// ✅ Available at compile AND runtime, and also exposed to consumers
// ❌ Slower builds — changing this forces all consuming modules to recompile
// Use for: types in your public API surface that consumers need
// Example: :core:network exposes Retrofit types that :feature:home uses in its API

// compileOnly — compile time only, not bundled in the APK
compileOnly(libs.javax.annotation)
// ✅ For annotations/stubs needed only at compile time (e.g. Lombok, JSR-305)
// ❌ Not available at runtime — the app crashes if you try to use it
// Use for: compile-time-only annotations and APIs provided by the runtime environment
//          (annotation processors themselves are wired via ksp/kapt, not compileOnly)

// runtimeOnly — runtime only, not needed at compile time
runtimeOnly(libs.slf4j.simple)
// ❌ Not on the compile classpath — you can't import or reference it directly
// ✅ Available at runtime (service discovery, SPI, logging backends)
// Use for: logging implementations, JDBC drivers, plugin implementations

// Real example — multi-module
// :core:network module
dependencies {
    implementation(libs.okhttp)  // internal impl — feature modules don't see OkHttp
    api(libs.retrofit)           // exposed — feature modules use Retrofit API types
}
// :feature:home can use Retrofit (transitive via api)
// :feature:home cannot use OkHttp directly (hidden by implementation)
- implementation: the default — keeps dependency private, faster incremental builds
- api: exposes transitive dependency — only when consumers genuinely need the types
- compileOnly: annotations and stubs only — reduces APK size, crashes if used at runtime
- runtimeOnly: service implementations — logging backends, JDBC drivers, plugins
- Build speed: prefer implementation everywhere — api forces consumer recompilation on any change
"The rule: start with implementation for everything. Upgrade to api only when you get a compilation error in a consumer module that says it can't find a type from your module's dependency. api() should be rare — it means 'I'm intentionally making this part of my public API'. Overusing api() is one of the top causes of slow incremental builds."
Convention Plugins are reusable Gradle plugins written in your project that standardise build configuration across modules. Instead of copy-pasting the same android {} block into 20 modules, you apply one plugin that contains all the shared config.
// Problem: 20 feature modules each repeat the same 50-line android {} block
// Change compileSdk → edit 20 files. Add a lint rule → edit 20 files.

// Solution: build-logic/convention/src/main/kotlin/AndroidFeatureConventionPlugin.kt
class AndroidFeatureConventionPlugin : Plugin<Project> {
    override fun apply(target: Project) {
        with(target) {
            pluginManager.apply("com.android.library")
            pluginManager.apply("org.jetbrains.kotlin.android")
            pluginManager.apply("com.google.devtools.ksp")

            extensions.configure<LibraryExtension> {
                compileSdk = 35
                defaultConfig {
                    minSdk = 24
                    testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
                }
                compileOptions {
                    sourceCompatibility = JavaVersion.VERSION_17
                    targetCompatibility = JavaVersion.VERSION_17
                }
            }

            // Resolve the version catalog inside the plugin
            val libs = extensions.getByType<VersionCatalogsExtension>().named("libs")
            dependencies {
                add("implementation", libs.findLibrary("hilt.android").get())
                add("ksp", libs.findLibrary("hilt.compiler").get())
            }
        }
    }
}

// build-logic/convention/build.gradle.kts
gradlePlugin {
    plugins {
        register("androidFeature") {
            id = "myapp.android.feature"
            implementationClass = "AndroidFeatureConventionPlugin"
        }
    }
}

// Each feature module's build.gradle.kts — just a few lines
plugins {
    alias(libs.plugins.myapp.android.feature)
}
dependencies {
    implementation(project(":core:data"))  // only module-specific deps here
}
// compileSdk, minSdk, Hilt, KSP — all handled by the plugin
- Convention Plugin: a Gradle plugin in your project — encapsulates shared build config
- Single change: update compileSdk in the plugin → all 20 modules updated instantly
- Enforces standards: every module gets the same lint rules, test runner, Java version
- Follows Now in Android: Google's reference architecture uses this pattern exactly
- build-logic module: lives in build-logic/ — a composite build included in settings.gradle.kts
"Convention plugins are the answer to 'how do you maintain 20 feature modules without duplication?' Each module's build.gradle.kts is 5-10 lines — just plugins{} and module-specific dependencies. All shared config (Android SDK, Kotlin, Hilt, testing) lives in the convention plugin. This is exactly how Google's Now in Android reference app is structured."
These three SDK values control what APIs you can use, what Android versions can install your app, and how the OS handles your app's behaviour. Getting them wrong causes either app crashes or Play Store policy violations.
android {
    compileSdk = 35  // SDK used to COMPILE your code
    defaultConfig {
        minSdk = 24     // MINIMUM Android version that can install your app
        targetSdk = 35  // Android version you've TESTED against (affects OS behaviour)
    }
}

// compileSdk — "which APIs can I write code against?"
// compileSdk = 35 → can use APIs introduced up to Android 15
// compileSdk = 30 → using an API introduced in API 33 = compile error
// Rule: always set to the latest SDK — it doesn't affect which devices can run your app

// minSdk — "who can install my app?"
// minSdk = 24 → Android 7.0+ can install (covers 99%+ of active devices in 2025)
// minSdk = 21 → Android 5.0+ (adds ~0.1% more devices, significant compat work)
// Calling an API 26 method on a device running API 24 → crash!
// Fix: @RequiresApi(26) + a runtime check:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) { /* API 26+ code */ }

// targetSdk — "which behaviour changes do I accept?"
// Android applies behaviour changes per targetSdk:
// targetSdk = 33 → subject to Android 13 behaviour changes (storage, permissions)
// targetSdk = 34 → subject to Android 14 changes (foreground service types required)
// targetSdk too old → Play Store rejects the app (policy deadline)
// Rule: keep targetSdk at the latest release to stay compliant with Play policy

// Lint: using a new API without a version check triggers the NewApi warning
// @SuppressLint("NewApi") — suppress when you've checked manually
// @RequiresApi(Build.VERSION_CODES.O) — annotate your own methods that need newer APIs
- compileSdk: what APIs you can write with — always latest, doesn't affect runtime
- minSdk: who can install your app — balance coverage vs compatibility effort
- targetSdk: tells the OS which behaviour changes you've adapted to — must keep current
- Play Store policy: Google requires targetSdk within 1 year of latest release — keep it updated
- Version check: always guard new APIs with Build.VERSION.SDK_INT >= check at runtime
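The runtime-guard pattern reduces to a plain conditional. In this JVM sketch, the `sdkInt` parameter stands in for `Build.VERSION.SDK_INT` (Android-only), and the function name is illustrative:

```kotlin
// Stand-in for Build.VERSION.SDK_INT — on a device you read the real value.
// API 26 (Android O) introduced NotificationChannel; older devices crash if
// you call channel APIs, hence the guarded fallback branch.
fun describeNotificationSupport(sdkInt: Int): String =
    if (sdkInt >= 26) {
        "channels supported"    // API 26+ path
    } else {
        "legacy notifications"  // pre-26 fallback
    }
```

With minSdk = 24 as in the snippet above, `describeNotificationSupport(24)` takes the fallback branch — the same device that would crash without the guard.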
"The conceptual model: compileSdk is your toolkit at compile time. minSdk is your audience at runtime. targetSdk is your contract with the OS about which behaviour changes you've adopted. Developers often confuse compileSdk and targetSdk — they can differ. You might compile with SDK 35 but target SDK 34 while testing the SDK 35 behaviour changes."
Debug-works-release-crashes is almost always an R8 issue — something got stripped or renamed. The diagnosis is systematic: check the stack trace, use the mapping file, add keep rules, and narrow down the culprit.
// Step 1: Reproduce with a minified build locally
// Create a build type that has R8 enabled but uses debug signing (easier to install)
buildTypes {
    create("minifiedDebug") {
        initWith(getByName("debug"))
        isMinifyEnabled = true  // enable R8
        proguardFiles(
            getDefaultProguardFile("proguard-android-optimize.txt"),
            "proguard-rules.pro"
        )
        signingConfig = signingConfigs.getByName("debug")  // debug signing — easy install
    }
}

// Step 2: Get and deobfuscate the stack trace
// mapping.txt from the release build → retrace tool
// ./gradlew assembleRelease
// mapping.txt: app/build/outputs/mapping/release/mapping.txt
// java -jar retrace.jar mapping.txt stacktrace.txt → readable stack trace

// Step 3: Common R8 crash types and fixes

// ClassNotFoundException — the class was stripped
// Crash: java.lang.ClassNotFoundException: com.example.MyCallback
-keep class com.example.MyCallback { *; }

// NoSuchMethodException — a method was removed
// Crash: Method not found: onSuccess
-keepclassmembers class com.example.** {
    public void onSuccess(...);
    public void onFailure(...);
}

// NullPointerException from Gson — a field was removed
// Gson reads fields via reflection — R8 removes "unused" fields
-keepclassmembers class com.example.models.** { *; }

// Step 4: Use -printusage to see what was removed
// proguard-rules.pro: -printusage build/outputs/usage.txt
// After a release build, check whether your class appears in usage.txt (= it was removed)

// Step 5: Verify library consumer rules are applied
// Libraries should ship their own .pro rules in META-INF/proguard/
// If a library has outdated rules, file a bug and add manual rules in the meantime
- minifiedDebug build type: reproduce R8 issues locally without dealing with release signing
- retrace: Google's tool to de-obfuscate crash stack traces using mapping.txt
- -printusage: generates a file listing everything R8 removed — find your missing class
- Gson + R8: the most common culprit — Gson reads fields via reflection, R8 removes "unused" fields
- Consumer rules: check META-INF/proguard/ in the library AAR — library should ship its own rules
"The diagnostic flowchart: (1) Get the stack trace → deobfuscate with mapping.txt → (2) See ClassNotFoundException? Add -keep. See NPE? Probably Gson stripping fields → -keepclassmembers. See NoSuchMethodException? Add a keep rule for that method. (3) If you can't reproduce, add a minifiedDebug build type — never debug R8 issues by releasing to production."
versionCode is an integer the Play Store uses to determine update ordering — must increase with every release. versionName is the human-readable string shown to users. Automating both prevents manual errors and ties releases to your CI pipeline.
android {
    defaultConfig {
        versionCode = 42       // integer, must increase each release
        versionName = "2.1.0"  // string, shown to users
    }
}

// versionCode rules:
// • Must be a positive integer
// • Must be higher than the previous release (Play rejects downgrades)
// • Max value: 2,100,000,000
// • Not shown to users (only versionName is visible)

// Automation: read versionCode from the CI environment
// CI systems expose a build number (GitHub Actions: GITHUB_RUN_NUMBER)
val ciVersionCode = System.getenv("GITHUB_RUN_NUMBER")?.toIntOrNull() ?: 1

android {
    defaultConfig {
        versionCode = ciVersionCode
        versionName = "2.1.$ciVersionCode"
    }
}
// Each CI run → a unique, incrementing versionCode — no manual tracking

// Git-based versioning
// (execute()/text() are helper extensions you define to run a shell command —
//  they are not built into the Kotlin DSL)
val gitVersionCode = "git rev-list --count HEAD".execute().text().trim().toInt()
val gitVersionName = "git describe --tags --always".execute().text().trim()

android {
    defaultConfig {
        versionCode = gitVersionCode  // total commit count — always increases
        versionName = gitVersionName  // e.g. "v2.1.0-3-gabcdef" from a git tag
    }
}
// Tie the version to a git tag → release v2.1.0 → tag "v2.1.0" → versionName = "v2.1.0"

// ABI-based versionCode (for APK splits)
// arm64: 2001000, arm: 1001000 — ensures correct update ordering per ABI
val abiCodes = mapOf("armeabi-v7a" to 1, "arm64-v8a" to 2)
- versionCode: integer only — Play Store enforces it must increase for each release
- versionName: any string — semantic versioning (2.1.0) is conventional
- CI automation: GITHUB_RUN_NUMBER is always increasing — perfect versionCode source
- Git tag versioning: git describe produces human-readable version from tags + commits
- ABI versionCodes: for APK splits, offset by ABI priority to ensure correct update path
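The ABI offset scheme from the end of the snippet can be made concrete. A sketch, assuming the common convention of `abiPriority * 1_000_000 + baseVersionCode` (the multiplier is a project choice, not a Play requirement):

```kotlin
// ABI priorities: higher priority = preferred update path for devices
// that support multiple ABIs (an arm64 device should get the arm64 APK).
val abiCodes = mapOf("armeabi-v7a" to 1, "arm64-v8a" to 2)

// Offset the base versionCode per ABI so Play orders updates correctly:
// the arm64 APK of a release always carries a higher code than the arm one.
fun versionCodeFor(abi: String, baseVersionCode: Int): Int =
    abiCodes.getValue(abi) * 1_000_000 + baseVersionCode
```

With `baseVersionCode = 1000` this reproduces the comment's values: arm64 → 2001000, armeabi-v7a → 1001000.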
"Manual versionCode management is a time bomb — eventually someone uploads a lower versionCode to Play and it's rejected. Automate it: use CI build number or git commit count. Both are monotonically increasing and never require human attention. Tie versionName to git tags so your crash reports show 'v2.1.0' not '42'."
Signing credentials (keystore password, key password) must never be committed to source control. The correct approach uses environment variables on CI and a local properties file for developers — both kept out of git.
# .gitignore — NEVER commit these
keystore/release.jks
keystore.properties
local.properties

# keystore.properties — local developer file (git-ignored)
KEYSTORE_PATH=../keystore/release.jks
KEYSTORE_PASSWORD=myKeystorePassword
KEY_ALIAS=myKeyAlias
KEY_PASSWORD=myKeyPassword

// app/build.gradle.kts — read from the properties file OR environment variables
import java.util.Properties

val keystoreFile = rootProject.file("keystore.properties")
val keystoreProps = if (keystoreFile.exists()) {
    Properties().apply { load(keystoreFile.inputStream()) }
} else null

android {
    signingConfigs {
        create("release") {
            // Try local properties first, fall back to environment variables (CI)
            storeFile = file(
                keystoreProps?.getProperty("KEYSTORE_PATH")
                    ?: System.getenv("KEYSTORE_PATH")
                    ?: return@create  // no credentials → skip this signing config
            )
            storePassword = keystoreProps?.getProperty("KEYSTORE_PASSWORD")
                ?: System.getenv("KEYSTORE_PASSWORD")
            keyAlias = keystoreProps?.getProperty("KEY_ALIAS")
                ?: System.getenv("KEY_ALIAS")
            keyPassword = keystoreProps?.getProperty("KEY_PASSWORD")
                ?: System.getenv("KEY_PASSWORD")
        }
    }
    buildTypes {
        release {
            signingConfig = signingConfigs.getByName("release")
        }
    }
}

// CI (GitHub Actions):
// Store the keystore as a base64 secret and the passwords as GitHub Secrets.
// Decode the keystore to a file, set the env vars, then run ./gradlew bundleRelease
// - name: Decode keystore
//   run: echo "${{ secrets.KEYSTORE_BASE64 }}" | base64 -d > keystore/release.jks
- Never in source control: keystore file and passwords in .gitignore — non-negotiable
- Dual source: local keystore.properties for developers, env vars for CI — same build script
- Base64 keystore in CI: encode keystore to base64, store as GitHub Secret, decode on CI runner
- return@create: skip signing config if credentials aren't available — prevents debug build failures
- Separate keystores: use different keystores for debug and release — debug keystore auto-generated
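The base64 step in CI is plain encoding, nothing keystore-specific — it exists only because secrets stores hold text, not binary files. A minimal sketch of the round trip (the `KEYSTORE_BASE64` secret name comes from the example above; function names are illustrative):

```kotlin
import java.util.Base64

// One-time setup: encode the binary keystore into a text-safe string
// that can be pasted into the CI secret store (GitHub Secrets etc.).
fun encodeKeystore(bytes: ByteArray): String =
    Base64.getEncoder().encodeToString(bytes)

// On the CI runner: decode the secret back into the exact original bytes,
// then write them to keystore/release.jks before the Gradle build.
fun decodeKeystore(secret: String): ByteArray =
    Base64.getDecoder().decode(secret)
```

The `base64 -d` line in the workflow is the shell equivalent of `decodeKeystore` — the decoded bytes must be byte-identical to the original .jks or signing fails.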
"If signing credentials are ever committed to git, rotate them immediately — even in a private repo. The pattern: developers have a local keystore.properties (git-ignored), CI has environment variables from GitHub Secrets. The build script reads from either source. The keystore file itself is stored encrypted in a password manager and shared via secure channels, never git."
BuildConfig is a generated Java class that Gradle creates at build time. It contains compile-time constants that differ per build type or flavor — like API URLs, feature flags, and debug modes. It's how you inject configuration without hardcoding it in source code.
// Enable BuildConfig generation (required in AGP 8+)
android {
    buildFeatures {
        buildConfig = true  // opt-in since AGP 8.0
    }
}

// Built-in fields (always present):
BuildConfig.DEBUG           // Boolean — true in debug, false in release
BuildConfig.APPLICATION_ID  // "com.example.app"
BuildConfig.BUILD_TYPE      // "debug" / "release" / "staging"
BuildConfig.FLAVOR          // "free" / "premium"
BuildConfig.VERSION_CODE    // Int
BuildConfig.VERSION_NAME    // "1.0.0"

// Custom fields — defined in build.gradle.kts
android {
    defaultConfig {
        buildConfigField("String", "API_URL", "\"https://api.example.com\"")
        buildConfigField("Boolean", "ANALYTICS", "true")
        buildConfigField("int", "MAX_RETRIES", "3")
    }
    buildTypes {
        debug {
            buildConfigField("String", "API_URL", "\"https://staging.api.example.com\"")
            buildConfigField("Boolean", "ANALYTICS", "false")
        }
    }
}

// Access in Kotlin:
val apiUrl = BuildConfig.API_URL
if (BuildConfig.DEBUG) { Timber.plant(Timber.DebugTree()) }
if (!BuildConfig.ANALYTICS) { analytics.disable() }

// ⚠️ Don't store secrets in BuildConfig!
// BuildConfig values are embedded in the APK and readable via reverse engineering
// ❌ buildConfigField("String", "API_KEY", "\"secret123\"") — visible in a decompiled APK
// ✅ Keep sensitive values on the backend or in the Android Keystore
- Generated at build time: BuildConfig class is created by Gradle — not in your source code
- buildConfig=true: required since AGP 8.0 — opt-in to keep build fast if not using it
- Build-type-specific: debug overrides defaultConfig values — different API URLs per environment
- Don't store secrets: BuildConfig values are in the APK in plain text — visible in decompiled apps
- Common uses: API URLs, feature flags, analytics toggles, debug mode checks
"BuildConfig.DEBUG is the most useful field — it's true exactly in debug builds and false in release, automatically. Use it to plant Timber.DebugTree() and enable logging/StrictMode. The security caveat: BuildConfig values ARE visible in a decompiled APK — use it for URLs and config, not for secrets. Real secrets belong in the backend or encrypted local storage."
Dynamic Feature Modules let you deliver parts of your app on demand — downloaded only when the user needs that feature. This reduces install size and startup time, but adds delivery complexity. Best suited to large features that only a fraction of users ever open.
// Dynamic Feature Module — downloaded after install
// Example: AR product viewer — needed by 5% of users

// ar/build.gradle.kts
plugins {
    alias(libs.plugins.android.dynamic.feature)
}
android {
    // No applicationId — it inherits from :app
}
dependencies {
    implementation(project(":app"))  // depends on base module
    implementation(libs.arcore)
}

// app/build.gradle.kts — declare the dynamic feature
android {
    dynamicFeatures += setOf(":ar")
}

// Download and install the feature at runtime
val splitInstallManager = SplitInstallManagerFactory.create(context)
val request = SplitInstallRequest.newBuilder()
    .addModule("ar")  // module name matches build.gradle.kts directory name
    .build()

splitInstallManager.startInstall(request)
    .addOnSuccessListener {
        // Module installed — now safe to use AR classes
        val intent = Intent().setClassName(packageName, "com.example.ar.ArActivity")
        startActivity(intent)
    }
    .addOnFailureListener { e -> showError(e) }

// ✅ Good candidates for dynamic delivery:
// - AR / VR features (large native libs)
// - High-res asset packs (game levels, textures)
// - Rarely-used features (accessibility tools, admin panel)
// - Region-specific features (payments only in certain countries)

// ❌ Bad candidates:
// - Core navigation (user immediately needs it)
// - Authentication (always needed)
// - Small features (overhead not worth it)
- On-demand delivery: users download the feature only when they tap into it
- Reduces install size: users who never use AR don't download AR libraries
- SplitInstallManager: Google Play's API to request, monitor, and install dynamic features
- Classes unavailable until installed: must check module is installed before using its classes
- Good candidates: AR, large asset packs, rarely-used features, region-specific functionality
"Dynamic Feature Modules solve the 'app is huge because 5% of users use AR' problem. The 95% who never use AR never download the AR module. The trade-off: you must handle the download UX (show a spinner, handle failures, handle slow connections) and the feature's classes are unavailable until installed — you can't reference them directly, only via reflection or Intent."
Android Lint is a static analysis tool that checks your code and resources for potential bugs, performance issues, security vulnerabilities, and style violations — without running the app. It catches issues at build time that would otherwise reach users.
// Enable strict Lint in build.gradle.kts
android {
    lint {
        abortOnError = true      // fail build on Lint errors
        warningsAsErrors = true  // treat warnings as errors (optional, strict)
        htmlReport = true        // generate HTML report
        xmlReport = true         // XML for CI parsing
        baseline = file("lint-baseline.xml")         // ignore pre-existing issues
        disable += setOf("ObsoleteLintCustomCheck")  // suppress specific rules
        enable += setOf("StopShip")                  // enable specific rules
    }
}

// Run Lint:
// ./gradlew lint       — all variants
// ./gradlew lintDebug  — debug variant only (faster)

// Suppress in code:
@SuppressLint("SetTextI18n")
fun showCount(count: Int) {
    textView.text = "Count: $count"
}

// Custom Lint rule (in a separate :lint module)
class NoHardcodedColorDetector : Detector(), XmlScanner {
    override fun getApplicableAttributes() = listOf("color", "background")

    override fun visitAttribute(context: XmlContext, attribute: Attr) {
        if (attribute.value.startsWith("#")) {
            context.report(
                issue = ISSUE,
                location = context.getValueLocation(attribute),
                message = "Avoid hardcoded colors — use theme color attributes"
            )
        }
    }

    companion object {
        val ISSUE = Issue.create(
            "HardcodedColor",
            "Hardcoded color",
            "Use @color or ?attr/colorPrimary instead of #RRGGBB",
            Category.CORRECTNESS, 6, Severity.WARNING,
            implementation = ...
        )
    }
}
- abortOnError=true: Lint failures fail the build — catches issues before they ship
- baseline: suppress known pre-existing issues — allows adopting Lint incrementally
- Custom rules: write detectors for project-specific standards — no hardcoded colors, required annotations
- CI integration: run lintDebug on every PR — catches issues without full release build
- @SuppressLint: per-call suppression — use sparingly, document why
"The lint-baseline.xml strategy: enable Lint with abortOnError=true, generate a baseline that suppresses all existing issues (./gradlew lint -Dlint.baselines.continue=true), commit the baseline. From that point, any NEW Lint issue fails the build. You've adopted strict Lint without having to fix all existing issues immediately — fix them incrementally."
A well-structured multi-module project uses a composite build for convention plugins, a Version Catalog for all dependencies, and a CI pipeline that runs tests on every PR and deploys on tag push. Getting this right from day one saves weeks of Gradle debt later.
// settings.gradle.kts -- declare modules and include build-logic
pluginManagement {
    includeBuild("build-logic")
}
include(":app", ":core:network", ":core:database", ":feature:home")

// feature/home/build.gradle.kts -- 5 lines using convention plugin
plugins {
    alias(libs.plugins.myapp.android.feature)
}
dependencies {
    implementation(project(":core:network"))
}

// GitHub Actions -- CI on every PR
// on: [push, pull_request]
// - run: ./gradlew test lintDebug   ← fast, every PR
// - run: ./gradlew bundleRelease    ← only on tag push
- build-logic as composite build: convention plugins available to all modules via pluginManagement { includeBuild }
- libs.versions.toml: all dependency versions in one file -- type-safe accessors, IDE autocomplete, atomic upgrades
- Convention plugins: AndroidFeaturePlugin, AndroidLibraryPlugin -- shared compileSdk, minSdk, Hilt, KSP in one place
- CI split: test+lint on every PR (fast), bundleRelease+upload on git tag push (intentional release trigger)
- gradle/actions/setup-gradle: caches ~/.gradle on CI -- saves 3-5 minutes per run
"The CI split strategy: run ./gradlew test lintDebug on every PR — fast (2-3 min). Run connectedAndroidTest only on merge to main — slow, needs an emulator. Run bundleRelease + upload only on git tag push — production deployments are intentional events, not automatic per-commit. This prevents slow feedback loops on PRs."
The configuration cache saves the result of the build configuration phase — it skips re-evaluating all build.gradle scripts on subsequent builds. This saves 20-60 seconds per build. But many common patterns break it and must be fixed first.
// Enable in gradle.properties (properties files use # for comments)
org.gradle.configuration-cache=true
# warn instead of fail (during migration)
org.gradle.configuration-cache-problems=warn

// What configuration cache does:
// First build: evaluate all build scripts → cache the task graph
// Subsequent builds: load task graph from cache → skip script evaluation
// Savings: 20-60 seconds on typical projects

// Common patterns that BREAK configuration cache:

// ❌ 1. Accessing Project at execution time
tasks.register("myTask") {
    doLast {
        val version = project.version  // ❌ Project not serialisable
    }
}
// ✅ Fix: capture value at configuration time
val version = project.version  // captured at config time
tasks.register("myTask") {
    doLast { println(version) }  // ✅ uses captured value
}

// ❌ 2. Using System.getenv() inside task action
tasks.register("printEnv") {
    doLast { println(System.getenv("CI")) }  // ❌ env not cached
}
// ✅ Fix: use providers API
val ci = providers.environmentVariable("CI")
tasks.register("printEnv") {
    doLast { println(ci.orNull) }  // orNull is a property, not a function
}

// ❌ 3. Plugins that don't support configuration cache yet
// Check: https://github.com/gradle/gradle/issues → configuration-cache label
// Workaround: use configuration-cache-problems=warn until plugin is updated

// Diagnose compatibility:
// ./gradlew test --configuration-cache
// Generates report: build/reports/configuration-cache/.../report.html
- Configuration cache: caches the build task graph — skips Groovy/Kotlin script evaluation
- Project not serialisable: can't access `project` in task actions — capture values at config time
- Providers API: use providers.environmentVariable() instead of System.getenv() in tasks
- Plugin compatibility: some third-party plugins break config cache — use warn mode during migration
- 20-60s savings: significant on large multi-module projects with many build scripts
"Configuration cache is one of the most impactful Gradle improvements in recent years, but the migration can be painful because it requires fixing all build script patterns that aren't cache-compatible. The strategy: enable with problems=warn, run ./gradlew test, read the HTML report, fix issues one by one. Don't try to fix everything at once."
An application module produces an APK/AAB — it can be installed on a device. A library module produces an AAR (Android Archive) — it provides code and resources to other modules but can't be installed directly.
// Application module (app/build.gradle.kts)
plugins {
    alias(libs.plugins.android.application)  // com.android.application
}
android {
    defaultConfig {
        applicationId = "com.example.app"  // unique, required for application
    }
}
// Output: APK + AAB (installable, publishable)
// Has: applicationId, signing config, split config
// Can: be installed, have dynamic features, use instant apps

// Library module (core/network/build.gradle.kts)
plugins {
    alias(libs.plugins.android.library)  // com.android.library
}
android {
    // NO applicationId — libraries don't have one
    // namespace used for generated R class
    namespace = "com.example.core.network"
}
// Output: AAR (Android Archive) file
// Has: its own resources, assets, manifest, native libs
// Can: be consumed by other modules via implementation(project(":core:network"))

// Key differences:
//                  Application        Library
// Plugin           android.app        android.library
// Output           APK + AAB          AAR
// applicationId    Required           ❌ Not allowed
// Installable      ✅                 ❌
// Publishable      ✅ (Play)          ✅ (Maven)
// R class          app-level          module-level (own namespace)

// Android Test module (com.android.test)
// Third plugin type — for separate Espresso/UI test modules
// Applied to a module that ONLY contains tests, no production code
- Application: produces APK/AAB — the thing users install from Play Store
- Library: produces AAR — reusable code package consumed by other modules
- applicationId: required and unique for application modules — absent in library modules
- namespace: required in library modules — generates the R class for that module's resources
- Publishable: library AARs can be published to Maven for external consumption
"The architecture implication: in a multi-module app, there's exactly one application module (:app) and many library modules (:core:network, :feature:home). The application module is the entry point — it declares the manifest activities, application class, and pulls together all the feature modules. Feature modules never reference :app — dependencies only flow toward :app, never away."
Size regressions sneak in when someone adds a large library without realising its impact. Tracking AAB size in CI and failing builds that exceed a threshold prevents this — catching size increases in the PR that caused them.
// Method 1: Simple CI size check in GitHub Actions
// .github/workflows/size_check.yml
// - run: ./gradlew bundleRelease
// - name: Check AAB size
//   run: |
//     SIZE=$(stat -c%s app/build/outputs/bundle/release/app-release.aab)
//     echo "AAB size: $SIZE bytes"
//     MAX=52428800  # 50MB threshold
//     if [ $SIZE -gt $MAX ]; then echo "❌ AAB too large!"; exit 1; fi

// Method 2: Gradle task for APK size reporting
tasks.register("reportApkSize") {
    dependsOn("assembleRelease")
    doLast {
        val apk = fileTree("${buildDir}/outputs/apk/release")
            .filter { it.name.endsWith(".apk") }
            .first()
        val sizeKb = apk.length() / 1024
        println("APK size: ${sizeKb}KB")
        if (sizeKb > 50_000) error("APK exceeds 50MB — check recent dependency additions")
    }
}

// Method 3: Diffuse — Jake Wharton's AAB/APK diff tool
// Compares two APKs and shows what changed in each section
// diffuse diff old.apk new.apk
// Output:
//   OLD: 32.1 MB   NEW: 35.7 MB   DIFF: +3.6 MB
//   classes.dex: +1.2MB (new library added)
//   res/:        +2.4MB (large PNG assets added)
//   lib/arm64:   +0MB

// Method 4: android-size-report Gradle plugin
// Generates PR comments with size diff
// Run on PR → comment "APK grew by 2.3MB — lib/xyz.so increased"

// What to track:
//   Download size (what user sees on Play Store)
//   Install size (space used on device)
//   Dex method count (65K method limit)
//   classes.dex size (reflects code bloat)
//   res/ and assets/ (reflects image/asset bloat)
- CI size threshold: fail build if AAB exceeds limit — catches regressions in the PR
- Diffuse: shows exactly which section grew — dex, resources, native libs
- PR comments: automated size diff comment on every PR — visible without running the build
- Track download vs install size: different metrics, both matter for user experience
- Method count: 65K dex method limit — multidex needed beyond it, Lint checks this
"Size regressions are invisible without CI tracking. Someone adds a 5MB image library, APK grows from 30MB to 35MB, it ships. Nobody noticed. With a CI size check and a PR comment showing '+5.2MB in res/', the developer sees it immediately and can fix it before merge. Diffuse is the best tool for understanding exactly why the size changed."
R8 Full Mode enables more aggressive optimisations than standard R8 -- class merging, interface removal, constructor argument propagation, and enum unboxing. The result is 5-8% smaller DEX output. The trade-off is that some patterns standard R8 preserved automatically now need explicit keep rules.
// Enable in gradle.properties (enabled by default since AGP 8.0)
android.enableR8.fullMode=true

// Enum toString() -- Full Mode may optimise away the name
enum class Status { PENDING, ACTIVE, DONE }
// Fix: keep enum members if you use .name or .toString()
// -keepclassmembers enum * {
//     public static **[] values();
//     public static ** valueOf(java.lang.String);
// }

// Default interface methods -- Full Mode may remove unused defaults
interface Callback {
    fun onSuccess() {}  // add -keep rule if accessed via reflection
}

// Build and check for warnings
// ./gradlew bundleRelease → R8 warnings in output → address each one
- Full Mode additional optimisations: class merging, interface removal, constructor argument propagation, enum unboxing
- 5-8% smaller DEX than standard R8 -- meaningful for apps already close to a size budget
- Enum .name/.toString() may break: Full Mode can optimise away the string name -- add keep rules if you use enum names via reflection or serialisation
- Enable for new projects from day one -- much easier than retrofitting keep rules into an established app
- Standard R8 vs Full Mode: standard is compatible with older ProGuard tooling; Full Mode is more aggressive and faster
"R8 Full Mode is the 2025-26 recommendation for new projects — enable it from the start along with Kotlin Serialization (which avoids the reflection-based issues that make Full Mode painful). The 5-8% additional size reduction is meaningful at scale, and the extra keep rules needed are minimal if you're using annotation-based code generation rather than reflection."
A build system review catches configuration mistakes that silently slow down builds, bloat APK size, or create security vulnerabilities — before they accumulate into intractable debt.
// 1. ❌ KAPT when KSP is available
kapt(libs.hilt.compiler)  // ❌ slow — should be ksp()
// ✅ ksp(libs.hilt.compiler) — 2x faster

// 2. ❌ api() overused instead of implementation()
api(libs.okhttp)  // ❌ forces all consumer modules to recompile when OkHttp changes
// ✅ implementation(libs.okhttp) — unless consumers genuinely need OkHttp types

// 3. ❌ R8 disabled in release
buildTypes {
    release { isMinifyEnabled = false }  // ❌ no shrinking/obfuscation
}
// ✅ isMinifyEnabled = true + isShrinkResources = true

// 4. ❌ Hardcoded version strings (no Version Catalog)
implementation("com.google.dagger:hilt-android:2.48")  // ❌ scattered, drift-prone
// ✅ implementation(libs.hilt.android) from libs.versions.toml

// 5. ❌ Signing credentials in build files
storePassword = "MySecretPassword"  // ❌ committed to git!
// ✅ Read from local keystore.properties (git-ignored) or env vars

// 6. ❌ targetSdk way behind latest
targetSdk = 30  // ❌ Play Store will reject — must be within 1 year of latest
// ✅ targetSdk = 35 (latest as of 2025)

// 7. ❌ Universal APK without ABI filtering or AAB
splits.abi.isEnable = false  // ❌ shipping arm64 + arm + x86 to everyone
// ✅ Use AAB (Play handles it) or ABI splits for direct distribution

// 8. ❌ No Gradle caching enabled
// gradle.properties missing: org.gradle.caching=true
// ✅ org.gradle.caching=true + org.gradle.parallel=true
//    + kotlin.incremental=true + ksp.incremental=true

// Bonus check: mapping.txt not archived in CI
// ❌ ./gradlew bundleRelease but mapping.txt not saved as artifact
// ✅ Archive mapping.txt as CI artifact, upload to Play Console
- KAPT→KSP: first thing to check — biggest build speed improvement available
- api() overuse: causes recompilation cascades — default to implementation() unless consumers need the dependency's types in their own API
- R8 disabled: production apps must have minification enabled — security and size
- No Version Catalog: scattered version strings cause drift and upgrade pain
- Signing in build files: immediate security issue — credentials in git are compromised
"In a build system review, I check security first (signing credentials), then correctness (targetSdk, R8 enabled), then performance (KAPT vs KSP, caching, api vs implementation). The most impactful fixes: KAPT→KSP saves minutes per build day. Signing credentials in git is a security incident. Missing mapping.txt means you can't debug production crashes."
Gradle models every build step as a task. Tasks declare inputs and outputs — Gradle builds a directed acyclic graph (DAG) of task dependencies and executes only what's needed. Understanding this lets you extend the build and diagnose why tasks run or are skipped.
// See the task graph for a build
// ./gradlew assembleDebug --dry-run → lists tasks without running them
// ./gradlew assembleDebug --scan    → visual task graph in browser

// Tasks run in dependency order:
// preBuild → generateDebugSources → compileDebugKotlin → ...
//   → mergeDebugResources → packageDebugResources → assembleDebug

// Define a custom task with dependencies
tasks.register("printVersionInfo") {
    dependsOn("assembleDebug")  // runs after assembleDebug
    doLast {
        println("Build complete: ${android.defaultConfig.versionName}")
    }
}

// Task inputs and outputs — enable UP-TO-DATE checks
tasks.register<Copy>("copyApk") {
    dependsOn("assembleRelease")
    from("${buildDir}/outputs/apk/release")
    into("${rootDir}/artifacts")
}
// If from and into haven't changed since last run → task is UP-TO-DATE → skipped
// This is how Gradle's incremental build works — it avoids re-running unchanged tasks

// finalizedBy — run a task after another (even on failure)
tasks.named("test") {
    finalizedBy("generateTestReport")  // generate report even if tests fail
}

// mustRunAfter — ordering without hard dependency
tasks.named("lintDebug") {
    mustRunAfter("test")  // if both run, lint goes after test
}
// but lintDebug doesn't force test to run
- DAG: Gradle resolves all task dependencies into an ordered execution graph before running anything
- UP-TO-DATE: if task inputs/outputs unchanged, Gradle skips it — core of incremental builds
- dependsOn: hard dependency — the listed task always runs first
- finalizedBy: runs after, even on failure — useful for cleanup and reporting tasks
- --dry-run: preview which tasks would run without executing them
"UP-TO-DATE checks are why incremental builds are fast. Gradle tracks every task's inputs (source files, config) and outputs (class files, APK). If nothing changed, the task is skipped entirely. When you change one file, only tasks that depend on that file re-run. --scan shows the full task graph and which tasks were UP-TO-DATE vs executed."
Gradle resolves dependencies transitively — if Retrofit depends on OkHttp 4.11, your project gets OkHttp 4.11 even if you didn't declare it. When two paths pull in different versions of the same library, Gradle must pick one — by default it picks the highest version.
// See the full dependency tree
// ./gradlew app:dependencies --configuration releaseRuntimeClasspath
// Output:
// +--- com.squareup.retrofit2:retrofit:2.11.0
// |    \--- com.squareup.okhttp3:okhttp:4.12.0
// +--- com.squareup.okhttp3:okhttp:4.9.0 (*)
// (*) = version conflict, resolved to 4.12.0 (highest wins)

// Force a specific version — override conflict resolution
configurations.all {
    resolutionStrategy {
        force("com.squareup.okhttp3:okhttp:4.12.0")
    }
}

// Exclude a transitive dependency
implementation(libs.retrofit) {
    exclude(group = "com.squareup.okhttp3", module = "okhttp")
}
// Use when: the transitive dep conflicts or you provide your own version

// Detect version conflicts explicitly
configurations.all {
    resolutionStrategy {
        failOnVersionConflict()  // fail build instead of silently picking highest
    }
}

// BOM (Bill of Materials) — align versions across a family
implementation(platform(libs.compose.bom))  // sets versions for all Compose libs
implementation(libs.compose.ui)             // no version needed — BOM controls it
implementation(libs.compose.material3)      // guaranteed compatible version

// Check for dependency updates
// ./gradlew dependencyUpdates (with ben-manes/gradle-versions-plugin)
// Lists which of your dependencies have newer versions available
- Transitive dependencies: Gradle pulls in all of your dependencies' dependencies automatically
- Conflict resolution: when two paths need different versions, Gradle picks the highest by default
- resolutionStrategy.force: pin an exact version — overrides Gradle's default conflict resolution
- exclude: drop a transitive dependency entirely — useful when a library ships a conflicting version
- BOM: aligns a whole family of libraries to known-compatible versions — eliminates version guessing
"The Compose BOM is the best example of why BOMs matter. Compose has 15+ libraries (ui, material3, animation, runtime...) that must all be on compatible versions. Without the BOM you'd manually manage 15 version strings and hope they're compatible. With the BOM, declare one version, all 15 are aligned automatically."
Publishing a library requires configuring the maven-publish plugin, generating sources and docs JARs, and signing artifacts. The process differs slightly between Maven Central (strict, requires signing) and GitHub Packages (simpler, requires GitHub auth).
// library/build.gradle.kts
plugins {
    alias(libs.plugins.android.library)
    id("maven-publish")
    id("signing")
}

android {
    publishing {
        singleVariant("release") {
            withSourcesJar()
            withJavadocJar()
        }
    }
}

afterEvaluate {
    publishing {
        publications {
            create<MavenPublication>("release") {
                from(components["release"])
                groupId = "com.example"
                artifactId = "mylibrary"
                version = "1.0.0"
                pom {
                    name.set("My Library")
                    description.set("A useful Android library")
                    url.set("https://github.com/example/mylibrary")
                    licenses { license { name.set("Apache-2.0") } }
                    developers {
                        developer { name.set("Alice"); email.set("[email protected]") }
                    }
                    scm {
                        connection.set("scm:git:github.com/example/mylibrary.git")
                        url.set("https://github.com/example/mylibrary")
                    }
                }
            }
        }
        // GitHub Packages repository
        repositories {
            maven {
                name = "GitHubPackages"
                url = uri("https://maven.pkg.github.com/example/mylibrary")
                credentials {
                    username = System.getenv("GITHUB_ACTOR")
                    password = System.getenv("GITHUB_TOKEN")
                }
            }
        }
    }

    // Signing — required for Maven Central
    signing {
        val key = System.getenv("GPG_SIGNING_KEY")
        val pwd = System.getenv("GPG_SIGNING_PASSWORD")
        useInMemoryPgpKeys(key, pwd)
        sign(publishing.publications["release"])
    }
}

// Publish:
// ./gradlew publishReleasePublicationToGitHubPackagesRepository
- maven-publish plugin: Gradle's built-in publishing support — generates POM and publishes artifacts
- withSourcesJar + withJavadocJar: required for Maven Central — consumers get IDE source navigation
- POM metadata: name, description, URL, license, developer, SCM — all required for Maven Central
- GitHub Packages: simpler auth via GITHUB_TOKEN — great for private or org-internal libraries
- Signing: GPG signature required for Maven Central — use in-memory key from CI environment variable
"GitHub Packages is the fastest way to share a library within a team — create a private repo, publish to GitHub Packages using the GITHUB_TOKEN, and consumers add your repo as a Maven repository. Maven Central takes more setup (account, GPG key, Sonatype staging) but makes the library available to the whole world without any repo configuration on the consumer side."
Bundles group related libraries that are always added together. Instead of declaring Room runtime, ktx, and compiler separately in every module, you declare a bundle once and reference one name.
# gradle/libs.versions.toml (TOML comments use #)
[versions]
room = "2.6.1"
hilt = "2.51.1"
retrofit = "2.11.0"

[libraries]
room-runtime = { module = "androidx.room:room-runtime", version.ref = "room" }
room-ktx = { module = "androidx.room:room-ktx", version.ref = "room" }
room-compiler = { module = "androidx.room:room-compiler", version.ref = "room" }
hilt-android = { module = "com.google.dagger:hilt-android", version.ref = "hilt" }
hilt-compiler = { module = "com.google.dagger:hilt-compiler", version.ref = "hilt" }
retrofit-core = { module = "com.squareup.retrofit2:retrofit", version.ref = "retrofit" }
retrofit-gson = { module = "com.squareup.retrofit2:converter-gson", version.ref = "retrofit" }
retrofit-scalars = { module = "com.squareup.retrofit2:converter-scalars", version.ref = "retrofit" }

# Declare bundles — groups of libraries added together
[bundles]
room = ["room-runtime", "room-ktx"]            # runtime deps together
retrofit = ["retrofit-core", "retrofit-gson"]  # networking stack

// In build.gradle.kts — one line instead of three
dependencies {
    // Without bundle:
    //   implementation(libs.room.runtime)
    //   implementation(libs.room.ktx)

    // ✅ With bundle:
    implementation(libs.bundles.room)      // room-runtime + room-ktx
    ksp(libs.room.compiler)                // compiler is ksp, not in bundle
    implementation(libs.bundles.retrofit)  // retrofit + gson converter
    implementation(libs.hilt.android)
    ksp(libs.hilt.compiler)
}

// Convention plugin can also use bundles
dependencies {
    add("implementation", libs.bundles.room)
}
- Bundles: named groups in [bundles] section — reference multiple libs with one accessor
- Always-together pattern: Room runtime+ktx, Retrofit+converter — always added as a pair
- Convention plugins: bundles work perfectly in convention plugins for shared module config
- Compiler excluded: annotation processors (ksp/kapt) are declared separately — not in the bundle
- Refactoring: add a lib to a bundle in one place → all modules that use the bundle get it
"Bundles shine in multi-module projects. Your feature convention plugin adds libs.bundles.room for every feature module. When Room ships a new companion library you want in every module, add it to the bundle once — all feature modules get it automatically. Without bundles you'd edit every module's build.gradle.kts individually."
Multi-flavour CI means each build variant (freeIndiaRelease, premiumGlobalRelease) is built and tested independently. GitHub Actions matrix builds parallelise this -- all variants build simultaneously so total CI time equals one variant's time, not the sum of all variants.
// .github/workflows/release.yml -- matrix over flavors
// strategy:
//   matrix:
//     flavor: [FreeIndia, PremiumIndia, FreeGlobal, PremiumGlobal]  # capitalised to slot into task names
// steps:
//   - run: ./gradlew bundle${{ matrix.flavor }}Release
//   - run: ./gradlew test${{ matrix.flavor }}ReleaseUnitTest

// Variant-specific Gradle commands
// ./gradlew assembleFreeIndiaRelease   → APK for free + India + release
// ./gradlew bundlePremiumGlobalRelease → AAB for premium + global + release

// Upload to different Play tracks per flavor (Gradle Play Publisher plugin)
play {
    track.set("internal")  // free → internal, premium → alpha
    serviceAccountCredentials.set(file("play-service-account.json"))
}
- Matrix builds: GitHub Actions matrix strategy runs each flavor in parallel -- 4 flavors build simultaneously, not sequentially
- Variant task naming: Gradle capitalises each dimension -- bundleFreeIndiaRelease, testPremiumGlobalReleaseUnitTest
- Conditional upload: use if: contains(matrix.flavor, 'premium') to route variants to different Play tracks
- Firebase App Distribution: fastest non-Play distribution for internal testing -- wzieba/Firebase-Distribution-Github-Action
- Gradle Play Publisher: automates Play Store uploads from CI -- replaces manual console upload
"Matrix builds are the key insight for multi-flavour CI. Instead of one sequential pipeline that builds each variant one by one, matrix runs them in parallel — 4 variants build simultaneously. Total CI time = time for one variant, not four times that. For 10-minute builds across 4 variants: sequential = 40 minutes, matrix = 10 minutes."
Android's Dalvik Executable (DEX) format uses 16-bit method references — limiting each DEX file to 65,536 methods. Large apps with many libraries exceed this. Multidex splits code across multiple DEX files. R8 makes this largely irrelevant in release builds.
// Error without multidex (large apps):
// "Cannot fit requested classes in a single dex file (# methods: 70000 > 65536)"

// Enable multidex
android {
    defaultConfig {
        multiDexEnabled = true
    }
}
dependencies {
    implementation(libs.androidx.multidex)
}

// Application class (for API < 21)
class MyApp : MultiDexApplication()
// OR
class MyApp : Application() {
    override fun attachBaseContext(base: Context) {
        super.attachBaseContext(base)
        MultiDex.install(this)  // install multidex manually
    }
}

// API 21+ (Android 5.0+): native multidex — no library needed
android {
    defaultConfig {
        minSdk = 21  // ART natively supports multiple dex files
        multiDexEnabled = true
    }
}
// With minSdk 21+, just set multiDexEnabled = true — no library dependency

// Why R8 makes this mostly irrelevant for release builds:
// R8 shrinks your app aggressively — removes unused code
// A release build with 100K methods → R8 removes unused → 30K methods
// Below the 64K limit → single dex → no multidex needed
// Multidex is mainly needed for DEBUG builds (R8 disabled)

// Check method count
// ./gradlew countDebugDexMethods (with dexcount gradle plugin)
// Or: Android Studio → Build → Analyze APK → classes.dex → method count
- 65K limit: DEX format's 16-bit method reference ceiling — ~65,536 methods per file
- multiDexEnabled=true: tells D8/R8 to generate multiple DEX files when needed
- minSdk 21+: ART natively supports multidex — no multidex library needed
- R8 in release: shrinks methods well below 64K for most apps — multidex mainly a debug concern
- Analyze APK: shows method count per DEX file — use to verify you're below the limit
"In 2025 the 64K limit is mostly a solved problem: set minSdk to 21 (covers 99%+ of devices), enable multiDexEnabled=true, and enable R8. Release builds with R8 rarely hit the limit because R8 strips unused code. Debug builds sometimes hit it — that's when multidex actually kicks in, and why you might see slower debug app launch times."
Android selects the best matching resource at runtime based on device qualifiers — screen density, locale, API level, orientation. Providing too many density variants bloats the APK. AAB and ABI splits eliminate this by delivering only the right assets per device.
// Resource qualifier folders:
// res/drawable/         → default (no qualifier)
// res/drawable-mdpi/    → 160dpi (1x)
// res/drawable-hdpi/    → 240dpi (1.5x)
// res/drawable-xhdpi/   → 320dpi (2x)
// res/drawable-xxhdpi/  → 480dpi (3x)  ← most modern phones
// res/drawable-xxxhdpi/ → 640dpi (4x)  ← high-end phones
// res/drawable-nodpi/   → never scaled (used for exact-pixel assets)

// Problem: shipping 5 PNG versions for every icon = 5x the asset size
// Universal APK includes ALL densities — user downloads all even if using only xxhdpi

// Solution 1: Vector drawables — one file for all densities
// res/drawable/ic_arrow.xml → scales to any density at runtime
// ✅ Zero density variants needed
android {
    defaultConfig {
        vectorDrawables.useSupportLibrary = true
    }
}

// Solution 2: AAB density splits — Play delivers only the right density
// Upload AAB → user on xxhdpi gets only xxhdpi resources
// No config needed — Play does it automatically from AAB

// Solution 3: Restrict density for APK direct distribution
android {
    splits {
        density {
            isEnable = true
            reset()
            include("xxhdpi", "xxxhdpi")  // 95%+ of modern devices
            compatibleScreens("normal", "large", "xlarge")
        }
    }
}

// Other useful qualifiers:
// res/values-v26/   → API 26+ only strings/styles
// res/layout-land/  → landscape orientation
// res/values-night/ → dark mode colors
// res/values-en/    → English strings (override defaults)
- Qualifiers: Android picks the closest matching resource folder at runtime — density, locale, API, orientation
- Vector drawables: single XML scales to any density — eliminates 5 PNG variants per icon
- AAB density splits: Play automatically delivers only the matching density to each device
- density splits for APK: manually restrict to xxhdpi+xxxhdpi — covers 95%+ of modern devices
- nodpi qualifier: exact-pixel assets like notification icons — never scaled by the system
"The modern recommendation: use vector drawables for all icons and UI assets (no density folders needed), use WebP for photographs and complex images (one file per image), publish as AAB (Play handles density delivery). Following these three rules eliminates the density bloat problem entirely — you never need to think about mdpi/hdpi/xhdpi/xxhdpi folders again."
Baseline Profiles tell the Android Runtime (ART) which classes and methods to pre-compile ahead of time. Without them, ART JIT-compiles code on first run — causing startup jank. With them, critical code paths are compiled during app install.
// implementation("androidx.profileinstaller:profileinstaller:1.3.1") // androidTestImplementation("androidx.benchmark:benchmark-macro-junit4:1.2.3") // Step 1: Create a Macrobenchmark test to generate the profile @RunWith(AndroidJUnit4::class) class BaselineProfileGenerator { @get:Rule val rule = BaselineProfileRule() @Test fun generateBaselineProfile() { rule.collect (packageName = "com.example.app") { // Describe the critical user journey pressHome() startActivityAndWait() // app launch device.waitForIdle () device.findObject (By.text("Products")). click() // navigate to key screen device. waitForIdle () } // Generates: src/main/baseline-prof.txt } } // Step 2: Baseline profile generated (baseline-prof.txt) // Lcom/example/app/MainActivity; // Lcom/example/app/ProductViewModel; // Lcom/example/repository/ProductRepository; // ... (hundreds of class/method patterns) // Step 3: Include in build (automatic if file exists in src/main/) // AGP bundles baseline-prof.txt into the APK/AAB automatically // Step 4: Verify with Macrobenchmark @Test fun startupBenchmark() { benchmarkRule.measureRepeated(packageName = "com.example.app", metrics = listOf (StartupTimingMetric()), startupMode = StartupMode.COLD, iterations = 5 ) { pressHome(); startActivityAndWait() } // Reports: timeToFullDisplayMs median: 850ms (was 1400ms before profile) } // What baseline profiles improve: // ✅ Cold startup: 30-40% faster (critical code pre-compiled) // ✅ Frame jank on first scroll: reduced (rendering code pre-compiled) // ✅ Works from first launch — no warm-up period needed
- ART JIT problem: without profiles, code is compiled on first use — causes startup jank
- Baseline profile: list of class/method patterns to pre-compile during app install
- BaselineProfileRule: Macrobenchmark API to record which code runs during critical user journeys
- AGP integration: place baseline-prof.txt in src/main/ — AGP bundles it automatically
- 30-40% startup improvement: measured on real devices with Macrobenchmark
"Baseline profiles are one of the highest-impact, lowest-effort build improvements available in 2025. Generate a profile for your app's startup and main navigation flow — it takes 30 minutes to set up and delivers a 30-40% startup improvement for every user, on every install. Google requires them for featured Play Store apps."
AGP is the Gradle plugin that knows how to build Android projects — it defines all the android {} DSL blocks, build types, flavors, and tasks. AGP versions are tightly coupled to Gradle versions, Kotlin versions, and Android Studio versions. Upgrading requires care.
// AGP version in libs.versions.toml
[versions]
agp = "8.7.3"
kotlin = "2.1.0"
# Gradle itself (8.11.1 here) is pinned in gradle/wrapper/gradle-wrapper.properties,
# not in the Version Catalog

[plugins]
android-application = { id = "com.android.application", version.ref = "agp" }
android-library = { id = "com.android.library", version.ref = "agp" }

// Compatibility matrix — MUST check before upgrading
// AGP 8.7 → Gradle 8.9+, Kotlin 1.9+, Studio Meerkat
// AGP 8.6 → Gradle 8.7+, Kotlin 1.9+, Studio Ladybug
// AGP 8.5 → Gradle 8.7+, Kotlin 1.9+, Studio Koala
// Always check: https://developer.android.com/build/releases/gradle-plugin

// Safe upgrade process:
// 1. Check compatibility matrix (AGP ↔ Gradle ↔ Kotlin ↔ Studio)
// 2. Upgrade on a branch
// 3. Fix deprecation warnings before they become errors
//    ./gradlew --warning-mode=all assembleDebug → shows all deprecation warnings
// 4. Run full test suite and lint
// 5. Check migration guide for breaking changes

// Common AGP 8.x breaking changes:
// BuildConfig generation: must now opt-in → buildFeatures { buildConfig = true }
// namespace required: must set android.namespace in every module
// Removed: compile, provided configurations (use implementation, compileOnly)

// Android Studio's AGP Upgrade Assistant:
// Tools → AGP Upgrade Assistant → select target version → preview changes
// Auto-fixes many breaking changes — use it first
- AGP defines the build: all android {} DSL, tasks, and build pipeline are provided by AGP
- Compatibility matrix: AGP version must match supported Gradle and Kotlin versions exactly
- AGP Upgrade Assistant: Android Studio tool that auto-applies migration changes
- --warning-mode=all: surface all deprecation warnings before they become build-breaking errors
- Namespace required: AGP 8.x requires namespace in every module's build.gradle.kts
"Always use Android Studio's AGP Upgrade Assistant (Tools menu) before manually editing versions. It understands the full migration path — adds namespace, migrates deprecated APIs, updates wrapper. Then run --warning-mode=all to find any remaining deprecations. Upgrading AGP without addressing deprecation warnings is how you get a build that breaks 6 months later when the deprecated API is finally removed."
A remote build cache shares task outputs across all developer machines and CI. When Alice builds a module and pushes the output, Bob's machine downloads it instead of rebuilding — saving minutes per build across the whole team.
// Local cache (default) — only on your machine
// org.gradle.caching=true in gradle.properties
// Caches in: ~/.gradle/caches/build-cache/

// Remote cache — shared across machines and CI
// Options:
// 1. Develocity (formerly Gradle Enterprise — paid, most powerful)
// 2. Build Cache Node (self-hosted HTTP cache)
// 3. GitHub Actions Cache + custom setup

// settings.gradle.kts — configure remote cache
buildCache {
    local {
        isEnabled = true
        isPush = true  // also write to local cache
    }
    remote<HttpBuildCache> {
        url = uri("https://cache.example.com/cache/")
        isPush = System.getenv("CI") != null  // only CI pushes to remote
        credentials {
            username = System.getenv("CACHE_USERNAME") ?: ""
            password = System.getenv("CACHE_PASSWORD") ?: ""
        }
        isEnabled = true
    }
}

// Critical: tasks must be cacheable
// Only tasks annotated with @CacheableTask or built-in cached tasks benefit
// Most standard Gradle and Android tasks are cacheable
// Custom tasks: annotate with @CacheableTask + declare @InputFiles / @OutputFiles

// Verify cache hits on CI:
// ./gradlew assembleDebug --build-cache
// Look for "FROM-CACHE" in output — means output was fetched, not built

// Typical improvement:
// CI clean build without cache: 8 minutes
// CI clean build with remote cache + warm cache: 90 seconds
// (all unchanged module outputs fetched from cache)
- Remote cache: shares task outputs across all developer machines and CI runs
- isPush=CI only: developers read from cache, CI writes to it — prevents cache pollution from local WIP
- FROM-CACHE: Gradle prints this for cache hits — verify with --build-cache flag
- @CacheableTask: annotation for custom tasks — declares inputs/outputs so Gradle can cache
- 5-10x speed: warm remote cache can reduce CI builds from 8 minutes to under 90 seconds
"The isPush=CI pattern is important: only CI pushes to the remote cache, developers only pull. This prevents a developer's partial or broken build from poisoning the cache that other developers read. CI builds from a clean state — their outputs are trustworthy. Developer machines may have local modifications that would produce different outputs."
isDebuggable=true enables debugger attachment, allows log reading, and disables security protections. Shipping a debuggable APK to production is a serious security vulnerability — attackers can attach a debugger and inspect the app's memory and network traffic.
// What isDebuggable=true enables:
// ✅ Attach debugger via Android Studio
// ✅ Read Logcat output (adb logcat)
// ✅ Run VM tool commands
// ✅ Bypass some security checks (useful for testing)
// ❌ In production: attacker can read memory, intercept network, bypass checks

android {
    buildTypes {
        debug {
            isDebuggable = true   // ✅ fine for development
        }
        release {
            isDebuggable = false  // ✅ must be false — this is the default
            isMinifyEnabled = true
        }
    }
}

// Security checks that debuggable disables:
// 1. SafetyNet / Play Integrity — detects debuggable APKs
//    Attestation fails if app is debuggable → can't access backend APIs that require attestation
// 2. SSL pinning bypass via debugger:
//    Debuggable app → attach Frida → hook OkHttp → bypass SSL pinning
//    Non-debuggable + root detection: much harder to hook
// 3. Runtime permission checks:
//    Debuggable apps can have permissions granted via adb without user consent

// Verify your release is not debuggable:
// aapt2 dump badging app-release.apk | grep -i debug
// Should NOT show "application-debuggable"

// Detect debuggable at runtime (for extra security):
fun isDebuggable(context: Context): Boolean {
    return context.applicationInfo.flags and ApplicationInfo.FLAG_DEBUGGABLE != 0
}

if (isDebuggable(context) && !BuildConfig.DEBUG) {
    // Should never happen in release — someone may have tampered with the APK
    crashOrAlert()
}
- isDebuggable=false: the default for release — never ship debuggable APKs to production
- Debugger attachment: debuggable apps can have memory inspected and code patched at runtime
- Play Integrity: Google's attestation API rejects debuggable apps — backend APIs that require integrity fail
- Frida/Xposed: popular Android hooking tools work far more easily on debuggable apps
- Runtime check: verify FLAG_DEBUGGABLE at runtime as an extra tamper-detection measure
"The most common mistake: a developer accidentally sets isDebuggable=true in the release build type to diagnose a production issue, then forgets to revert. Verify every release build: 'aapt2 dump badging app-release.apk | grep debuggable' — if it shows anything, the build is compromised. Add this check to your CI release pipeline."
R8 obfuscation renames classes, methods, and fields to single-letter names — making reverse-engineered code extremely hard to understand. The challenge is keeping obfuscation aggressive enough to protect IP while preserving functionality via careful keep rules.
# proguard-rules.pro — obfuscation configuration
# (ProGuard/R8 rule files use # for comments)

# Make class names unpredictable (obfuscation is enabled by default with R8)
-obfuscationdictionary obfuscation-dict.txt        # custom rename dictionary
-classobfuscationdictionary obfuscation-dict.txt
-packageobfuscationdictionary obfuscation-dict.txt

# obfuscation-dict.txt — confusing rename targets
#   O (letter O, looks like 0)
#   l (letter l, looks like 1 or I)
#   I (capital I, looks like l or 1)
# Makes decompiled code: class O { void l(I lI) { I Il = new O(); } }

# What to obfuscate (default — everything not kept):
#   com.example.core.business.logic.** → obfuscated (your IP)
#   com.example.feature.payment.**     → obfuscated (sensitive logic)

# What NOT to obfuscate (must keep readable):

# Data models used with Gson/Retrofit (reflection-based parsing)
-keepclassmembers class com.example.api.models.** { *; }

# Exception handling — keep exception class names for crash reporting
-keepnames class * extends java.lang.Exception

# Native methods — JNI bridges must keep exact names
-keepclasseswithmembernames class * {
    native <methods>;
}

# Serialization — Parcelable, Serializable
-keepclassmembers class * implements android.os.Parcelable {
    public static final android.os.Parcelable$Creator *;
}

# Verify obfuscation strength:
#   jadx-gui app-release.apk → view decompiled code
#   Check: are class names single letters? Are your business logic classes unreadable?
#   If you can read the decompiled logic easily → obfuscation too weak
- Custom dictionary: rename to l/I/O characters — makes decompiled code visually unreadable
- Business logic obfuscated: payment algorithms, content protection, proprietary formulas
- Exception names kept: crash reporting needs readable exception class names for triage
- JNI methods kept: native bridge methods must match exact JNI naming convention
- jadx verification: decompile release APK to verify your IP is actually obfuscated
"Obfuscation slows down attackers but doesn't stop determined ones. Layer it with other protections: obfuscation + root/debugger detection + SSL pinning + Play Integrity attestation. The goal is raising the cost of attack high enough that the effort isn't worth it for most attackers. Always verify by decompiling your own release APK with jadx — see what an attacker sees."
Source sets define where Gradle looks for code, resources, and manifests for each build variant. Every build type and flavor gets its own source set directory — you can override or extend code per variant without if/else BuildConfig checks.
// Source set lookup order (highest priority first):
// 1. src/freeIndiaDebug/  ← full variant (flavor1 + flavor2 + buildType)
// 2. src/freeIndia/       ← flavor1 + flavor2
// 3. src/freeDebug/       ← flavor1 + buildType
// 4. src/indiaDebug/      ← flavor2 + buildType
// 5. src/free/            ← flavor1 dimension
// 6. src/india/           ← flavor2 dimension
// 7. src/debug/           ← build type
// 8. src/main/            ← always included (base)

// Example: different analytics implementation per flavor
// src/main/java/com/example/Analytics.kt    — interface
// src/free/java/com/example/Analytics.kt    — free implementation (basic)
// src/premium/java/com/example/Analytics.kt — premium implementation (full)

// ❌ Without source sets:
class Analytics {
    fun track(event: String) {
        if (BuildConfig.IS_PREMIUM) {
            fullTrack(event)
        } else {
            basicTrack(event)
        }
    }
}

// ✅ With source sets — no if/else needed, cleaner separation:
// src/free/java/Analytics.kt:
class Analytics {
    fun track(event: String) { basicTrack(event) }
}
// src/premium/java/Analytics.kt:
class Analytics {
    fun track(event: String) { fullTrack(event) }
}

// Custom source set configuration
android {
    sourceSets {
        getByName("main") {
            java.srcDirs("src/main/kotlin", "src/generated/kotlin")
            res.srcDirs("src/main/res", "src/main/res-extra")
        }
    }
}

// src/debug/res/values/strings.xml   — adds debug-only strings
// src/release/res/values/strings.xml — overrides prod strings
// Resources merge across source sets — higher-priority set wins on conflict
- Priority order: full variant > flavor combos > individual flavors > build type > main
- Source set substitution: place the same class in flavor source sets — Gradle picks the right one
- No BuildConfig if/else: source sets provide a clean separation — the right code is compiled in
- Resources merge: all resource files from all active source sets are merged — later sets override conflicts
- Custom srcDirs: add generated code directories or split resources across folders
"Source sets eliminate the 'if BuildConfig.IS_PREMIUM' anti-pattern. Each flavor gets its own implementation of a class — the correct one is compiled in, dead code never reaches the APK. Free users don't have premium code in their APK at all. With BuildConfig if/else, both code paths are compiled in — the premium code just never runs."
Large build scripts are a maintenance problem — hard to read, test, and reuse. The refactoring path is: extract reusable config into convention plugins, extract custom task logic into buildSrc or build-logic, and extract utility functions into Gradle extensions.
// Before: 200-line monolithic build.gradle.kts — hard to maintain
// After: 3 clean separations

// 1. Convention plugins (build-logic/) — shared config across modules
//    Already covered in Q12 — AndroidFeaturePlugin, AndroidLibraryPlugin

// 2. Gradle extensions — reusable utility functions
// build-logic/convention/src/main/kotlin/Extensions.kt
fun Project.configureAndroid(extension: CommonExtension<*, *, *, *, *, *>) {
    extension.apply {
        compileSdk = 35
        defaultConfig { minSdk = 24 }
        compileOptions {
            sourceCompatibility = JavaVersion.VERSION_17
            targetCompatibility = JavaVersion.VERSION_17
        }
    }
    tasks.withType<KotlinCompile>().configureEach {
        compilerOptions { jvmTarget.set(JvmTarget.JVM_17) }
    }
}
// Used in AndroidFeaturePlugin:
// configureAndroid(extension)  // apply all shared Android config

// 3. Custom Gradle task classes — complex task logic in its own file
abstract class GenerateChangelogTask : DefaultTask() {
    @get:InputFile
    abstract val rawChangelog: RegularFileProperty

    @get:OutputFile
    abstract val formattedChangelog: RegularFileProperty

    @TaskAction
    fun generate() {
        // complex logic here, not in build.gradle.kts
        val raw = rawChangelog.get().asFile.readText()
        formattedChangelog.get().asFile.writeText(format(raw))
    }
}

// Registered in build.gradle.kts — just 3 lines:
val changelog = tasks.register<GenerateChangelogTask>("generateChangelog") {
    rawChangelog.set(layout.projectDirectory.file("CHANGELOG.md"))
    formattedChangelog.set(layout.buildDirectory.file("changelog.txt"))
}
- Convention plugins: extract shared Android config — compileSdk, Kotlin, Hilt — out of every module
- Extension functions on Project: reusable config blocks called from multiple convention plugins
- Custom task classes: move complex task logic out of build scripts — testable, type-safe, cacheable
- @InputFile/@OutputFile: proper task declaration enables UP-TO-DATE checks and caching
- Abstract task properties: lazy evaluation — file paths resolved at execution time, not configuration
"The refactoring signal: if your build.gradle.kts has if/else, loops, or functions beyond simple declarations, extract them. Convention plugins for shared config, extension functions for utilities, task classes for complex logic. A clean build.gradle.kts is plugins{} + dependencies{} + module-specific overrides only — maybe 20 lines."
Knowing the right Gradle flags saves significant time during development and debugging. These commands go beyond the basics and expose what's really happening in your build.
// ESSENTIAL FLAGS

// Build a specific task
./gradlew :app:assembleDebug        // build only :app module debug
./gradlew :feature:home:testDebug   // test only :feature:home module

// Skip tests (faster iteration)
./gradlew assembleDebug -x test -x lint   // exclude test and lint tasks

// Dry run — see what would run without running it
./gradlew assembleRelease --dry-run

// Performance flags
./gradlew assembleDebug --parallel      // parallel module execution
./gradlew assembleDebug --build-cache   // enable build cache for this run
./gradlew assembleDebug --daemon        // use Gradle daemon (default)
./gradlew assembleDebug --no-daemon     // no daemon (useful for CI debugging)

// DEBUGGING FLAGS
./gradlew assembleDebug --info               // verbose build output
./gradlew assembleDebug --debug              // very verbose (usually too much)
./gradlew assembleDebug --warning-mode=all   // surface all deprecation warnings
./gradlew assembleDebug --stacktrace         // full stack trace on build failure
./gradlew assembleDebug --scan               // upload to Gradle Build Scan (browser report)

// DEPENDENCY INSPECTION
./gradlew app:dependencies                            // full dependency tree
./gradlew app:dependencyInsight --dependency okhttp   // why is okhttp included?
./gradlew app:dependencies --configuration releaseRuntimeClasspath

// TASK INSPECTION
./gradlew tasks                              // list all available tasks
./gradlew tasks --all                        // list ALL tasks including internal
./gradlew :app:assembleDebug --rerun-tasks   // force re-run even if UP-TO-DATE

// CLEAN BUILDS
./gradlew clean assembleDebug   // clean then build
./gradlew clean                 // delete build/ directories only
- Module-specific tasks: :module:task — only builds what you need, much faster
- -x test: skip test task — useful for fast debug builds when tests are slow
- --scan: generates a detailed web report — shows task timeline, cache hits, and bottlenecks
- dependencyInsight: shows exactly why a dependency is in your graph — traces the path
- --rerun-tasks: forces all tasks to run regardless of UP-TO-DATE — useful when debugging caching issues
"Three commands I use daily: (1) ./gradlew :feature:home:assembleDebug — build just the module I'm working on, not the whole app. (2) ./gradlew app:dependencyInsight --dependency okhttp — trace why a dependency version was chosen. (3) ./gradlew assembleDebug --scan — when a build is slow and I need to know exactly which task is the bottleneck."
Gradle supports both Groovy (.gradle files) and Kotlin (.gradle.kts files) for build scripts. Kotlin DSL is now the official recommendation — it offers IDE autocomplete, type safety, refactoring support, and compile-time error detection that Groovy lacks.
// GROOVY DSL — build.gradle (old style)
plugins {
    id 'com.android.application'
    id 'org.jetbrains.kotlin.android'
}
android {
    compileSdk 35
    defaultConfig {
        applicationId "com.example.app"
        minSdk 24
    }
}
dependencies {
    implementation 'androidx.core:core-ktx:1.12.0'  // string literal — typo-prone
}

// KOTLIN DSL — build.gradle.kts (recommended)
plugins {
    alias(libs.plugins.android.application)  // type-safe accessor
    alias(libs.plugins.kotlin.android)
}
android {
    compileSdk = 35  // = required in Kotlin DSL
    defaultConfig {
        applicationId = "com.example.app"
        minSdk = 24
    }
}
dependencies {
    implementation(libs.androidx.core.ktx)  // type-safe, autocomplete works
}

// Key Kotlin DSL advantages:
// ✅ IDE autocomplete — Ctrl+Space shows all valid options
// ✅ Compile-time errors — typos caught when you sync, not at runtime
// ✅ Refactoring support — rename a variable and all references update
// ✅ Navigation — Cmd/Ctrl+Click to jump to type definitions
// ✅ Static type checking — wrong types are build errors

// Gotchas in Kotlin DSL vs Groovy:
// Kotlin requires = for property assignment: compileSdk = 35 (not compileSdk 35)
// String interpolation needs braces: "${buildDir}/outputs" (not $buildDir/outputs)
// Function calls require (): implementation(libs.retrofit) (not implementation libs.retrofit)
- Kotlin DSL: .gradle.kts files — type-safe, IDE autocomplete, compile-time errors
- Groovy DSL: .gradle files — dynamic, no type safety, errors surface at build time
- Google recommendation: Kotlin DSL is the official recommendation since AGP 8.x
- = required: Kotlin DSL uses property setter syntax — compileSdk = 35 not compileSdk 35
- Migration: rename .gradle to .gradle.kts and fix syntax — Android Studio helps automate this
"The practical difference: in Groovy DSL, if you misspell 'compileSdk' as 'compilSdk', the build silently ignores it. In Kotlin DSL, it's a compile error before the build even starts. That single benefit — catching typos at sync time instead of build time — makes Kotlin DSL worth the migration for any active project."
Macrobenchmark measures real user-visible performance — startup time, frame rendering, scroll smoothness — on a real device with release-like code. It's the only way to get accurate performance data because it uses the compiled, optimised app.
// Separate :macrobenchmark module
// build.gradle.kts
plugins {
    alias(libs.plugins.android.test)
}
android {
    targetProjectPath = ":app"
    experimentalProperties["android.experimental.self-instrumenting"] = true
}
dependencies {
    implementation(libs.benchmark.macro.junit4)
}

// app/build.gradle.kts — enable profiling in benchmark builds
android {
    buildTypes {
        create("benchmark") {
            initWith(getByName("release"))
            signingConfig = signingConfigs.getByName("debug")
            // Enable profiling without debuggable
            proguardFiles("benchmark-rules.pro")
        }
    }
}

// Startup benchmark
@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() {
        benchmarkRule.measureRepeated(
            packageName = "com.example.app",
            metrics = listOf(StartupTimingMetric()),
            compilationMode = CompilationMode.Partial(),  // apply the bundled baseline profile
            startupMode = StartupMode.COLD,               // kill process before each run
            iterations = 10
        ) {
            pressHome()
            startActivityAndWait()  // waits until the Activity renders its first frame
        }
        // Results in Android Studio: timeToInitialDisplay, timeToFullDisplay
    }

    @Test
    fun scrollBenchmark() {
        benchmarkRule.measureRepeated(
            packageName = "com.example.app",
            metrics = listOf(FrameTimingMetric()),  // frame rendering metrics
            startupMode = StartupMode.WARM,
            iterations = 5
        ) {
            startActivityAndWait()
            device.findObject(By.res("product_list")).fling(Direction.DOWN)
        }
        // Results: P50/P90/P99 frame times — spot jank
    }
}

// Run: ./gradlew :macrobenchmark:connectedBenchmarkAndroidTest
- Real measurements: runs on device with release-compiled code — microbenchmarks can't measure startup
- Separate module: benchmark module targets :app — doesn't pollute production build
- benchmark build type: release-like (R8 enabled) but allows profiling — accurate and measurable
- StartupTimingMetric: measures time to initial and full display — the numbers users feel
- FrameTimingMetric: measures frame rendering time — P90/P99 reveals jank that P50 hides
"Always use CompilationMode.None() to benchmark without baseline profiles, then CompilationMode.Full() to benchmark with them. The difference between these two numbers is the exact improvement your baseline profile delivers. This gives you a concrete metric: 'Our baseline profile reduced cold startup from 1400ms to 850ms — 39% improvement.'"
API keys embedded in APKs are extractable by anyone with a decompiler. The only truly safe option is to never put secrets in the APK. For keys that must be on-device, use Android Keystore and server-side validation to limit exposure.
// ❌ UNSAFE: hardcoded in source
val apiKey = "sk_live_abc123secret"  // visible in git, decompiled APK

// ❌ UNSAFE: in BuildConfig (extractable from APK)
buildConfigField("String", "API_KEY", "\"sk_live_abc123\"")
// BuildConfig.API_KEY visible in decompiled APK — always

// ❌ UNSAFE: in local.properties (fine for dev, but not a security solution)
// local.properties is git-ignored but the key still ends up in BuildConfig

// ✅ SAFE Option 1: Backend proxy — server holds the secret
// App → your backend → third-party API
// Your backend authenticates calls with the secret key
// App only has a key to your backend (which you control)

// ✅ SAFE Option 2: Remote config — fetch at runtime, don't embed
// Firebase Remote Config: keys fetched at launch, stored in memory
// Revokable: if key is compromised, update Remote Config without an app update

// ✅ SAFE Option 3: Android Keystore — for device-bound secrets
// Generate a key ON device — it never leaves the hardware
// Use for: user data encryption keys, local auth credentials
// NOT for: third-party API keys (those belong on your server)

// ⚠️ SAFER (not safe) BuildConfig pattern: obfuscate the key retrieval
// Build a key from parts + apply XOR — raises the bar vs a plain string
fun getKey(): String {
    val part1 = BuildConfig.KEY_PART1  // "sk_live_"
    val part2 = BuildConfig.KEY_PART2  // "abc123"
    return part1 + part2  // still recoverable by a determined attacker
}

// Real answer: the ONLY safe place for a secret is your server
- BuildConfig secrets: always extractable from decompiled APK — R8 obfuscation doesn't help
- Backend proxy: the only truly safe approach — secret lives on your server, never in the APK
- Remote Config: fetch keys at runtime — revokable without app update if compromised
- Android Keystore: for device-generated keys only — not suitable for third-party API keys
- local.properties: git-ignored so safe for source control, but key still ends up in BuildConfig
"The interview answer: 'No secret in an APK is safe — BuildConfig, strings.xml, native .so — all extractable. The correct architecture: app authenticates with your backend using user credentials, your backend calls third-party APIs with the secret key. The user's JWT is the only credential in the APK, and it's per-user and revokable.'"
Native libraries (.so files) require ABI-specific compilation, JNI bridging, and special build configuration. CMake and the Android NDK integrate with Gradle to compile C/C++ as part of your build.
// Option 1: Pre-compiled .so (most common — using a third-party library)
// Place in: src/main/jniLibs/
// ├── arm64-v8a/libmylibrary.so
// ├── armeabi-v7a/libmylibrary.so
// └── x86_64/libmylibrary.so
// AGP automatically packages these per ABI in splits/AAB

// Filter ABIs to reduce size
android {
    defaultConfig {
        ndk {
            abiFilters += setOf("arm64-v8a", "armeabi-v7a")  // skip x86/x86_64
        }
    }
}

// Option 2: Compile from C/C++ with CMake
android {
    defaultConfig {
        externalNativeBuild {
            cmake {
                cppFlags += "-std=c++17"
                arguments += "-DANDROID_STL=c++_shared"
            }
        }
    }
    externalNativeBuild {
        cmake {
            path = file("src/main/cpp/CMakeLists.txt")
            version = "3.22.1"
        }
    }
}

// JNI bridge — Kotlin calls native
class NativeBridge {
    external fun processImage(pixels: IntArray, width: Int, height: Int): IntArray

    companion object {
        init { System.loadLibrary("mylib") }  // loads libmylib.so
    }
}

// Debug native crashes with ndk-stack
// adb logcat | ndk-stack -sym app/build/intermediates/cmake/debug/obj/arm64-v8a
// Converts native crash addresses to file:line references

// R8 keep rules for JNI
// -keepclasseswithmembernames class * { native <methods>; }
- jniLibs/: pre-compiled .so location — AGP packages the right ABI per device via AAB/splits
- abiFilters: restrict to arm64+arm — eliminates x86/x86_64 .so files from the APK
- CMake integration: externalNativeBuild block — Gradle invokes CMake as part of the build
- external fun: Kotlin/Java JNI bridge — System.loadLibrary loads the .so at runtime
- ndk-stack: convert native crash hexadecimal addresses to readable file:line references
"Native libraries are the #1 cause of large APKs — a single .so may be 5-10MB per ABI. With 4 ABIs (arm64, arm, x86, x86_64) that's 20-40MB just for one library. ABI filter to arm64+arm covers 99% of real devices and cuts native lib size in half. Then publish as AAB and Play delivers only the matching ABI — down to one copy per user."
The ideal build setup is fast, secure, maintainable, and CI-ready from day one. It takes 2-3 days to configure properly and saves hundreds of hours over the project's lifetime.
// ── 1. PROJECT STRUCTURE ─────────────────────────────────
// build-logic/            ← convention plugins
// gradle/
//   libs.versions.toml    ← Version Catalog
//   wrapper/              ← pinned Gradle version
// app/ core/ feature/     ← modules

// ── 2. gradle.properties ─────────────────────────────────
org.gradle.jvmargs=-Xmx4g -XX:+HeapDumpOnOutOfMemoryError
org.gradle.caching=true
org.gradle.parallel=true
org.gradle.configuration-cache=true
kotlin.incremental=true
ksp.incremental=true
android.enableR8.fullMode=true
# faster compile — each module only sees its own R class
android.nonTransitiveRClass=true

// ── 3. CONVENTION PLUGINS ────────────────────────────────
// AndroidApplicationPlugin: compileSdk=35, minSdk=24, signing config, R8
// AndroidLibraryPlugin: same SDK config, no signing, namespace required
// AndroidFeaturePlugin: library + Hilt + Compose + common test deps

// ── 4. RELEASE BUILD TYPE ────────────────────────────────
release {
    isMinifyEnabled = true
    isShrinkResources = true
    isDebuggable = false
    signingConfig = signingConfigs.getByName("release")  // values from env vars, not hardcoded
    proguardFiles(
        getDefaultProguardFile("proguard-android-optimize.txt"),
        "proguard-rules.pro"
    )
}

// ── 5. CI PIPELINE ───────────────────────────────────────
// PR: ./gradlew test lintDebug (fast — 2-3 min)
// Main merge: ./gradlew test lintDebug bundleRelease (with size check)
// Tag push: ./gradlew bundleRelease + upload to Play internal track

// ── 6. EXTRAS ────────────────────────────────────────────
// Baseline profile (src/main/baseline-prof.txt)
// APK size CI check (fail if AAB > threshold)
// Lint baseline (lint-baseline.xml with abortOnError=true)
// Renovate/Dependabot for automated dependency updates
// mapping.txt archived in CI as artifact
- Convention plugins first: shared config before adding any feature modules — retrofitting is painful
- gradle.properties: all performance flags from day one — caching, parallel, config cache
- android.nonTransitiveRClass: each module only sees its own resources — faster compilation
- Lint baseline: enable strict Lint immediately on an empty project — no existing issues to suppress
- Renovate: automated dependency update PRs — keeps libraries current with minimal effort
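The convention-plugin comments above can be made concrete. A minimal sketch of what one plugin in build-logic/ might look like, assuming the structure described in this section (the class name, plugin id, and SDK versions are illustrative, not from the source):

```kotlin
// build-logic/convention/src/main/kotlin/AndroidLibraryConventionPlugin.kt
// Hypothetical sketch: shared config every library module inherits
import com.android.build.gradle.LibraryExtension
import org.gradle.api.Plugin
import org.gradle.api.Project

class AndroidLibraryConventionPlugin : Plugin<Project> {
    override fun apply(target: Project) {
        with(target) {
            pluginManager.apply("com.android.library")
            pluginManager.apply("org.jetbrains.kotlin.android")
            extensions.configure(LibraryExtension::class.java) {
                it.compileSdk = 35           // one place to bump for all modules
                it.defaultConfig.minSdk = 24
            }
        }
    }
}
// Each library module then needs only: plugins { id("myapp.android.library") }
```

The payoff is that SDK bumps, lint config, and test dependencies change in one file instead of in every module's build script.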
"The most expensive build system debt: starting with a monolithic build script, no convention plugins, and Groovy DSL. When the project grows to 10 modules, everything needs refactoring at the worst time. Set up convention plugins, Version Catalog, and Kotlin DSL on day one — it takes 3 hours and saves weeks later. android.nonTransitiveRClass=true is the hidden gem: it makes R class compilation much faster in multi-module projects and catches resource name collisions early."
The namespace in build.gradle.kts defines the package for generated code (BuildConfig, R class). It replaces the package attribute previously set in AndroidManifest.xml. Separating namespace from applicationId allows different package naming for app ID and code generation.
// Before AGP 7.3 — package in AndroidManifest.xml // <manifest package="com.example.app"> → used for both R class and applicationId // ❌ Coupling: can't have different app ID and code package // AGP 7.3+ — namespace in build.gradle.kts (required in AGP 8+) android { namespace = "com.example.app" // package for generated R class and BuildConfig defaultConfig { applicationId = "com.example.app" // unique app ID on Play Store and device } } // ✅ Can now differ: android { namespace = "com.example.core.ui" // library module R class package // No applicationId for library modules — they don't install } // AndroidManifest.xml — no longer needs package attribute // <manifest> ← package removed // <application android:label="@string/app_name"> // Practical importance in multi-module projects: // :core:ui module: namespace = "com.example.core.ui" // :feature:home module: namespace = "com.example.feature.home" // Each gets its own R class: com.example.core.ui.R, com.example.feature.home.R // With nonTransitiveRClass: modules can ONLY reference their own resources // Migration: AGP Upgrade Assistant adds namespace automatically // Or add manually if missing (required for AGP 8.x): // Error without it: "Namespace not specified. Please specify a namespace..." // applicationId vs namespace in flavors: productFlavors { create("free") { applicationIdSuffix = ".free" // changes applicationId → com.example.app.free // namespace stays same → R class unchanged } }
- namespace: defines the package for generated R class and BuildConfig — not the installed app ID
- applicationId: the unique identifier on Play Store and device — can differ from namespace
- Required in AGP 8+: builds fail without namespace in every module
- Per-module namespaces: each library module gets its own R class — prevents resource conflicts
- nonTransitiveRClass: with unique namespaces, each module can only reference its own resources
"The namespace/applicationId split matters for white-label apps: namespace='com.example.app' (code generation, stays constant), applicationId='com.whitelabel.client1' (what the client sees on Play, differs per flavor). The R class always uses the namespace — your code never changes. The applicationId is purely for distribution identity."
LeakCanary automatically detects memory leaks during development and surfaces a clear heap-dump analysis for each one. The key is confining it strictly to debug builds — it has significant overhead and should never ship in production.
// Add ONLY to debug dependency — not in release dependencies { debugImplementation(libs.leakcanary.android) // ✅ debug only — not in release APK // implementation(libs.leakcanary.android) ❌ would be in release! } // LeakCanary requires NO code changes — fully automatic from the library // It hooks into the app lifecycle automatically via ContentProvider // When a leak is detected: notification + detailed trace in the UI // Typical leak it catches: class HomeFragment : Fragment() { private var binding: FragmentHomeBinding? = null override fun onCreateView(...) = FragmentHomeBinding.inflate(inflater).also { binding = it // ❌ binding holds a reference to the View }.root // Missing: override fun onDestroyView() { binding = null } ← leak! // LeakCanary reports: HomeFragment → binding → View tree → LEAK } // Fix: override fun onDestroyView() { super.onDestroyView() binding = null // ✅ release binding reference when view is destroyed } // Configure LeakCanary (optional) class MyApp : Application() { override fun onCreate() { super.onCreate() if (BuildConfig.DEBUG) { LeakCanary.config = LeakCanary.config.copy( retainedVisibleThreshold = 3 // trigger after 3 retained objects ) } } } // In CI — run leak detection as part of automated tests // LeakCanary throws AssertionError in tests when leak detected // ./gradlew connectedDebugAndroidTest → fails if any screen leaks detected
- debugImplementation: the critical build configuration — LeakCanary only in debug builds
- Zero code setup: ContentProvider initialisation is automatic — no Application code needed
- Fragment binding leaks: most common — must nullify binding in onDestroyView()
- CI integration: LeakCanary throws in instrumented tests — automated leak detection in CI
- Heap dump: LeakCanary captures a heap dump and traces the shortest leak path — precise diagnosis
"debugImplementation is the correct configuration for any development-only tool — LeakCanary, Stetho, Flipper. These libraries add significant overhead and their code is completely excluded from the release APK. A common mistake is putting them in implementation — they end up in production builds, slowing down the app and increasing APK size."
By default, a module's R class contains all resources from that module and all its transitive dependencies. With nonTransitiveRClass=true, each module's R class contains only its own resources. This means changing a color in :core:ui no longer forces all 10 feature modules to recompile -- only :core:ui recompiles.
// gradle.properties android.nonTransitiveRClass=true // Before: :feature:home R class contained resources from :core:ui // R.drawable.ic_logo (defined in :core:ui) -- accessible from :feature:home // After: must reference cross-module resources explicitly val logo = com.example.core.ui.R.drawable.ic_logo // explicit module reference // Android Studio auto-migration: Refactor → Migrate to Non-Transitive R Classes // Fixes all reference errors automatically
- Without nonTransitiveRClass: change a color in :core:ui → all 10 feature modules recompile (their R class contains that color)
- With nonTransitiveRClass: change a color in :core:ui → only :core:ui recompiles
- Code change required: cross-module resource references must use the fully-qualified R class name
- Android Studio migration: Refactor → Migrate to Non-Transitive R Classes -- auto-fixes all references
- Combined with Gradle build cache, this can cut incremental build time by 60-80% on large multi-module projects
"nonTransitiveRClass is the least-known build performance option with one of the highest impacts. In a project with 10 feature modules, changing a single color in :core:ui without this flag triggers recompilation of all 10 feature modules (they all have that color in their R class). With this flag, only :core:ui recompiles. Use Android Studio's built-in migration refactoring — it fixes all the code references automatically."
Vulnerable dependencies need immediate action — you can't wait for an upstream fix that may take weeks. The tools are: force a safe version of the transitive dependency, exclude it and declare a patched version directly, or substitute the library entirely.
// Scenario: okhttp 4.9.0 has a CVE. Your library depends on it. // The library hasn't released a fix yet. // OPTION 1: Force a safe version via resolution strategy configurations.all { resolutionStrategy { // Force the patched version everywhere force("com.squareup.okhttp3:okhttp:4.12.0") } } // ✅ Quick fix — one line // ✅ Applies to all transitive dependencies // ⚠️ The library must be compatible with the newer version // OPTION 2: Exclude the vulnerable transitive dep and add safe version directly implementation(libs.some.library) { exclude(group = "com.squareup.okhttp3", module = "okhttp") } implementation("com.squareup.okhttp3:okhttp:4.12.0") // add safe version directly // OPTION 3: Dependency substitution — swap entire library configurations.all { resolutionStrategy { dependencySubstitution { substitute(module("com.vulnerable:library:1.0")) .using(module("com.safe:replacement:2.0")) } } } // OPTION 4: OWASP Dependency Check — detect vulnerabilities in CI // id("org.owasp.dependencycheck") version "9.0.6" dependencyCheck { failBuildOnCVSS = 7.0 // fail CI if any dep has CVSS score >= 7 suppressionFile = "dependency-check-suppressions.xml" // known acceptable CVEs } // ./gradlew dependencyCheckAnalyze → HTML report of all CVEs // Track vulnerabilities automatically: // Renovate/Dependabot: creates PRs when security updates are available // GitHub Dependabot alerts: notifies of known vulnerable dependencies
- resolutionStrategy.force: override any transitive dependency version — fastest temporary fix
- exclude + direct dep: remove vulnerable transitive dep and add safe version directly
- OWASP Dependency Check: CI plugin that fails builds when CVE score exceeds threshold
- Renovate/Dependabot: automated PRs for security updates — catches vulnerabilities before they're exploited
- suppression file: acknowledge acceptable CVEs with justification — prevents false-positive failures
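The failBuildOnCVSS threshold is simple to model. A hypothetical pure-Kotlin sketch of the gate (the data class and scores are illustrative; the real check is performed by the OWASP plugin, and note the threshold is inclusive):

```kotlin
// Hypothetical report entry: a dependency and its worst CVSS score
data class Finding(val dependency: String, val cvss: Double)

// Findings that would fail the build at the given threshold (>= is inclusive)
fun failingFindings(findings: List<Finding>, failBuildOnCVSS: Double = 7.0): List<Finding> =
    findings.filter { it.cvss >= failBuildOnCVSS }

fun main() {
    val report = listOf(
        Finding("com.squareup.okhttp3:okhttp:4.9.0", 7.5),  // hypothetical score
        Finding("com.example:util:1.2", 4.3),
    )
    println(failingFindings(report).map { it.dependency })  // only the 7.5 entry fails
}
```

Entries below the threshold still appear in the HTML report; only those at or above it break CI.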
"resolutionStrategy.force is the emergency fix — apply it immediately when a CVE is reported. Then add OWASP Dependency Check to CI so future vulnerabilities are caught automatically before they reach production. The suppression file is for false positives or accepted risks — every entry should have a comment explaining why it's acceptable and when to reassess."
A messy build system needs a prioritised plan, not a big-bang rewrite. Start with zero-risk wins in gradle.properties, then migrate to Kotlin DSL and Version Catalog, then extract convention plugins. Each phase delivers measurable build time improvement while keeping CI green.
// Month 1: gradle.properties -- zero risk, immediate impact org.gradle.caching=true org.gradle.parallel=true kotlin.incremental=true ksp.incremental=true // Month 1: KAPT → KSP for Hilt and Room (biggest single build win) // Change: kapt(libs.hilt.compiler) → ksp(libs.hilt.compiler) // Month 2: Version Catalog + Kotlin DSL // libs.versions.toml + rename .gradle → .gradle.kts // Month 3: Convention plugins + configuration cache pluginManagement { includeBuild("build-logic") } org.gradle.configuration-cache=true
- Month 1 wins (zero risk): enable caching+parallel in gradle.properties, migrate KAPT→KSP -- saves 30-60s per build immediately
- Profile first with --scan: identify the actual bottleneck before optimising -- don't guess
- Month 2 (reorganise without changing behaviour): Version Catalog + Kotlin DSL -- IDE autocomplete, type safety, no functional change
- Month 3 (structural): convention plugins eliminate build script duplication, configuration cache saves 20-40s per build
- Measure: record clean and incremental build times before and after each phase -- quantify the improvement for stakeholders
"The sequencing matters: Month 1 focuses on zero-risk wins that prove value and build trust. Month 2 reorganises without changing functionality. Month 3 does the structural changes that require the team to adapt. Starting with convention plugins in week 1 would disrupt everyone. Starting with gradle.properties delivers results without disruption, creating support for the harder changes later."
25 questions on memory leaks, overdraw, ANR, startup time, frame rendering, and profiling with Android Studio tools for 2025-26 interviews.
A memory leak occurs when objects are kept in memory after they're no longer needed — the garbage collector can't reclaim them because something still holds a reference. On Android, leaking a Context or Activity is particularly expensive because it drags the entire view hierarchy into memory.
// The most common Android memory leaks: // 1. Static reference to Context or Activity object ImageCache { var context: Context? = null // ❌ static holds Activity forever } // Fix: use Application context, never Activity in static fields object ImageCache { lateinit var appContext: Context // ✅ Application lives as long as the app } // 2. Fragment View Binding not cleared in onDestroyView class HomeFragment : Fragment() { private var binding: FragmentHomeBinding? = null override fun onDestroyView() { super.onDestroyView() binding = null // ✅ must clear — Fragment outlives its View } } // 3. Non-static inner class holding outer class reference class MyActivity : Activity() { inner class MyTask : AsyncTask<...>() { // ❌ inner class holds Activity ref override fun doInBackground(...) { /* long running */ } } } // Fix: use static class + WeakReference, or better: coroutines in ViewModel // 4. Listener registered but never unregistered override fun onResume() { super.onResume() locationManager.requestLocationUpdates(listener) // ❌ never removed } // Fix: unregister in onPause() override fun onPause() { super.onPause() locationManager.removeUpdates(listener) // ✅ } // 5. Coroutine launched in wrong scope class HomeFragment : Fragment() { fun loadData() { GlobalScope.launch { api.fetchData() } // ❌ never cancelled on Fragment destroy } } // Fix: viewLifecycleOwner.lifecycleScope or viewModelScope
- Static Context: leaks the entire Activity view hierarchy — use Application context for singletons
- Fragment binding: Fragment outlives its View — must null the binding in onDestroyView()
- Inner class: non-static inner classes hold an implicit reference to the outer class
- Unregistered listeners: always pair register with unregister in matching lifecycle callbacks
- GlobalScope: coroutines launched in GlobalScope run forever — always use structured scopes
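Leak cause #4 can be demonstrated without Android at all. A pure-Kotlin sketch (EventBus is a hypothetical stand-in for any register/unregister API such as LocationManager): as long as the bus holds the listener, everything the listener captures stays reachable.

```kotlin
// Hypothetical minimal event bus standing in for a register/unregister API
class EventBus {
    private val listeners = mutableListOf<() -> Unit>()
    fun register(l: () -> Unit) { listeners += l }
    fun unregister(l: () -> Unit) { listeners -= l }
    fun listenerCount() = listeners.size
}

class Screen(private val bus: EventBus) {
    // The lambda captures `this` Screen, so the bus transitively retains it
    private val listener: () -> Unit = { /* update UI */ }
    fun onResume() = bus.register(listener)
    fun onPause() = bus.unregister(listener)  // forgetting this leaks the Screen
}

fun main() {
    val bus = EventBus()
    val screen = Screen(bus)
    screen.onResume()
    println(bus.listenerCount())  // 1: the bus now keeps the Screen alive
    screen.onPause()
    println(bus.listenerCount())  // 0: the Screen is collectable again
}
```

The symmetry is the point: every register call in onResume needs a matching unregister in onPause, or the longer-lived object pins the shorter-lived one.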
"When an Activity leaks, you're not just leaking the Activity object — you're leaking everything it references: views, bitmaps, adapters, the entire view hierarchy. A single leaked Activity can retain 10-50MB of memory. On rotation-heavy apps, each rotation creates a new Activity and leaks the old one — heap grows until OOM."
LeakCanary detects leaks automatically by watching objects that should be garbage collected -- if an object is still retained after a GC has been triggered, it captures a heap dump and traces the shortest reference path. The Memory Profiler gives you a live heap graph and lets you capture heap dumps on demand for manual investigation.
// LeakCanary -- zero config, add to debugImplementation // debugImplementation("com.squareup.leakcanary:leakcanary-android:2.13") // LeakCanary trace output (notification + logcat): // HomeFragment ↓ binding (strong ref) // FragmentHomeBinding → View tree (LEAK) // Fix: binding = null in onDestroyView() override fun onDestroyView() { super.onDestroyView() binding = null // Fragment outlives its View -- must release binding reference } // Memory Profiler heap dump workflow // 1. Navigate to screen → rotate 3x → press GC button → Capture Heap Dump // 2. Filter by class: search "Activity" -- 4 instances instead of 1 = leak // 3. Click instance → Retention path shows who is holding the reference
- LeakCanary: zero config -- add to debugImplementation, it hooks in automatically via ContentProvider
- Rotation test: navigate to a screen, rotate 3 times, force GC, capture heap dump -- more instances than expected = leak
- LeakCanary trace: shows the exact reference chain keeping the object alive -- usually 2-3 hops to the root cause
- Memory Profiler: use for leaks LeakCanary misses (non-Activity/Fragment leaks, slow growth leaks)
- Retention path: clicking an instance in the heap dump shows which object is holding a reference -- the fix is always at the nearest strong reference
"The rotation test is the fastest manual leak check: open a screen, rotate the device 5 times, press GC in the profiler, capture a heap dump. Search for your Activity class — if you see 6 instances instead of 1, you have a leak. LeakCanary automates this exact check and shows you exactly which reference chain is keeping the old instances alive."
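LeakCanary's core check (watch an object that should be collectable, trigger GC, see whether it is still retained) can be sketched on the plain JVM with a WeakReference. This is a simplification of the real mechanism, which also dumps and parses the heap:

```kotlin
import java.lang.ref.WeakReference

// True if the object behind `ref` survived GC attempts, i.e. is "retained"
fun isRetainedAfterGc(ref: WeakReference<*>): Boolean {
    repeat(5) {                  // retry a few times: System.gc() is only a hint
        if (ref.get() == null) return false
        System.gc()
        Thread.sleep(50)
    }
    return ref.get() != null
}

fun main() {
    val retained = mutableListOf<Any>()   // stands in for a leaky static field
    val watched = WeakReference(Any().also { retained.add(it) })

    println(isRetainedAfterGc(watched))   // true: the "static field" keeps it alive

    retained.clear()                      // the fix: release the strong reference
    println(isRetainedAfterGc(watched))   // false: now collectable, no leak
}
```

LeakCanary does the equivalent for destroyed Activities and Fragments automatically, then walks the heap dump to show which strong reference (the `retained` list here) is responsible.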
Overdraw happens when a pixel is drawn more than once in the same frame — a background behind a background behind a view. Each overdraw wastes GPU time and can cause frame drops on low-end devices. The GPU has to colour the same pixel multiple times for no visible benefit.
// Enable overdraw visualiser: // Developer Options → Debug GPU Overdraw → Show overdraw areas // Colors: // True color (white) = no overdraw (drawn once) ✅ // Blue = 1x overdraw // Green = 2x overdraw // Pink = 3x overdraw // Red = 4x+ overdraw ❌ problem area // Common overdraw causes and fixes: // 1. Window background + View background + child background // Fix: remove redundant backgrounds // styles.xml: <item name="android:windowBackground">@null</item> // OR remove background from root layout if it matches window bg // 2. Nested layouts each with their own background // <LinearLayout android:background="@color/white"> ← outer bg // <RelativeLayout android:background="@color/white"> ← duplicate // Fix: keep background only on the outermost needed container // 3. Custom View drawing opaque content over previous draws class MyCustomView : View(...) { override fun onDraw(canvas: Canvas) { // Opaque View: clip to its local bounds so nothing is drawn outside them if (isOpaque()) { canvas.clipRect(0, 0, width, height) // clip to view bounds } // draw content } } // 4. Jetpack Compose — overdraw less common but still possible // Compose avoids many overdraw issues by default (no XML view hierarchy) // Still watch for: modifier.background() on multiple nested composables Box(modifier = Modifier.background(Color.White)) { // outer bg Box(modifier = Modifier.background(Color.White)) { // ❌ redundant inner bg Text("Hello") } }
- Overdraw: same pixel drawn multiple times per frame — wastes GPU cycles, causes jank
- Debug GPU Overdraw: Developer Options visualiser — blue=1x, green=2x, red=4x+ (bad)
- Window background: set to @null in theme if your root view has its own background
- Nested backgrounds: only the outermost visible background needed — remove inner duplicates
- Custom Views: use clipRect to tell the system what's opaque — skip drawing underneath
"The window background is the invisible overdraw culprit. Every app has a window background (usually white or the theme color). If your root layout also has a white background, every pixel is drawn twice before a single View is rendered. Set android:windowBackground=@null in your theme when your layout covers the entire window — instant 1-layer overdraw reduction everywhere."
ANR occurs when the main thread is blocked for too long — 5 seconds for user input events, 10 seconds for broadcast receivers, 20 seconds for service operations. Android shows the "app isn't responding" dialog, letting the user wait or kill the app. The fix: never block the main thread.
// ANR triggers: // • 5 seconds: no response to input event (user tap blocked) // • 10 seconds: BroadcastReceiver.onReceive() takes too long // • 20 seconds: Service operations on main thread // Common ANR causes: // 1. Network/DB on main thread override fun onCreate(...) { val data = OkHttpClient().newCall(request).execute() // ❌ blocks main thread val user = db.userDao().getUserBlocking(id) // ❌ Room on main thread } // Fix: coroutines, suspend functions // 2. Holding a lock that another thread holds @Synchronized fun processOnMain() { // ❌ deadlock risk heavyWork() } // 3. Long broadcast receiver class MyReceiver : BroadcastReceiver() { override fun onReceive(ctx: Context, intent: Intent) { doLongWork() // ❌ must complete in 10 seconds } } // Fix: start a foreground service or use goAsync() override fun onReceive(ctx: Context, intent: Intent) { val result = goAsync() // extends time budget CoroutineScope(Dispatchers.IO).launch { doLongWork() result.finish() // must call finish() or ANR still occurs } } // Detect ANR-prone code with StrictMode StrictMode.setThreadPolicy( StrictMode.ThreadPolicy.Builder() .detectAll() // detect all violations .penaltyDeath() // crash in debug — can't ignore .build() ) // Analyse ANR traces: // adb bugreport → extract /data/anr/anr_*.txt // Look for main thread stack trace — shows exactly where it was blocked
- 5-second rule: any input event blocked for 5s triggers ANR — user sees the "Wait/Close" dialog
- Never block main thread: no network, no database, no file I/O, no long computation
- StrictMode: crash the debug build on any main-thread violation — the best prevention tool
- goAsync(): extends broadcast receiver time budget — still must complete and call finish()
- ANR traces: /data/anr/ contains thread dumps — main thread stack shows the blocking call
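The 5-second rule can be illustrated with a toy watchdog on the plain JVM, a sketch of the principle behind Android's ANR detection and third-party tools like ANR-WatchDog (the class and timeouts here are illustrative):

```kotlin
import java.util.concurrent.atomic.AtomicLong

// Toy watchdog: the watched thread must "beat" regularly; if the last beat
// is older than timeoutMs, we declare it blocked (the ANR condition).
class Watchdog(private val timeoutMs: Long) {
    private val lastBeat = AtomicLong(System.currentTimeMillis())
    fun beat() = lastBeat.set(System.currentTimeMillis())
    fun isBlocked() = System.currentTimeMillis() - lastBeat.get() > timeoutMs
}

fun main() {
    val dog = Watchdog(timeoutMs = 200)  // Android's real input timeout is 5000ms
    dog.beat()
    println(dog.isBlocked())             // false: heartbeat is fresh

    Thread.sleep(300)                    // simulate blocking the "main thread"
    println(dog.isBlocked())             // true: no beat within the budget
}
```

Android's system_server applies the same idea: it posts work to the app's main thread and raises an ANR when the response doesn't arrive within the budget.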
"ANR traces are the most useful debugging artefact. When an ANR occurs, Android writes the full thread dump to /data/anr/. The main thread's stack trace shows exactly where it was blocked — 'at java.net.SocketInputStream.read' means a network call on main thread. adb bugreport extracts these traces. Play Console shows ANRs from production users with their traces."
Slow startup is almost always caused by too much work on the main thread during Application.onCreate() or Activity.onCreate(). The fix is a combination of deferring initialisation, lazy loading, and generating a Baseline Profile.
// Step 1: Measure — get the real number // adb shell am start -W com.example.app/.MainActivity // Output: // TotalTime: 3847ms ← cold start duration // WaitTime: 3849ms // Step 2: Profile — Android Studio CPU Profiler // Run → Profile → Method Trace → start app → see flame chart // Identify: which methods take the most time in onCreate? // Step 3: Common culprits and fixes // ❌ Synchronous SDK initialisation in Application.onCreate() class MyApp : Application() { override fun onCreate() { FirebaseApp.initializeApp(this) // 200ms Timber.plant(Timber.DebugTree()) // fast, OK MapsInitializer.initialize(this) // 400ms ❌ analytics.initialize() // 300ms ❌ doesn't need to be eager } } // ✅ Fix: defer non-critical init to background thread override fun onCreate() { Timber.plant(Timber.DebugTree()) // fast — keep synchronous ProcessLifecycleOwner.get().lifecycle .addObserver(LifecycleEventObserver { _, event -> if (event == Lifecycle.Event.ON_START) { MainScope().launch(Dispatchers.IO) { analytics.initialize() // defer to after first frame } } }) } // ✅ App Startup library — order-aware, dependency-tracking // implementation("androidx.startup:startup-runtime:1.1.1") class AnalyticsInitializer : Initializer<Analytics> { override fun create(context: Context): Analytics = Analytics.init(context) override fun dependencies() = emptyList() } // ✅ Baseline Profile — pre-compile critical startup code // Generates: src/main/baseline-prof.txt // Result: JIT warm-up cost eliminated → 30-40% faster cold start
- Measure first: adb shell am start -W gives you the exact cold/warm/hot start times
- CPU Profiler method trace: flame chart shows which methods consume time in onCreate()
- Defer SDK init: move non-critical SDKs to background thread or after first frame
- App Startup library: replaces per-SDK ContentProviders with a single ordered initializer
- Baseline Profile: eliminates JIT warm-up — the single biggest startup improvement available
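Deferring SDK init is just moving cost off the critical path. A pure-Kotlin sketch of the difference, with a simulated 300ms SDK (FakeAnalyticsSdk is invented for illustration; no real library is involved):

```kotlin
// Simulated SDK whose construction takes ~300ms, like a heavy analytics init
class FakeAnalyticsSdk {
    init { Thread.sleep(300) }
    fun track(event: String) = println(event)
}

fun eagerStartupMs(): Long {
    val t0 = System.currentTimeMillis()
    FakeAnalyticsSdk()                     // cost paid before the first frame
    return System.currentTimeMillis() - t0
}

fun deferredStartupMs(): Pair<Long, Lazy<FakeAnalyticsSdk>> {
    val t0 = System.currentTimeMillis()
    val sdk = lazy { FakeAnalyticsSdk() }  // cost deferred to first use
    return (System.currentTimeMillis() - t0) to sdk
}

fun main() {
    println("eager: ${eagerStartupMs()}ms")      // ~300ms on the startup path
    val (ms, sdk) = deferredStartupMs()
    println("deferred: ${ms}ms")                 // near zero: nothing initialised yet
    sdk.value.track("first_event")               // the 300ms is paid here instead
}
```

The total work is identical; what changes is whether the user pays for it while staring at the splash screen or later, after the first frame is already visible.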
"The App Startup library solves an invisible problem: every library that needs early initialization registers a ContentProvider in its AAR. An app with 10 SDKs has 10 ContentProviders all initializing at startup before your Application.onCreate() even runs. App Startup consolidates all of them into one — reducing ContentProvider overhead significantly."
Android must complete a measure → layout → draw → GPU render cycle within 16.67ms to hit 60fps. Any frame exceeding this budget is dropped and the user sees a stutter (jank). The frame is rendered by two threads working together: the main thread records drawing commands, and RenderThread executes them on the GPU.
// 60fps = 16.67ms per frame. 90fps = 11.11ms. 120fps = 8.33ms // Detect jank: Developer Options → Profile GPU Rendering → On screen as bars // Green line = 16ms threshold. Bars above it = dropped frames. // Most common jank causes: // 1. Allocating objects in onDraw() triggers GC pauses private val paint = Paint() // ✅ allocate once as field override fun onDraw(canvas: Canvas) { canvas.drawRect(bounds, paint) // no allocation here } // 2. Expensive work in RecyclerView.onBindViewHolder() override fun onBindViewHolder(holder: VH, pos: Int) { holder.price.text = items[pos].priceFormatted // ✅ pre-formatted in ViewModel }
- 16ms budget: measure + layout + draw + GPU render must all complete within 16.67ms -- exceed it and the frame is dropped
- Profile GPU Rendering: the fastest jank diagnostic -- enable in Developer Options, look for bars above the green 16ms line
- Never allocate in onDraw(): allocating Paint, RectF, Path on every frame triggers GC which pauses all threads
- Pre-format data in ViewModel: date formatting, currency formatting, string building -- do it once on a background thread, not on every bind
- ConstraintLayout over nested LinearLayouts: reduces layout hierarchy depth, fewer measure passes per frame
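The frame-budget arithmetic from this section as a tiny runnable check: given per-frame render times, count the frames that blow the budget, which is exactly the jank the Profile GPU Rendering bars visualise (the trace values below are hypothetical):

```kotlin
// Budget per frame at a given refresh rate: 60fps → ~16.67ms, 120fps → ~8.33ms
fun frameBudgetMs(fps: Int): Double = 1000.0 / fps

// Frames whose render time exceeded the budget: each one is dropped (jank)
fun jankyFrames(frameTimesMs: List<Double>, fps: Int = 60): Int =
    frameTimesMs.count { it > frameBudgetMs(fps) }

fun main() {
    println(frameBudgetMs(60))                       // ≈16.67ms
    val trace = listOf(8.0, 12.5, 22.0, 15.9, 31.4)  // hypothetical frame times (ms)
    println(jankyFrames(trace))                      // 2: the 22.0 and 31.4ms frames
}
```

Note how unforgiving higher refresh rates are: the same 12.5ms frame that is comfortably inside the 60fps budget already misses the 8.33ms budget of a 120Hz display.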
"The most common jank cause in RecyclerView: onBindViewHolder doing synchronous image loading, string formatting, or complex calculations. Everything in onBindViewHolder runs on the main thread during scroll — it must complete in a fraction of the 16ms frame budget. Prepare all data before binding, use async image loading (Coil), and pre-format strings."
Janky RecyclerView scrolling usually comes from expensive onBindViewHolder, deep view hierarchies, or rebinding identical data. The fix combines profiling, ViewHolder optimisation, DiffUtil, and prefetching.
// Step 1: Diagnose — Profile GPU Rendering overlay // Developer Options → Profile GPU Rendering → On screen as bars // Scroll the RecyclerView — watch for bars above green 16ms line // Step 2: CPU Profiler — Method Trace during scroll // Profiler → CPU → Record → scroll RecyclerView → stop // Find: onBindViewHolder() taking > 2ms? That's the problem. // Common fixes: // FIX 1: Move computation out of onBindViewHolder // ❌ Formatting in bind override fun onBindViewHolder(holder: VH, position: Int) { val item = items[position] holder.price.text = NumberFormat.getCurrencyInstance().format(item.price) // ❌ slow } // ✅ Pre-format in the data class or ViewModel data class ProductUiModel(val id: Long, val priceFormatted: String) // formatted once // FIX 2: DiffUtil — only rebind changed items class ProductDiffCallback : DiffUtil.ItemCallback<ProductUiModel>() { override fun areItemsTheSame(o: ProductUiModel, n: ProductUiModel) = o.id == n.id override fun areContentsTheSame(o: ProductUiModel, n: ProductUiModel) = o == n } // ListAdapter uses DiffUtil automatically — only calls onBind for changed items // FIX 3: setHasStableIds — skip full rebind when data source same adapter.setHasStableIds(true) // tell RecyclerView items have unique stable IDs override fun getItemId(position: Int) = items[position].id // FIX 4: RecycledViewPool — share pool across multiple RecyclerViews val sharedPool = RecyclerView.RecycledViewPool() horizontalRv.setRecycledViewPool(sharedPool) verticalRv.setRecycledViewPool(sharedPool) // FIX 5: Prefetch — pre-bind items before they scroll into view val llm = LinearLayoutManager(context) llm.initialPrefetchItemCount = 4 // pre-bind 4 items ahead of scroll recyclerView.layoutManager = llm
- Profile GPU Rendering + CPU Profiler: find whether jank is in layout, bind, or draw phase
- Pre-format data: never format numbers or dates in onBindViewHolder — do it in ViewModel
- ListAdapter + DiffUtil: only rebinds changed items — avoids full notifyDataSetChanged()
- setHasStableIds: RecyclerView skips full rebind when it can match items by ID
- initialPrefetchItemCount: pre-binds items during idle time between frames, before they scroll into view
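The "only rebind what changed" behaviour of ListAdapter can be modelled in plain Kotlin. A hypothetical mini-diff applying the same two checks as DiffUtil.ItemCallback (same item by id, same contents by equality):

```kotlin
data class ProductUiModel(val id: Long, val priceFormatted: String)

// Ids ListAdapter would rebind: same item (id matches) but contents changed
fun idsToRebind(old: List<ProductUiModel>, new: List<ProductUiModel>): List<Long> {
    val oldById = old.associateBy { it.id }          // areItemsTheSame: match by id
    return new
        .filter { n -> oldById[n.id]?.let { it != n } == true }  // areContentsTheSame
        .map { it.id }
}

fun main() {
    val old = (1L..50L).map { ProductUiModel(it, "₹$it") }
    val new = old.map { if (it.id == 7L) it.copy(priceFormatted = "₹99") else it }
    println(idsToRebind(old, new))   // [7]: 1 rebind instead of 50
}
```

This is the "50 bind calls vs 1" claim made concrete: notifyDataSetChanged() rebinds everything, while a diff touches only the items whose contents actually differ. (The real DiffUtil also computes moves and insertions via a Myers-style algorithm; this sketch covers only content changes.)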
"The biggest RecyclerView win: replace notifyDataSetChanged() with ListAdapter. notifyDataSetChanged() forces RecyclerView to rebind every visible item every time — even if only one item changed. ListAdapter with DiffUtil only calls onBindViewHolder for items that actually changed. The difference on a list of 50 items: 50 bind calls vs 1."
Android Studio's CPU Profiler has four recording modes, each answering a different question. Instrumented trace tells you exactly which methods were called. Sampled trace tells you how long each method took without much overhead. System Trace reveals thread scheduling, locks, and frame timing. Callstack Sample is for native code.
// Open: View → Tool Windows → Profiler → CPU → Record // Add custom sections visible in all trace types import androidx.tracing.trace trace("UserRepository.load") { dao.getUsers() // appears as a labelled block in the timeline } // System Trace -- best for jank diagnosis // Shows: vsync signal, frame boundaries, thread scheduling, lock waits // Capture via adb for production-like conditions: // adb shell perfetto -o /data/misc/perfetto-traces/trace -t 5s gfx view sched // adb pull /data/misc/perfetto-traces/trace → open in https://ui.perfetto.dev
- Instrumented trace: records every method entry/exit -- 100% coverage but 2-10x overhead, timing is inaccurate, use for call count analysis
- Sampled trace: captures call stack every 1ms -- low overhead, accurate timing, may miss very short methods -- use to find slow methods
- System Trace: kernel-level -- shows vsync, frame pipeline, thread scheduling, lock waits -- use for jank and startup diagnosis
- Callstack Sample: for native C/C++ code -- samples the native call stack
- Choose based on your question: 'What runs?' → Instrumented. 'How long does X take?' → Sampled. 'Why is this frame slow?' → System Trace
"System Trace is the most powerful and underused profiling mode. It shows: which CPU core each thread ran on, when threads were blocked waiting for locks, the vsync signal and each frame's render timeline, and the green 16ms line. When you see a janky scroll in System Trace, you can see exactly which work pushed the frame over 16ms and on which thread."
Bitmaps are the #1 cause of OOM crashes on Android. A single 12MP photo decoded at full resolution takes 48MB of RAM. The fix is always to use a proper image loading library and never decode bitmaps manually at full size.
// Why Bitmaps are expensive: // 12MP camera image: 4000 × 3000 pixels // Each pixel = 4 bytes (ARGB_8888) // Memory = 4000 × 3000 × 4 = 48,000,000 bytes = 48MB // Show 5 images in a list → 240MB → OOM! // ❌ Never do this: val bitmap = BitmapFactory.decodeFile(path) // decodes at full resolution imageView.setImageBitmap(bitmap) // no recycling, no caching // ✅ Always use Coil or Glide — they handle everything: // implementation("io.coil-kt:coil-compose:2.7.0") // In Compose: AsyncImage( model = ImageRequest.Builder(context) .data(imageUrl) .crossfade(true) .size(800, 600) // ✅ downsample to display size .memoryCachePolicy(CachePolicy.ENABLED) .diskCachePolicy(CachePolicy.ENABLED) .build(), contentDescription = null ) // Coil automatically: // ✅ Decodes at display size, not file size // ✅ Memory cache (LRU) — reuse decoded bitmaps // ✅ Disk cache — avoid re-downloading // ✅ Recycles bitmaps when views are gone // ✅ Respects lifecycle — cancels loads when view is destroyed // If you MUST decode manually — sample down first val options = BitmapFactory.Options().apply { inJustDecodeBounds = true BitmapFactory.decodeFile(path, this) // get dimensions, no pixel data inSampleSize = calculateInSampleSize(this, targetW, targetH) inJustDecodeBounds = false } val sampled = BitmapFactory.decodeFile(path, options) // now at reduced size // Config optimization: RGB_565 instead of ARGB_8888 options.inPreferredConfig = Bitmap.Config.RGB_565 // 2 bytes/pixel vs 4 — no alpha
- Full resolution = OOM: a single 12MP photo decoded at ARGB_8888 = 48MB — decode at display size
- Always use Coil/Glide: they sample down, cache in memory and disk, and cancel on lifecycle
- inSampleSize: halves both dimensions each power of 2 — inSampleSize=4 → 1/16th the pixels
- RGB_565: 2 bytes per pixel instead of 4 — 50% memory saving when alpha not needed
- inJustDecodeBounds: read file dimensions without decoding pixels — measure before decoding
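The snippet above calls `calculateInSampleSize` without defining it. A minimal pure-Kotlin sketch of the standard power-of-two calculation — taking raw pixel dimensions rather than `BitmapFactory.Options` (an assumption made here so it runs anywhere), together with the memory arithmetic from the bullets:

```kotlin
// Hypothetical helper: standard power-of-two sample-size calculation.
// Keeps doubling while the downsampled image still covers the target size.
fun calculateInSampleSize(width: Int, height: Int, reqWidth: Int, reqHeight: Int): Int {
    var inSampleSize = 1
    if (height > reqHeight || width > reqWidth) {
        val halfWidth = width / 2
        val halfHeight = height / 2
        while (halfWidth / inSampleSize >= reqWidth &&
            halfHeight / inSampleSize >= reqHeight
        ) {
            inSampleSize *= 2
        }
    }
    return inSampleSize
}

// Memory cost of a decoded bitmap: width × height × bytes per pixel
fun bitmapBytes(width: Int, height: Int, bytesPerPixel: Int = 4): Long =
    width.toLong() * height * bytesPerPixel
```

For a 4000×3000 photo shown in an 800×600 slot, `calculateInSampleSize(4000, 3000, 800, 600)` returns 4, so the image decodes at 1000×750: 3MB instead of 48MB at ARGB_8888.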
"The interview answer to 'how do you handle Bitmaps': 'I never handle Bitmaps directly — I use Coil. It handles downsampling to display size, LRU memory cache, disk cache, and lifecycle-aware cancellation. Manual Bitmap management in 2025 is a solved problem — using a library is not laziness, it's the correct engineering decision.'"
Cold start creates everything from scratch: a new process, the Application object, and the Activity. Warm start skips process and Application creation. Hot start simply resumes an existing Activity. Cold start is the hardest to optimise and matters most for user perception. TTID is when the first frame is drawn; TTFD is when the content is fully loaded and usable.
```kotlin
// Measure cold start time from the command line:
// adb shell am start -W com.example.app/.MainActivity
// Output: TotalTime: 2840ms (cold start duration)

// Signal when content is ready (feeds the TTFD metric in Play Console).
// Register the observer once in onCreate — observing inside onResume
// would add a duplicate observer on every resume.
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    viewModel.contentReady.observe(this) { ready ->
        if (ready) reportFullyDrawn()   // Play Console records this as TTFD
    }
}

// SplashScreen API — eliminates the white flash at zero extra startup cost
val splash = installSplashScreen()      // must be called before super.onCreate()
splash.setKeepOnScreenCondition { !viewModel.isReady }
```
- Cold start: new process + Application.onCreate() + Activity.onCreate() + first frame -- typically 1-5 seconds, where optimisation matters most
- Warm start: process alive, Activity recreated -- skips Application.onCreate(), typically 300-700ms
- Hot start: Activity resumes from back stack -- just lifecycle callbacks, feels instant at < 100ms
- TTID vs TTFD: TTID is the first frame drawn (layout visible), TTFD is when content is actually loaded -- users perceive TTFD as startup time
- reportFullyDrawn(): tells Play Console when your content is genuinely ready -- without it Play measures TTID which hides slow data loading
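The cold/warm/hot taxonomy above can be encoded as a small decision function. Illustrative only — the system makes this choice, not your code; this just states the rule the bullets describe:

```kotlin
enum class StartType { COLD, WARM, HOT }

// Which start type the user gets, given what survives in memory.
fun classifyStart(processAlive: Boolean, activityInMemory: Boolean): StartType = when {
    !processAlive     -> StartType.COLD  // new process + Application + Activity + first frame
    !activityInMemory -> StartType.WARM  // process alive, Activity recreated
    else              -> StartType.HOT   // Activity resumed from the back stack
}
```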
"reportFullyDrawn() is underused but important for Play Console startup metrics. Without it, Android measures TTID — when the layout is first visible. But your layout may show a spinner for 2 more seconds while data loads. reportFullyDrawn() marks when the content is actually usable — that's the number users perceive as 'startup time'."
The white flash on app launch comes from the window background being rendered before your first Activity frame. The modern solution is the SplashScreen API (Android 12+) with the androidx.core SplashScreen compat library for older devices.
```kotlin
// implementation("androidx.core:core-splashscreen:1.0.1")

// Step 1: Define the splash screen theme — res/values/themes.xml
// <style name="Theme.App.Starting" parent="Theme.SplashScreen">
//     <item name="windowSplashScreenBackground">@color/brand_green</item>
//     <item name="windowSplashScreenAnimatedIcon">@drawable/ic_logo</item>
//     <item name="windowSplashScreenAnimationDuration">500</item>
//     <item name="postSplashScreenTheme">@style/Theme.App</item>
// </style>

// Step 2: Set it as the theme in AndroidManifest.xml
// <activity android:theme="@style/Theme.App.Starting">

// Step 3: Install in MainActivity.onCreate() BEFORE super/setContent
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        val splashScreen = installSplashScreen()   // ← must be first
        super.onCreate(savedInstanceState)

        // Keep the splash visible while loading data
        splashScreen.setKeepOnScreenCondition {
            !viewModel.isDataReady   // true = keep showing the splash
        }

        // Custom exit animation (optional)
        splashScreen.setOnExitAnimationListener { splashScreenView ->
            ObjectAnimator.ofFloat(splashScreenView, View.ALPHA, 1f, 0f).apply {
                duration = 300
                doOnEnd { splashScreenView.remove() }
                start()
            }
        }

        setContent { AppTheme { AppNavigation() } }
    }
}

// How it solves the white flash:
// Old way: window background = white → user sees a white flash
// SplashScreen API: windowSplashScreenBackground = brand color
//   → user sees the brand color immediately (zero flash)
//   → smooth transition to app content
```
- White flash cause: window background renders before first Activity frame — visible gap
- SplashScreen API: sets the brand color as window background — shown instantly before any Java runs
- installSplashScreen: must be called before super.onCreate() — sets up the compat shim
- setKeepOnScreenCondition: hold the splash until async data is ready — prevents content flash
- androidx.core compat: works back to API 23 — same API on all supported versions
"The old DIY splash screen pattern (a dedicated SplashActivity that sleeps for 2 seconds, then starts MainActivity) adds 2 seconds to cold start for zero reason. The SplashScreen API shows the brand logo at ZERO extra cost — it's displayed during the window creation phase that was previously showing a white flash. No extra Activity, no artificial delay."
StrictMode monitors your app at runtime for performance violations — disk reads on the main thread, network calls, memory leaks, and more. It's the easiest way to catch performance anti-patterns before they reach users.
```kotlin
// Enable in Application.onCreate() — debug builds only!
class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        if (BuildConfig.DEBUG) {
            // Thread policy — main-thread violations
            StrictMode.setThreadPolicy(
                StrictMode.ThreadPolicy.Builder()
                    .detectDiskReads()        // file read on main thread
                    .detectDiskWrites()       // file write on main thread
                    .detectNetwork()          // network on main thread
                    .detectCustomSlowCalls()  // calls flagged via noteSlowCall()
                    .penaltyLog()             // log to Logcat
                    .penaltyDeath()           // crash the app — can't ignore
                    .build()
            )
            // VM policy — object/resource violations
            StrictMode.setVmPolicy(
                StrictMode.VmPolicy.Builder()
                    .detectLeakedSqlLiteObjects()       // unclosed Cursors
                    .detectLeakedClosableObjects()      // unclosed streams
                    .detectActivityLeaks()              // leaked Activities
                    .detectLeakedRegistrationObjects()  // unregistered receivers
                    .penaltyLog()
                    .penaltyDeath()
                    .build()
            )
        }
    }
}

// Custom slow call detection — flags this point as a slow-call violation:
StrictMode.noteSlowCall("expensiveOperation")

// Temporarily suppress for an intentional main-thread operation:
val old = StrictMode.allowThreadDiskReads()
try {
    prefs.getString("key", null)   // unavoidable SharedPreferences read
} finally {
    StrictMode.setThreadPolicy(old)
}
```
- ThreadPolicy: catches main-thread disk/network/custom slow calls — the ANR prevention tool
- VmPolicy: catches leaked Cursors, streams, Activities, and unregistered receivers
- penaltyDeath: crashes the debug build — violations can't be ignored or forgotten
- Debug only: StrictMode has overhead — wrap in BuildConfig.DEBUG, never ship to production
- allowThreadDiskReads: temporary bypass for intentional main-thread operations — always restore
"StrictMode with penaltyDeath is the single most effective performance tool for development. It turns invisible performance anti-patterns into crashes that block you from moving on. Every new Android project should have StrictMode enabled from day one — finding a disk read on the main thread on day 100 is much harder than on day 1."
Compose performance issues come from unnecessary recomposition — composables recomposing when their inputs haven't changed. The tools are the Recomposition Counter in Layout Inspector, stability annotations, and derivedStateOf.
```kotlin
// Tool 1: Layout Inspector — Recomposition Counter
// Android Studio → Layout Inspector → Recomposition tab
// Shows how many times each composable recomposed
// Red numbers = recomposing too frequently

// Tool 2: Composition Tracing (Android 12+)
// implementation("androidx.compose.runtime:runtime-tracing:1.0.0")
// Enables Compose function names in Perfetto traces

// COMMON FIX 1: Unstable lambdas causing recomposition
// ❌ New lambda created on every recomposition of the parent
ParentComposable {
    ChildComposable(onClick = { doSomething() })   // new lambda each time!
}
// ✅ Remember the lambda
val onClick = remember { { doSomething() } }
ChildComposable(onClick = onClick)

// COMMON FIX 2: Unstable class — use a data class or @Stable
// ❌ Regular class — Compose can't prove it hasn't changed
class User(val name: String, val score: Int)
// ✅ data class — Compose infers stability via equals()
data class StableUser(val name: String, val score: Int)
// ✅ Or annotate manually when you guarantee stability yourself
@Stable
class AnnotatedUser(val name: String, val score: Int)

// COMMON FIX 3: derivedStateOf — avoid unnecessary recomposition
val listState = rememberLazyListState()
// ❌ Recomposes on EVERY scroll event
val showFabAlways = listState.firstVisibleItemIndex > 0
// ✅ Only recomposes when the boolean VALUE changes
val showFab by remember {
    derivedStateOf { listState.firstVisibleItemIndex > 0 }
}

// COMMON FIX 4: key() in LazyColumn — stable item identity
LazyColumn {
    items(products, key = { it.id }) { product ->   // ✅ stable key
        ProductCard(product)
    }
}
```
- Recomposition Counter: Layout Inspector shows how often each composable recomposes — find hotspots
- Unstable lambdas: new lambda on each recomposition = child always recomposes — use remember
- data class stability: Compose uses equals() for data classes — stable by default
- derivedStateOf: converts frequently-changing state into a derived value — only recomposes on value change
- LazyColumn key: stable keys prevent full-list recomposition when one item changes
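A back-of-the-envelope model of why `derivedStateOf` helps, in plain Kotlin (illustrative only — the Compose runtime does the real work): feed one second of scroll indices through the raw mapping and through the deduplicated mapping, and count recompositions.

```kotlin
// Model: each emitted value triggers a recomposition of the reader.
// Raw state: every index change recomposes.
fun rawRecompositions(scrollIndices: List<Int>): Int = scrollIndices.size

// Derived state: only changes in the derived boolean recompose.
fun derivedRecompositions(scrollIndices: List<Int>): Int {
    val flags = scrollIndices.map { it > 0 }   // the derivedStateOf { index > 0 } mapping
    var count = if (flags.isEmpty()) 0 else 1  // initial composition
    for (i in 1 until flags.size) {
        if (flags[i] != flags[i - 1]) count++  // recompose only when the value flips
    }
    return count
}
```

For indices 0..59 (60 scroll events in one second), the raw version recomposes 60 times; the derived version twice — the initial composition plus the single false→true flip.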
"derivedStateOf is the most impactful Compose performance fix for scrolling UIs. A LazyColumn with 1000 items firing a recomposition on every scroll pixel is common — listState.firstVisibleItemIndex changes 60 times per second. Wrapping the FAB visibility in derivedStateOf means the FAB composable only recomposes when it actually needs to show or hide — twice per interaction instead of 60 times per second."
Media performance class is an API introduced in Android 12 (readable via Build.VERSION.MEDIA_PERFORMANCE_CLASS) that places devices into standardised capability tiers. You can query the tier and adapt your app's feature set: higher quality effects on flagship phones, simpler UI on low-end devices.
```kotlin
// Query the device performance class
val perfClass = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
    Build.VERSION.MEDIA_PERFORMANCE_CLASS   // readable from API 31+
} else {
    0   // not declared — treat as low-end
}

// Performance class values:
// 0  = no class declared (older / low-end devices)
// 30 = meets the Android 11 (R) performance class requirements
// 31 = meets the Android 12 (S) performance class requirements
// 33 = meets the Android 13 (T) performance class requirements

// Adapt features based on performance class
when {
    perfClass >= 33 -> {
        // Flagship — enable all effects
        enableBlurEffects()
        enableParticleAnimations()
        useHighResAssets()
        videoPlayer.setQuality(Quality.HD_1080p)
    }
    perfClass >= 30 -> {
        // Mid-range — balanced
        enableBasicAnimations()
        videoPlayer.setQuality(Quality.HD_720p)
    }
    else -> {
        // Low-end — minimal effects
        disableAnimations()
        videoPlayer.setQuality(Quality.SD_480p)
    }
}

// Alternative: check RAM and CPU cores directly
val am = context.getSystemService(ActivityManager::class.java)
val memInfo = ActivityManager.MemoryInfo()
am.getMemoryInfo(memInfo)
val hasLowRam = memInfo.totalMem < 2L * 1024 * 1024 * 1024   // < 2GB RAM
val cpuCores = Runtime.getRuntime().availableProcessors()

// isLowRamDevice() — system-level flag for Go edition and very old phones
val isLowRamDevice = am.isLowRamDevice()
```
- PerformanceClass: standardised device tier — no more RAM/CPU heuristics
- Adaptive features: blur effects, particle animations, video quality scaled to device capability
- isLowRamDevice(): system-set flag for Go and very low-end devices — disable heavy features entirely
- Memory check: totalMem < 2GB → reduce texture quality, limit concurrent loads
- Available processors: more cores → more background parallelism is safe
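The two signals above can be combined into a single tiering decision. A hypothetical helper, sketched to match the `when` ladder in the snippet — `isLowRamDevice()` wins, then the media performance class decides the tier:

```kotlin
enum class DeviceTier { LOW, MID, HIGH }

// Combine the system's low-RAM flag with the media performance class.
fun deviceTier(perfClass: Int, isLowRamDevice: Boolean): DeviceTier = when {
    isLowRamDevice  -> DeviceTier.LOW   // Go edition / very low-end: be aggressive
    perfClass >= 33 -> DeviceTier.HIGH  // Android 13 performance class
    perfClass >= 30 -> DeviceTier.MID
    else            -> DeviceTier.LOW   // no class declared: assume low-end
}
```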
"PerformanceClass is the right way to segment devices in 2025 — better than RAM thresholds or brand heuristics. A MediaPerformanceClass of 33 guarantees specific camera, codec, and RAM capabilities. Use it for media-heavy features. Use isLowRamDevice() for the most aggressive low-end optimisation — those devices have 512MB-1GB RAM and can't run many things concurrently."
Microbenchmark measures the performance of small units of code — JSON parsing, sorting algorithms, regex matching, custom view drawing. Unlike System.currentTimeMillis(), it handles JIT warmup, garbage collection, and clock accuracy automatically.
```kotlin
// androidTestImplementation("androidx.benchmark:benchmark-junit4:1.2.3")

// Benchmark module build.gradle.kts — CRITICAL config
android {
    defaultConfig {
        testInstrumentationRunnerArguments["androidx.benchmark.suppressErrors"] = "EMULATOR"
    }
}

// The benchmark test
@RunWith(AndroidJUnit4::class)
class JsonParsingBenchmark {
    @get:Rule
    val benchmarkRule = BenchmarkRule()

    private val json = """{"id":"1","name":"Alice","email":"[email protected]"}"""

    @Test
    fun gsonParsing() {
        benchmarkRule.measureRepeated {
            Gson().fromJson(json, User::class.java)
        }
    }

    @Test
    fun kotlinSerializationParsing() {
        benchmarkRule.measureRepeated {
            Json.decodeFromString<User>(json)
        }
    }
    // Results (example):
    // gsonParsing:                 42,310 ns/op
    // kotlinSerializationParsing:  18,240 ns/op  ← 2.3x faster
}

// BenchmarkRule.measureRepeated handles:
// ✅ JIT warmup: runs the code until JIT-compiled (stable timing)
// ✅ Multiple iterations: statistical accuracy
// ✅ GC pauses: excluded from timing
// ✅ Clock precision: nanosecond accuracy

// Run benchmarks:
// ./gradlew :benchmark:connectedAndroidTest
// Results in: build/outputs/connected_android_test_additional_output/
// JSON report with min/median/max times

// runWithTimingDisabled — exclude setup from timing
benchmarkRule.measureRepeated {
    val data = runWithTimingDisabled { loadTestData() }  // setup not counted
    processData(data)                                    // only this is measured
}
```
- BenchmarkRule: handles JIT warmup, GC exclusion, and statistical accuracy automatically
- nanosecond precision: far more accurate than System.currentTimeMillis() for small operations
- runWithTimingDisabled: exclude test setup from measurements — only measure the target code
- Real device required: benchmarks on emulators are unreliable — run on physical device
- Use case: compare implementations, verify performance doesn't regress across releases
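A toy model of the statistics `BenchmarkRule` applies, in plain Kotlin (illustrative only — the real library does far more): drop the warmup samples, then report the median so outliers such as GC pauses don't skew the result.

```kotlin
// Drop the first `warmup` samples (JIT still compiling), then take the
// median of the steady-state runs — robust against a single GC outlier.
fun benchmarkMedianNs(samples: List<Long>, warmup: Int): Long {
    val measured = samples.drop(warmup).sorted()
    require(measured.isNotEmpty()) { "need samples beyond warmup" }
    return measured[measured.size / 2]
}
```

Given ten samples where the first three are slow warmup runs and one later sample contains a GC pause, the median of the remaining runs stays stable while the mean would not.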
"Microbenchmark is how you make data-driven performance decisions. 'Kotlin Serialization is faster than Gson' is a claim — measuring both with BenchmarkRule turns it into a fact: '2.3x faster, 42µs vs 18µs per parse'. Use it when choosing between implementations, and add benchmark tests for performance-critical code so regressions are caught automatically in CI."
Layout inflation parses XML and instantiates View objects reflectively — it's one of the most expensive operations in the UI lifecycle. Reducing hierarchy depth, using ViewStub for conditional layouts, and async inflation all reduce its cost.
```kotlin
// What makes layout inflation slow:
// 1. Deep nested hierarchy — each level multiplies measure passes
// 2. Many Views — each requires reflection-based instantiation
// 3. Large drawables resolved during inflation
// 4. Complex ConstraintLayout with many constraints

// FIX 1: Flatten the hierarchy with ConstraintLayout
// ❌ 4 nested LinearLayouts (each forces an extra measure pass)
// ✅ Single ConstraintLayout with all views (one measure pass)

// FIX 2: ViewStub — defer inflation of rarely-shown views
// <ViewStub android:id="@+id/error_stub"
//     android:layout="@layout/error_view"
//     android:inflatedId="@+id/error_view" />
// Inflated only when needed:
val stub = findViewById<ViewStub>(R.id.error_stub)
stub.inflate()   // ← only called when an error occurs
// ✅ error_view XML is NOT parsed during activity startup

// FIX 3: AsyncLayoutInflater — inflate on a background thread
// implementation("androidx.asynclayoutinflater:asynclayoutinflater:1.0.0")
AsyncLayoutInflater(context).inflate(
    R.layout.fragment_home, parentView
) { view, resId, parent ->
    parent?.addView(view)   // callback runs on the main thread
}
// ✅ XML parsing happens on a background thread
// ✅ Main thread not blocked during inflation
// ❌ Views are still added to the hierarchy on the main thread

// FIX 4: Jetpack Compose — no XML inflation at all
// Compose compiles to direct function calls — no reflection, no XML parsing

// Measure inflation time:
Trace.beginSection("HomeFragment.inflate")
val view = layoutInflater.inflate(R.layout.fragment_home, parent, false)
Trace.endSection()
// Visible in a Perfetto trace — compare before/after optimisations
```
- Nested hierarchies: each level multiplies layout measurement passes — flatten with ConstraintLayout
- ViewStub: placeholder that defers inflation until needed — error views, empty states
- AsyncLayoutInflater: parse XML on background thread — main thread unblocked during inflation
- Compose: no XML parsing or reflection — compiled directly to function calls, faster to start
- Trace.beginSection: label inflation in Perfetto — measure improvement before/after changes
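The "each level multiplies measure passes" claim can be made concrete with a rough cost model. This is a sketch under a simplifying assumption: a parent that double-measures its children (as a `LinearLayout` with `layout_weight` can) multiplies the passes at every nesting level, so cost grows exponentially with depth.

```kotlin
// Rough model: passesPerLevel measure passes per nesting level.
fun measurePasses(depth: Int, passesPerLevel: Int = 2): Int {
    var passes = 1
    repeat(depth) { passes *= passesPerLevel }
    return passes
}
```

Four nested double-measuring layouts give 16 measure passes for the leaf views; a single flat `ConstraintLayout` keeps this close to one.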
"ViewStub is underused. An Activity with an error state, an empty state, and a loading state — all three layouts inflated at startup even though only one is ever shown. With ViewStub: only the initial layout inflates — error and empty stubs inflate on demand. For a complex error screen with 10 views, that's 10 View objects not created until actually needed."
Excessive object allocation triggers the garbage collector frequently, which pauses all threads and causes frame drops. The solution is to identify allocation hotspots in the profiler and eliminate allocations in performance-critical paths.
```kotlin
// Detect GC pressure:
// Logcat filter: "GC_" or "Dalvik" or "art"
// "GC freed 1523K in 12ms" → GC running frequently → allocation hotspot

// Memory Profiler — Allocation Recording
// Profiler → Memory → Record (Java/Kotlin allocations) → scroll the list → stop
// Sort by "Count" → find which objects are created most
// Sort by "Size"  → find which objects consume the most memory

// Common allocation hotspots:

// 1. Boxing primitives in hot paths
val boxed: List<Int> = listOf(1, 2, 3)        // ❌ boxes each Int to an Integer object
val primitive: IntArray = intArrayOf(1, 2, 3) // ✅ no boxing

// 2. String concatenation in loops
var result = ""
for (item in items) {
    result += item.name    // ❌ creates a new String object every iteration
}
val sb = StringBuilder()
for (item in items) {
    sb.append(item.name)   // ✅ single buffer, appends in place
}

// 3. Lambda closures creating objects
fun doForEach(items: List<Int>) {
    items.forEach { value -> process(value) }  // ❌ closure object per call
}
// In extremely hot paths: use a plain for loop instead of forEach

// 4. Object pools for frequently-created objects
object RectPool {
    private val pool = mutableListOf<RectF>()
    fun acquire(): RectF =
        if (pool.isNotEmpty()) pool.removeAt(pool.lastIndex) else RectF()
    fun release(rect: RectF) { rect.setEmpty(); pool.add(rect) }
}
// Android itself ships Pools.SimplePool for this pattern

// In a custom View's onDraw — reuse objects
private val tempRect = RectF()   // ✅ allocated once, reused every frame
private val paint = Paint()      // ✅ allocated once
```
- GC log detection: "GC freed Xms in Yms" in Logcat — frequent entries = allocation hotspot
- Allocation recording: Memory Profiler shows which object types are created most frequently
- IntArray vs List<Int>: avoids boxing primitives to Integer objects in performance-critical code
- StringBuilder: reuse a single buffer for string concatenation — avoid N string objects
- Object reuse: allocate once as class fields; reset and reuse in onDraw, tight loops, game loops
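The object-pool pattern above in a self-contained, generic form — a plain-Kotlin sketch of the same idea as Android's `Pools.SimplePool`, reusable for any type:

```kotlin
// Minimal generic object pool: reuse instances instead of allocating.
class ObjectPool<T>(
    private val factory: () -> T,     // creates a new instance when the pool is empty
    private val reset: (T) -> Unit,   // clears state before an instance is reused
) {
    private val pool = ArrayDeque<T>()

    fun acquire(): T = pool.removeLastOrNull() ?: factory()

    fun release(obj: T) {
        reset(obj)
        pool.addLast(obj)
    }
}
```

Usage: a pool of `StringBuilder`s hands the same cleared instance back out after release, instead of allocating a fresh one per use.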
"The golden rule for custom View performance: allocate ZERO objects in onDraw(). No Paint(), no RectF(), no String formatting. Allocate all objects as class-level fields in the View constructor and reuse them on every draw call. The garbage collector runs on the main thread — every GC pause is a potential dropped frame."
Play Console's Android Vitals tracks real user startup times across your entire user base — broken down by cold/warm/hot start, device tier, and country. This gives you production data that no local profiling can match.
```kotlin
// Android Vitals → Core Vitals → App Startup
// Metrics shown:
// • Slow cold start rate: % of cold starts taking > 5 seconds
// • Slow warm start rate: % of warm starts taking > 2 seconds
// • Slow hot start rate:  % of hot starts taking > 1.5 seconds
// Google's "bad behaviour" thresholds — exceeding them affects Play Store ranking

// Segment data by:
// • Device tier (low-end vs flagship)
// • Android version
// • Country
// • App version (before/after a release)

// Custom startup metrics via Firebase Performance
// implementation("com.google.firebase:firebase-perf-ktx")
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val trace = Firebase.performance.newTrace("app_startup_custom")
        trace.start()
        setContent { AppTheme { AppContent() } }

        // Stop the trace when content is actually ready
        viewModel.contentReady.observe(this) { ready ->
            if (ready) {
                trace.stop()         // custom TTFD in the Firebase dashboard
                reportFullyDrawn()   // signals Play Console: content is visible
            }
        }
    }
}

// Track improvement after a baseline profile deploy:
// Play Console → App Startup → filter by version
// Before v2.1.0: slow cold start rate = 15%
// After  v2.1.0: slow cold start rate = 4%  ← baseline profile effect
```
- Play Console Android Vitals: real user startup data — thousands of devices, real conditions
- Bad behaviour thresholds: >5s cold start affects Play Store ranking — monitor actively
- Segment by version: compare before/after a release — validate that your fixes actually helped users
- Firebase Performance custom traces: measure TTFD with content-ready granularity
- reportFullyDrawn(): signals Play Console when content is genuinely ready — not just first frame
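The slow-start-rate aggregation Play Console performs can be replicated as a pure function over your own startup logs — a sketch assuming you record cold-start durations yourself:

```kotlin
// Fraction of cold starts exceeding the threshold (Play's default: 5s).
fun slowColdStartRate(coldStartMs: List<Long>, thresholdMs: Long = 5_000): Double {
    if (coldStartMs.isEmpty()) return 0.0
    return coldStartMs.count { it > thresholdMs }.toDouble() / coldStartMs.size
}
```

Twenty recorded cold starts with three over five seconds gives a 15% slow rate — under the 25% "bad behaviour" threshold, but worth watching per device tier.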
"Production startup data from Play Console often reveals surprises. Low-end device users (which may be your largest segment in India and SE Asia) can have 3x worse startup times than your test device. The Android Vitals data segmented by device tier shows the real picture — where baseline profiles and deferred initialisation matter most."
Thread contention happens when the main thread waits to acquire a lock held by a background thread — or when background threads compete for shared resources. It's invisible in normal profiling but shows up in System Trace as "lock wait" blocks.
```kotlin
// Detect in System Trace (Perfetto):
// Main thread shows orange "sleeping" blocks → waiting for a lock
// Expand the thread row → see "monitor-lock" events

// Common contention scenarios:

// 1. Main thread calling a @Synchronized function
object Cache {
    private val data = HashMap<String, String>()
    @Synchronized
    fun get(key: String): String? = data[key]   // expensive lookup under a lock
}
// Main thread calls Cache.get() → IO thread holds the lock → main thread waits → jank

// Fix: use a concurrent data structure instead of synchronized
private val cache = ConcurrentHashMap<String, String>()
fun get(key: String) = cache[key]   // ✅ lock-free reads with ConcurrentHashMap

// 2. Mutex in coroutines — don't hold it across suspension points
private val mutex = Mutex()

suspend fun badUpdate(key: String) {
    mutex.withLock {
        cache[key] = expensiveNetworkCall()   // ❌ other coroutines blocked during the call
    }
}

suspend fun goodUpdate(key: String) {
    val result = expensiveNetworkCall()       // outside the mutex
    mutex.withLock { cache[key] = result }    // ✅ mutex only for the state write
}

// 3. Actor pattern — serialise access without locks
// (scope: a CoroutineScope you own)
val counterActor = scope.actor<Int>(Dispatchers.Default) {
    var count = 0
    for (delta in channel) { count += delta }
}
// A single coroutine owns the state — no lock needed

// 4. StateFlow replaces synchronized shared mutable state
private val _state = MutableStateFlow(AppState())
// All writes go through update {}, which is thread-safe: atomic CAS, no locks
_state.update { it.copy(isLoading = true) }
```
- System Trace detection: orange "sleeping" blocks on main thread = waiting for a lock
- ConcurrentHashMap: lock-free reads in most cases — replace synchronized HashMap for caches
- Mutex scope: hold Mutex only for state mutation, not during the work that produces the value
- Actor pattern: single coroutine owns mutable state — no locks needed for sequential access
- StateFlow.update: atomic compare-and-set — thread-safe state updates without manual locking
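The compare-and-set loop behind `StateFlow.update` can be sketched with `AtomicReference` in plain Kotlin — an illustration of the lock-free retry pattern, not the actual kotlinx.coroutines implementation:

```kotlin
import java.util.concurrent.atomic.AtomicReference

// Retry a compare-and-set loop instead of taking a lock: if another
// thread wins the race, recompute from the fresh value and try again.
fun <T> AtomicReference<T>.updateCas(transform: (T) -> T): T {
    while (true) {
        val current = get()
        val next = transform(current)
        if (compareAndSet(current, next)) return next   // succeeded without blocking
        // lost the race — loop and retry
    }
}
```

No thread ever sleeps waiting for a lock here; contention costs a retry, not a main-thread stall.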
"Thread contention is the hardest performance bug to diagnose — it doesn't show up as slow code in method traces, it shows as 'sleeping' on the main thread. Perfetto/System Trace is the only tool that makes this visible. Once identified, the fix is usually: (1) use lock-free data structures, (2) shrink the critical section, or (3) move the lock out of the main thread's call path entirely."
Battery drain on mobile is primarily caused by radio wakeups (network and GPS), CPU wake locks, and frequent sensor polling. Writing battery-efficient code means batching network requests, using JobScheduler/WorkManager, and avoiding persistent wake locks.
```kotlin
// Battery drain causes (in order of impact):
// 1. Cellular radio wakeups (expensive — ~20-second tail time)
// 2. GPS polling
// 3. Partial wake locks held too long
// 4. Frequent alarm firing
// 5. Excessive CPU usage in the background

// FIX 1: Batch network requests — reduce radio wakeups
// Instead of: an analytics event per user action (10 radio wakeups)
// Use: a WorkManager batched upload every 30 min (1 radio wakeup)
val workRequest = PeriodicWorkRequestBuilder<AnalyticsWorker>(30, TimeUnit.MINUTES)
    .setConstraints(
        Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    )
    .build()

// FIX 2: WorkManager constraints — run only when conditions are met
val constraints = Constraints.Builder()
    .setRequiredNetworkType(NetworkType.UNMETERED)  // WiFi only
    .setRequiresCharging(true)                      // only when charging
    .setRequiresBatteryNotLow(true)                 // not when battery is low
    .build()

// FIX 3: Doze mode compatibility
// Android 6+ puts the device in Doze when screen off + stationary + unplugged
// Doze blocks: network, wake locks, alarms (except setExactAndAllowWhileIdle)
// WorkManager handles Doze automatically — don't use raw AlarmManager for periodic work

// FIX 4: GPS — use the lowest acceptable accuracy
val request = LocationRequest.create().apply {
    priority = Priority.PRIORITY_BALANCED_POWER_ACCURACY  // city-level, uses WiFi/cell
    interval = 60_000                                     // every 60s, not every second
}
// PRIORITY_HIGH_ACCURACY uses the GPS chip — roughly 10x more power than BALANCED

// FIX 5: Query battery status — defer heavy work when not charging
val batteryStatus = IntentFilter(Intent.ACTION_BATTERY_CHANGED).let { filter ->
    context.registerReceiver(null, filter)   // sticky broadcast, no receiver needed
}
val plugged = batteryStatus?.getIntExtra(BatteryManager.EXTRA_PLUGGED, -1) ?: -1
val isCharging = plugged > 0   // 0 = on battery, -1 = unknown
```
- Radio tail time: cellular radio stays awake 20s after a network call — batch to minimise wakeups
- WorkManager constraints: declare when work should run — system defers to optimal time
- Doze mode: system blocks background activity — WorkManager is Doze-compatible, raw alarms are not
- GPS accuracy tiers: BALANCED uses WiFi/cell triangulation — 10x less power than GPS chip
- isCharging check: defer heavy sync/backup operations until the device is plugged in
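The radio tail-time arithmetic from the bullets as a small function — each batch of requests wakes the radio once, and the radio stays on for roughly 20 seconds afterwards (the tail figure from the text; real tail times vary by network):

```kotlin
import kotlin.math.ceil

// Total radio-on seconds: one wakeup per batch, ~tailSec of tail each.
fun radioOnSeconds(requests: Int, requestsPerBatch: Int, tailSec: Int = 20): Int {
    val wakeups = ceil(requests.toDouble() / requestsPerBatch).toInt()
    return wakeups * tailSec
}
```

100 unbatched analytics events cost 100 wakeups, about 33 minutes of radio-on time; batched into groups of 20 they cost 5 wakeups, under 2 minutes.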
"Battery optimization interview: every network call wakes the cellular radio and it stays awake for ~20 seconds (the 'tail time'). 100 single-event analytics requests = 100 radio wakeups = ~33 minutes of radio-on time. Batch them into 5 requests = 5 wakeups = ~1.7 minutes. That's the difference between your app draining the battery and being a good citizen."
Perfetto is the most powerful Android performance tool -- it captures kernel-level events, CPU scheduling, all thread activity, and custom trace events in a unified timeline. Use it when Profiler shows a slow frame but CPU method traces don't explain why -- the answer is usually thread preemption, lock contention, or a slow binder call to a system service.
```kotlin
// Capture via adb (production-like conditions, no Studio overhead):
// adb shell perfetto -o /data/misc/perfetto-traces/trace.perfetto-trace \
//     -t 10s sched freq idle am wm gfx view dalvik binder_driver
// adb pull /data/misc/perfetto-traces/trace.perfetto-trace
// Open in: https://ui.perfetto.dev

// Add custom trace events — visible in the Perfetto timeline
import androidx.tracing.trace

trace("ProductList.loadFromDb") {
    dao.getProducts()
}

// Async trace — for operations that span threads
Trace.beginAsyncSection("ImageLoad", cookie)
// ... work on another thread ...
Trace.endAsyncSection("ImageLoad", cookie)
```
- Perfetto reveals what CPU Profiler misses: thread preemption (UI thread de-scheduled by another thread), lock contention, binder IPC latency
- Frame analysis workflow: find a red frame bar → zoom into its time range → check main thread and RenderThread activity → find what pushed it over 16ms
- Custom trace events: trace{} or Trace.beginSection() make your own code visible in the timeline alongside framework events
- Async traces: Trace.beginAsyncSection/endAsyncSection for operations that cross thread boundaries -- matched by a cookie ID
- Compose runtime tracing: implementation('androidx.compose.runtime:runtime-tracing') makes composable function names appear in Perfetto automatically
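The frame-analysis workflow above boils down to checking frame times against the 16.67ms (60Hz) budget — the same comparison you make visually against the green 16ms line. A trivial helper for post-processing frame timings exported from a trace (the export format is an assumption; Perfetto itself does this in the UI):

```kotlin
// Count frames that blew the per-frame budget.
fun jankyFrameCount(frameTimesMs: List<Double>, budgetMs: Double = 16.67): Int =
    frameTimesMs.count { it > budgetMs }
```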
"The scenario where Perfetto is essential: 'scroll is janky but CPU profiler shows nothing slow.' Perfetto reveals: the main thread is actually idle during the jank, but RenderThread is waiting for a binder call to SurfaceFlinger. Or the main thread gets preempted by a high-priority background thread. These are impossible to diagnose without a system-level tracer."
Android Vitals in Play Console tracks real user performance data -- crashes, ANRs, startup times, and frame rates -- across your entire user base. Google defines 'bad behaviour' thresholds; apps that exceed them get a lower search ranking and may show a warning label before install.
```kotlin
// Android Vitals thresholds (exceeding = "bad behaviour"):
// Crash rate:       > 1.09% of daily active users
// ANR rate:         > 0.47% of daily active users
// Slow cold start:  > 25% of cold starts taking > 5 seconds
// Slow frames:      > 50% of frames taking > 16ms
// Frozen frames:    > 0.1% of frames taking > 700ms

// Report fully drawn — feeds TTFD into Play Console startup metrics
reportFullyDrawn()

// Firebase Performance — custom traces for your own flow timing
val trace = Firebase.performance.newTrace("checkout_flow")
trace.start()
// ... checkout completes ...
trace.stop()
```
- Crash rate bad threshold: > 1.09% of daily users -- exceeding this hurts Play Store ranking
- ANR rate bad threshold: > 0.47% -- any thread blocking the main thread for > 5s triggers this
- Slow cold start bad threshold: > 25% of cold starts taking > 5 seconds -- most impactful on low-end devices
- Frozen frames: > 700ms frames -- these are complete UI hangs, not just jank -- Play tolerates almost zero
- Segment by version: compare vitals before/after each release to pinpoint which version caused a regression
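The thresholds above can be encoded as a pre-release checker — a hypothetical helper for flagging a release's vitals snapshot against the published "bad behaviour" limits (the data class and field names are assumptions for illustration):

```kotlin
// A release's vitals snapshot, percentages as reported by Play Console.
data class VitalsSnapshot(
    val crashRatePct: Double,
    val anrRatePct: Double,
    val slowColdStartPct: Double,
)

// Which "bad behaviour" thresholds this snapshot exceeds.
fun badBehaviours(v: VitalsSnapshot): List<String> = buildList {
    if (v.crashRatePct > 1.09) add("crash rate")
    if (v.anrRatePct > 0.47) add("ANR rate")
    if (v.slowColdStartPct > 25.0) add("slow cold start")
}
```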
"Android Vitals data is segmented by device tier — your premium device metrics look fine, but 60% of your users are on low-end devices where startup takes 4 seconds and crashes happen 5% of the time. Filter by 'Low RAM devices' in Vitals. In markets like India, this segmentation is where the most impactful user experience issues hide."
A slow leak doesn't crash immediately — memory grows gradually until the system kills the app. Detecting it requires a heap dump comparison workflow — capture at start, use the app for 30 minutes, capture again, compare the two dumps.
// Slow leak symptoms:
// - Memory Profiler shows a steadily growing heap over time
// - App eventually gets OOM killed (no crash dialog — it just closes)
// - LeakCanary may not catch it (the leak source may not be an Activity/Fragment)

// Workflow: heap dump comparison
// 1. Launch app → capture heap dump A (baseline)
// 2. Use the app for 30 min — navigate all screens, scroll lists
// 3. Force GC (press the GC button in the profiler)
// 4. Capture heap dump B
// 5. Compare A vs B — look for classes with a growing instance count

// Android Studio heap dump analysis:
// Open dump → filter by "Allocations per class" → sort by count descending
// Growing Bitmap/byte[] counts  → image cache leak
// Growing custom class counts   → model object leak
// Growing Handler/Runnable      → message queue leak

// Common slow leak: unbounded cache
object UserCache {
    private val cache = HashMap<String, User>()           // ❌ grows forever
    fun put(id: String, user: User) = cache.put(id, user)
}

// Fix: LruCache or a WeakReference-based cache
private val lruCache = LruCache<String, User>(100)        // ✅ max 100 entries

// Handler leak (classic slow leak)
class MyActivity : Activity() {
    private val handler = object : Handler(Looper.getMainLooper()) {
        override fun handleMessage(msg: Message) { /* uses Activity fields */ }
    }
    override fun onDestroy() {
        handler.removeCallbacksAndMessages(null)  // ✅ remove pending messages
        super.onDestroy()
    }
}
- Heap dump comparison: baseline vs 30-minute-later dump — growing class counts reveal leaks
- Unbounded cache: HashMap with no eviction policy grows forever — use LruCache
- Handler leak: pending messages in MessageQueue hold references — always removeCallbacksAndMessages in onDestroy
- Memory Profiler timeline: steady upward slope = slow leak — spiky but returning = just GC pressure
- LeakCanary limitation: primarily catches Activity/Fragment leaks — custom slow leaks need manual investigation
"The heap dump comparison technique is the only reliable way to find slow leaks. The key: capture after forcing GC (not before) — this removes all short-lived objects so only truly retained objects remain. Compare instance counts between dumps. A class growing from 50 to 800 instances over 30 minutes is your culprit, even if each instance is small."
The AndroidX Tracing library (androidx.tracing) provides a Kotlin-idiomatic API to add custom trace events to your code. These appear as labelled blocks in Perfetto, Android Studio's System Trace, and Macrobenchmark outputs — making your own code sections visible alongside framework events.
// implementation("androidx.tracing:tracing:1.1.0")
// implementation("androidx.tracing:tracing-ktx:1.1.0")
import androidx.tracing.trace
import androidx.tracing.Trace

// Kotlin DSL — cleanest API
trace("UserRepository.fetchUsers") {
    dao.getUsers()  // appears in Perfetto as a "UserRepository.fetchUsers" block
}

// Manual begin/end (for Java or complex async cases)
Trace.beginSection("parseProductList")
try {
    parseJson(jsonString)
} finally {
    Trace.endSection()  // must always call endSection — even on exception
}

// Async tracing — for operations that span threads
val cookie = 42  // unique ID for this trace event
Trace.beginAsyncSection("ImageLoad", cookie)
// ... work happens on another thread ...
Trace.endAsyncSection("ImageLoad", cookie)  // matched on cookie, not thread

// Track values over time
Trace.setCounter("active_coroutines", activeCount)  // visible as a graph in Perfetto

// Compose function tracing — automatic with runtime-tracing
// implementation("androidx.compose.runtime:runtime-tracing:1.0.0")
// All @Composable functions appear by name in Perfetto traces
// No code changes needed — the compiler plugin adds trace calls

// Overhead: trace events have ~microsecond cost
// Keep trace names short — they're stored as strings
// Use sparingly in performance-critical paths
// Remove from production if tracing extremely hot paths
- trace{}: Kotlin lambda DSL — automatically handles beginSection/endSection even on exception
- beginAsyncSection: trace operations that cross thread boundaries — matched by cookie not thread
- setCounter: plot a value over time in Perfetto — track active coroutines, cache size, queue depth
- Compose runtime-tracing: zero-code composable function names in traces — compiler plugin adds them
- Microsecond cost: trace overhead is tiny — safe in most code, but skip in innermost tight loops
"Custom trace events are what separate professional performance debugging from guesswork. Without them, Perfetto shows framework calls but your code is a black box. With trace{} around your repository calls, database queries, and parsing operations, you see exactly which of YOUR code is responsible for a slow frame. Add them once, they pay off every time you profile."
A systematic performance code review catches the anti-patterns that accumulate into user-visible slowness — before they reach production. Each item maps to a real performance failure mode.
// 1. ❌ Network/DB/File IO on main thread
fun onCreate() { db.userDao().getUserBlocking() }  // ❌ ANR risk
// ✅ suspend functions + coroutines

// 2. ❌ View binding not cleared in Fragment.onDestroyView
private var binding: FragmentHomeBinding? = null  // ❌ never nulled out
// ✅ binding = null in onDestroyView()

// 3. ❌ Object allocation inside onDraw()
override fun onDraw(canvas: Canvas) { val p = Paint() }  // ❌ GC every frame
// ✅ Paint as a class field, allocated once

// 4. ❌ notifyDataSetChanged() instead of DiffUtil
adapter.notifyDataSetChanged()  // ❌ rebinds everything even if nothing changed
// ✅ ListAdapter with DiffUtil.ItemCallback

// 5. ❌ Static reference to Activity or Context
companion object { var ctx: Context? = null }  // ❌ Activity leak
// ✅ Use applicationContext for singletons

// 6. ❌ GlobalScope coroutine in Fragment/Activity
GlobalScope.launch { api.getData() }  // ❌ never cancelled
// ✅ viewLifecycleOwner.lifecycleScope or viewModelScope

// 7. ❌ Unnecessary recomposition (Compose)
LazyColumn { items(list) { ItemCard(it) } }  // ❌ no key — unstable
// ✅ items(list, key = { it.id }) { ItemCard(it) }

// 8. ❌ Bitmap decoded at full resolution
BitmapFactory.decodeFile(path)  // ❌ 48MB for a 12MP photo
// ✅ Use Coil/Glide with automatic downsampling

// 9. ❌ Registered listener never unregistered
locationManager.requestLocationUpdates(...)  // ❌ in onResume with no onPause removal
// ✅ Remove in the corresponding lifecycle callback

// 10. ❌ Heavy computation on main thread during startup
class MyApp : Application() {
    override fun onCreate() { analytics.initialize() }  // ❌ 400ms synchronous init
}
// ✅ Defer non-critical SDKs to a background thread or after the first frame
- IO on main thread: ANR risk — everything DB/network/file must be on a background thread
- Binding leaks: Fragment outlives its View — null binding in onDestroyView always
- onDraw allocations: GC pauses on every frame — allocate zero objects in onDraw
- notifyDataSetChanged: rebinds entire list — DiffUtil only rebinds changed items
- Static Context: Activity memory leak — use Application context in long-lived singletons
"In a performance code review I check stability (leaks, IO on main thread, unregistered listeners) before rendering performance (onDraw allocations, DiffUtil, recomposition). A memory leak causes an eventual crash — more user impact than a janky scroll. Then rendering, then startup. Triage by user impact."
Android has a tiered memory management system. As RAM fills up, the OS progressively terminates background processes before eventually sending low-memory callbacks to foreground apps. Understanding these tiers helps you write apps that survive memory pressure without crashing.
// Android memory pressure tiers (low → critical):
// 1. Normal   — plenty of RAM, all processes running
// 2. Moderate — OS starts killing cached processes (user won't notice)
// 3. Low      — OS killing background processes aggressively
// 4. Critical — only the foreground app + critical services alive
// 5. OOM      — the foreground app itself is killed (rare)

// ComponentCallbacks2 — respond to memory pressure
class MyApp : Application() {
    override fun onTrimMemory(level: Int) {
        super.onTrimMemory(level)
        when (level) {
            ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN -> {
                // App went to background — release UI caches
                imageCache.evictAll()
            }
            ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW,
            ComponentCallbacks2.TRIM_MEMORY_RUNNING_CRITICAL -> {
                // Foreground but system is low — release non-essential memory
                thumbnailCache.evictAll()
                prefetchedData.clear()
            }
            ComponentCallbacks2.TRIM_MEMORY_COMPLETE -> {
                // Background, about to be killed — release everything
                releaseAllCaches()
            }
        }
    }
}

// ActivityManager — query current memory state
val info = ActivityManager.MemoryInfo()
context.getSystemService(ActivityManager::class.java).getMemoryInfo(info)
// info.availMem  — currently available RAM in bytes
// info.totalMem  — total RAM on device
// info.lowMemory — true if the system is in a low-memory state
// info.threshold — the availMem level that triggers low-memory callbacks
- onTrimMemory: called when system needs memory — your chance to release caches before being killed
- TRIM_MEMORY_UI_HIDDEN: app backgrounded — safe to release all UI-related caches
- TRIM_MEMORY_COMPLETE: about to be killed — release everything, you may not get another chance
- MemoryInfo.lowMemory: check if system is currently in low memory state before doing heavy work
- Cached processes: Android kills these first — apps in background are always at risk when RAM is low
"onTrimMemory(TRIM_MEMORY_UI_HIDDEN) is the signal your app just went to the background. This is the best time to release image caches, decoded bitmaps, and prefetched data — the user can't see them anyway. Responding correctly here means your app is less likely to be killed when in the background, preserving the user's place when they return."
A photo gallery is the hardest image loading challenge on Android — thousands of high-resolution images, fast scroll, limited RAM. The solution combines Coil with a LazyVerticalGrid, thumbnail-first loading, and memory/disk cache tuning.
// Coil with custom cache configuration
// val imageLoader = ImageLoader.Builder(context)
//     .memoryCache {
//         MemoryCache.Builder(context)
//             .maxSizePercent(0.25)  // use 25% of available RAM for the image cache
//             .build()
//     }
//     .diskCache {
//         DiskCache.Builder()
//             .directory(context.cacheDir.resolve("image_cache"))
//             .maxSizeBytes(512L * 1024 * 1024)  // 512MB disk cache
//             .build()
//     }
//     .build()

// Gallery grid — LazyVerticalGrid
val photos by viewModel.photos.collectAsStateWithLifecycle()
LazyVerticalGrid(columns = GridCells.Fixed(3)) {
    items(photos, key = { it.id }) { photo ->
        AsyncImage(
            model = ImageRequest.Builder(context)
                .data(photo.thumbnailUrl)      // ✅ thumbnail first, not full-res
                .size(300, 300)                // ✅ decode at display size
                .precision(Precision.INEXACT)  // ✅ allow cached size reuse
                .memoryCachePolicy(CachePolicy.ENABLED)
                .diskCachePolicy(CachePolicy.ENABLED)
                .crossfade(200)
                .placeholder(R.drawable.placeholder_grey)
                .build(),
            contentDescription = null,
            modifier = Modifier.aspectRatio(1f),
            contentScale = ContentScale.Crop
        )
    }
}

// Prefetch for smoother scrolling
val gridState = rememberLazyGridState()
LaunchedEffect(gridState) {
    snapshotFlow { gridState.firstVisibleItemIndex }
        .collect { firstVisible ->
            // Prefetch thumbnails 12–24 items ahead of the visible area
            val from = minOf(firstVisible + 12, photos.size)
            val to = minOf(firstVisible + 24, photos.size)
            photos.subList(from, to).forEach { photo ->
                imageLoader.enqueue(
                    ImageRequest.Builder(context).data(photo.thumbnailUrl).build()
                )
            }
        }
}
- Thumbnails not full-res: load small previews for the grid — full-res only when user opens the photo
- size(300,300): Coil decodes at display size — never loads a 12MP image for a 100dp thumbnail
- Precision.INEXACT: allows cache hits from slightly different sizes — better cache utilisation
- maxSizePercent(0.25): 25% of RAM for image cache — adjust based on available device memory
- Prefetch ahead: enqueue thumbnail loads before they scroll into view — eliminates loading placeholders
"The most impactful gallery optimisation: never load full-resolution images in the grid. A 12MP photo is 48MB decoded. 9 visible at once = 432MB. Thumbnails at 300×300 pixels = 0.36MB each. 9 thumbnails = 3.24MB. That's a 133x memory reduction. Only load full-res when the user taps to open an individual photo."
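The memory arithmetic in that quote can be verified directly. This sketch assumes ARGB_8888 (4 bytes per decoded pixel) and a 4000×3000 sensor for "12MP":

```kotlin
fun main() {
    val bytesPerPixel = 4L                     // ARGB_8888
    val fullRes = 4000 * 3000 * bytesPerPixel  // 48,000,000 bytes ≈ 48MB decoded
    val nineFullRes = fullRes * 9              // ≈ 432MB for 9 visible grid cells
    val thumb = 300 * 300 * bytesPerPixel      // 360,000 bytes ≈ 0.36MB
    val nineThumbs = thumb * 9                 // ≈ 3.24MB
    println(nineFullRes / nineThumbs)          // prints 133 — the ~133x reduction
}
```

The same arithmetic is why `size(300, 300)` in the grid request matters: the savings come from decode size, not from the network payload.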
RenderThread was introduced in Android 5.0 to offload GPU command execution from the main thread. The main thread records drawing commands into a RenderNode display list; RenderThread replays them on the GPU. Because the two run in parallel, GPU work no longer blocks input handling, and animations that live entirely on RenderThread (such as ripples and circular reveals) stay smooth even if the main thread briefly spikes.
// ObjectAnimator -- update callbacks run on the main thread,
// but the resulting draw commands are executed by RenderThread
ObjectAnimator.ofFloat(view, "translationX", 0f, 200f).start()

// Compose graphicsLayer -- transforms applied at draw time
Box(modifier = Modifier.graphicsLayer {
    alpha = animatedAlpha       // draw phase only -- no recomposition
    scaleX = animatedScale      // no recomposition
    translationX = offset       // no recomposition
})

// Hardware layer -- promote a View to its own GPU texture
view.setLayerType(View.LAYER_TYPE_HARDWARE, null)
// Animating a hardware layer's properties re-composites the cached texture
// instead of redrawing the View
// Use sparingly: each layer consumes GPU memory
- RenderThread: executes GPU commands asynchronously -- main thread records, RenderThread draws, they run in parallel
- Property animations: ObjectAnimator values are computed on the main thread, but the resulting GPU work runs on RenderThread -- expensive drawing no longer stalls input handling
- graphicsLayer{}: the Compose way to leverage RenderThread -- alpha, scale, rotation, translation execute without recomposition
- Hardware layers: LAYER_TYPE_HARDWARE promotes a View to its own GPU texture -- ideal for complex Views being animated
- Perfetto: look for the RenderThread row -- if it's consistently backed up, you're GPU-bound; if it's waiting for main thread, you're CPU-bound
"graphicsLayer{} in Compose is how you animate without triggering recomposition. alpha, scale, rotation, and translation set inside graphicsLayer are applied at draw time — the Composable function is never called again during the animation. This is why Compose animations are smooth even during complex UI updates: the rendering and the business logic are decoupled."
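A minimal sketch of that deferred-read pattern (the composable name and values are illustrative): keep the animation's `State` object and read it inside the `graphicsLayer` lambda, so the value is consumed at draw time rather than during composition.

```kotlin
import androidx.compose.animation.core.animateFloatAsState
import androidx.compose.foundation.background
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.size
import androidx.compose.foundation.shape.CircleShape
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.graphics.graphicsLayer
import androidx.compose.ui.unit.dp

@Composable
fun FadingBadge(visible: Boolean) {
    // Keep the State<Float> itself instead of unwrapping it with `by`
    val alphaState = animateFloatAsState(targetValue = if (visible) 1f else 0f)

    Box(
        modifier = Modifier
            .size(48.dp)
            .graphicsLayer {
                // Reading the State *inside* the graphicsLayer lambda defers the
                // read to the draw phase — FadingBadge is not recomposed per frame
                alpha = alphaState.value
            }
            .background(Color.Red, CircleShape)
    )
}
```

Unwrapping the state with `by animateFloatAsState(...)` and passing the raw Float into the modifier would instead recompose the function on every animation frame.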
High-performance custom Views require zero object allocation in onDraw, hardware acceleration, efficient invalidation, and using ValueAnimator for smooth frame-synced updates.
class WaveformView(context: Context, attrs: AttributeSet?) : View(context, attrs) {

    // ✅ All objects allocated ONCE — never inside onDraw
    private val wavePaint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        color = ContextCompat.getColor(context, R.color.accent)
        style = Paint.Style.STROKE
        strokeWidth = 4f
    }
    private val path = Path()     // reused every frame
    private val bounds = RectF()  // reused every frame
    private var animPhase = 0f    // current animation state

    // ✅ ValueAnimator — synced to display vsync, runs on main thread
    private val animator = ValueAnimator.ofFloat(0f, 1f).apply {
        duration = 2000
        repeatCount = ValueAnimator.INFINITE
        interpolator = LinearInterpolator()
        addUpdateListener { anim ->
            animPhase = anim.animatedValue as Float
            invalidate()  // ← triggers onDraw, synced to vsync
        }
    }

    override fun onAttachedToWindow() {
        super.onAttachedToWindow()
        animator.start()
    }

    override fun onDetachedFromWindow() {
        animator.cancel()
        super.onDetachedFromWindow()
    }

    override fun onDraw(canvas: Canvas) {
        // ✅ Zero allocations here — reuse path, bounds, paint
        path.reset()
        val amplitude = height / 4f
        val frequency = 2f * Math.PI.toFloat() / width
        path.moveTo(0f, height / 2f)
        for (x in 0..width) {
            val y = height / 2f +
                amplitude * Math.sin(x * frequency + animPhase * 2 * Math.PI).toFloat()
            path.lineTo(x.toFloat(), y)
        }
        canvas.drawPath(path, wavePaint)
    }
}
- Pre-allocate everything: Paint, Path, RectF as class fields — zero allocations in onDraw
- ValueAnimator: vsync-synced callbacks — smoother than posting Runnables manually
- invalidate() in animator: triggers onDraw exactly once per frame — no over-drawing
- onAttached/onDetached: start animator when view enters window, cancel when it leaves
- path.reset(): clears path contents without allocating a new Path object
"The onAttachedToWindow/onDetachedFromWindow pair is the correct lifecycle for View animations — not onResume/onPause. A View can be attached to a window while the Activity is paused (e.g., in a dialog). Stopping the animator in onDetachedFromWindow guarantees it stops when the View actually leaves the screen, regardless of the Activity lifecycle."
These two View methods trigger different parts of the rendering pipeline. Calling the wrong one is either too expensive (requestLayout when invalidate would do) or incorrect (invalidate when the size changed).
// invalidate() — triggers onDraw() only
// Use when: visual appearance changed, size/position unchanged
// Cost: just onDraw() — no measure, no layout
class ColorView(...) : View(...) {
    private var fillColor = Color.RED
    fun setColor(color: Int) {
        fillColor = color
        invalidate()  // ✅ color changed, size same → just redraw
    }
}

// requestLayout() — triggers measure → layout → draw
// Use when: size or position needs to change
// Cost: full measure + layout pass for this view and potentially its parents
class ExpandableView(...) : View(...) {
    private var expanded = false
    fun toggle() {
        expanded = !expanded
        requestLayout()  // ✅ height changes → must remeasure
    }
    override fun onMeasure(wms: Int, hms: Int) {
        val density = resources.displayMetrics.density
        val h = ((if (expanded) 200 else 50) * density).toInt()  // dp → px
        setMeasuredDimension(MeasureSpec.getSize(wms), h)
    }
}

// Both together — when size AND appearance change
fun updateState(newText: String) {
    text = newText
    requestLayout()  // size may change with new text
    invalidate()     // appearance also changed
}
// Note: requestLayout() does NOT automatically call onDraw()
// You need both if you want an immediate visual update + size update

// postInvalidate() — call invalidate from a background thread
Thread {
    updateData()
    postInvalidate()  // ✅ safe from a background thread
    // invalidate() directly from a background thread = crash
}.start()
- invalidate(): schedules onDraw() only — no measure/layout — use for appearance-only changes
- requestLayout(): schedules full measure + layout + draw — use when size or position changes
- Both together: requestLayout() does not trigger onDraw() — call both when size AND appearance change
- postInvalidate(): thread-safe version of invalidate() — use when calling from background threads
- Performance: prefer invalidate() over requestLayout() — layout passes are expensive, especially with deep hierarchies
"requestLayout() is 10-100x more expensive than invalidate() because it triggers measure and layout for the view and propagates up the hierarchy. A common mistake: calling requestLayout() when only a color changed — that triggers a full layout pass for nothing. Always ask: 'Did the SIZE change?' If yes → requestLayout(). If only the visual changed → invalidate()."
FrameMetrics gives you per-frame timing data programmatically — no developer tools needed. You can collect it in production to detect jank on real user devices and report to your analytics system.
// FrameMetrics — available API 24+
@RequiresApi(24)
class FrameMetricsMonitor(private val activity: Activity) {

    private val handler = Handler(HandlerThread("FrameMetrics").also { it.start() }.looper)

    private val listener = Window.OnFrameMetricsAvailableListener { _, metrics, _ ->
        val totalMs = metrics.getMetric(FrameMetrics.TOTAL_DURATION) / 1_000_000L
        val inputMs = metrics.getMetric(FrameMetrics.INPUT_HANDLING_DURATION) / 1_000_000L
        val drawMs = metrics.getMetric(FrameMetrics.DRAW_DURATION) / 1_000_000L
        val syncMs = metrics.getMetric(FrameMetrics.SYNC_DURATION) / 1_000_000L
        val gpuMs = metrics.getMetric(FrameMetrics.GPU_DURATION) / 1_000_000L  // GPU_DURATION requires API 31+
        when {
            totalMs > 700 -> reportFrozenFrame(totalMs)  // > 700ms = frozen
            totalMs > 16 -> reportSlowFrame(totalMs)     // > 16ms = janky
        }
        // Report breakdown to analytics for P95/P99 analysis
        analytics.logFrameTiming(
            screen = activity.localClassName,
            totalMs = totalMs,
            drawMs = drawMs,
            gpuMs = gpuMs
        )
    }

    fun start() = activity.window.addOnFrameMetricsAvailableListener(listener, handler)
    fun stop() = activity.window.removeOnFrameMetricsAvailableListener(listener)
}

// FrameMetrics breakdown:
// INPUT_HANDLING_DURATION — touch event processing time
// ANIMATION_DURATION      — ValueAnimator callbacks
// LAYOUT_MEASURE_DURATION — measure + layout pass
// DRAW_DURATION           — onDraw() recording time
// SYNC_DURATION           — main→RenderThread sync
// GPU_DURATION            — GPU rasterisation time (API 31+)
// TOTAL_DURATION          — total frame duration
- FrameMetrics: per-frame timing in production — no developer tools, real user data
- Per-phase breakdown: INPUT, ANIMATION, LAYOUT_MEASURE, DRAW, SYNC, GPU — pinpoints which phase causes jank
- Background handler: FrameMetrics callback on a HandlerThread — never process on main thread
- Frozen frame threshold: > 700ms is a frozen frame — much worse than a slow frame
- P95/P99 analytics: track frame time percentiles — P50 hides outlier jank that P99 reveals
"FrameMetrics is how you catch production jank that never appears during development. Your test device is a flagship — 60fps always. Your users have mid-range phones where GPU_DURATION exceeds 16ms on complex screens. With FrameMetrics reporting to your analytics, you see 'P95 frame time on screen X is 42ms on Android 11 devices' — a specific, actionable performance bug from real users."
Compose LazyColumn performance is driven by stable keys, avoiding unnecessary recomposition inside items, and keeping item composables lean. Getting these right means smooth 60fps scrolling on mid-range devices.
// ✅ Pattern 1: Stable keys — prevent full list recomposition
LazyColumn {
    items(
        items = products,
        key = { it.id },           // ✅ stable String key — Compose tracks by ID
        contentType = { it.type }  // ✅ ViewHolder-reuse equivalent
    ) { product ->
        ProductCard(product = product)
    }
}

// ✅ Pattern 2: Stable item composable — use a @Stable data class
@Stable
data class ProductUiModel(
    val id: String,
    val name: String,
    val priceFormatted: String  // ✅ pre-formatted — no formatting in the composable
)

// ✅ Pattern 3: Avoid lambda capture causing recomposition
val onFavClick: (String) -> Unit = remember {
    { id -> viewModel.toggleFav(id) }
}
LazyColumn {
    items(products, key = { it.id }) { product ->
        ProductCard(
            product = product,
            onFavClick = onFavClick  // ✅ same lambda instance — no recomposition
        )
    }
}

// ✅ Pattern 4: Pagination with Paging 3
val products = viewModel.products.collectAsLazyPagingItems()
LazyColumn {
    items(products, key = { it.id }) { product ->
        product?.let { ProductCard(it) } ?: PlaceholderCard()
    }
    item {
        when (products.loadState.append) {
            is LoadState.Loading -> LoadingSpinner()
            is LoadState.Error -> ErrorRow { products.retry() }
            else -> {}
        }
    }
}

// ✅ Pattern 5: rememberUpdatedState for callbacks
@Composable
fun ProductCard(product: ProductUiModel, onFavClick: (String) -> Unit) {
    val currentOnFavClick by rememberUpdatedState(onFavClick)
    val onClick = remember { { currentOnFavClick(product.id) } }
    // onClick is a stable reference → ProductCard won't recompose when the parent's lambda changes
}
- key parameter: Compose tracks items by key — inserts/removes animate correctly, no full recomposition
- contentType: equivalent to RecyclerView viewType — allows composable reuse across items
- @Stable data class: Compose skips recomposition when data class equals() returns true
- remembered lambdas: unstable lambdas cause item recomposition on every parent update
- rememberUpdatedState: capture latest callback value without breaking stability
"The most impactful LazyColumn optimisation is often the simplest: add key = { it.id }. Without it, Compose treats the list as positional — insert an item at position 0 and every item recomposes. With a stable key, only the new item composes and existing items are matched by ID. This is the Compose equivalent of DiffUtil, and it's one line."
High idle CPU means something is running that shouldn't be — a polling loop, a stuck animation, a runaway coroutine, or a repeating alarm. The CPU Profiler in sampled mode shows what's running even when the app appears idle.
// Step 1: Profile — CPU Profiler (Sampled) while the app is "idle"
// Leave the app open on screen but don't interact with it
// If CPU > 5% while idle → something is running that shouldn't be

// Common causes of idle CPU:

// 1. Animation not stopped
val animator = ObjectAnimator.ofFloat(view, "rotation", 0f, 360f).apply {
    repeatCount = ValueAnimator.INFINITE
}
animator.start()
// ❌ If the view goes offscreen or the fragment detaches — the animator still runs!
// Fix: cancel in onPause() or onDestroyView()
override fun onPause() {
    super.onPause()
    animator.cancel()
}

// 2. Polling with while(true) or a Handler.postDelayed loop
val handler = Handler(Looper.getMainLooper())
val pollRunnable = object : Runnable {
    override fun run() {
        checkForUpdates()
        handler.postDelayed(this, 1000)  // ❌ runs forever, even in background
    }
}
// Fix: stop the loop in onStop(), restart in onStart()
override fun onStop() { handler.removeCallbacks(pollRunnable) }

// 3. Flow collected without cancellation
class MyActivity : Activity() {
    override fun onCreate(...) {
        GlobalScope.launch {  // ❌ never cancelled
            locationFlow.collect { updateMap(it) }
        }
    }
}
// Fix: use lifecycleScope.launch with repeatOnLifecycle
lifecycleScope.launch {
    repeatOnLifecycle(Lifecycle.State.STARTED) {
        locationFlow.collect { updateMap(it) }  // ✅ auto-cancels when STOPPED
    }
}

// 4. WakeLock held too long
val wl = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "app:sync")
wl.acquire(10 * 60 * 1000L)  // ✅ always set a timeout — prevents infinite hold
- Sampled CPU Profiler while idle: shows what's running — method names reveal the culprit
- Infinite animations: ObjectAnimator with INFINITE must be cancelled on lifecycle events
- Handler polling loops: postDelayed loops run forever — remove callbacks in onStop
- GlobalScope coroutines: never cancelled — use lifecycleScope with repeatOnLifecycle
- WakeLock timeout: always set acquire(timeout) — prevents accidental infinite CPU activity
"repeatOnLifecycle is the modern fix for the runaway coroutine problem. It automatically cancels the inner coroutine when the lifecycle drops below the specified state and restarts it when it rises above. A Flow collected with repeatOnLifecycle(STARTED) stops running when the app backgrounds — exactly what you want for location, sensors, or network polling."
Android can kill your app's process at any time when it's in the background. Understanding process death — and designing for it with SavedStateHandle — turns what could be a broken UX into a seamless restoration experience.
// Process death: Android kills your process (not the app) when RAM is needed
// The user doesn't see a crash — they just see a "cold start" on the next open
// But: the OS restores the back stack, navigation state, and saved instance state

// Simulate process death for testing:
// 1. Launch the app, navigate to a deep screen
// 2. Press Home
// 3. adb shell am kill com.example.app  ← kills the process
// 4. Switch back to the app — it should restore seamlessly

// SavedStateHandle — survives process death
@HiltViewModel
class SearchViewModel @Inject constructor(
    private val savedState: SavedStateHandle  // persists across process death
) : ViewModel() {
    var searchQuery by savedState.saveable { mutableStateOf("") }
        private set
    fun updateQuery(q: String) { searchQuery = q }
}
// searchQuery is restored after process death — the user still sees their query

// What survives process death (automatic):
// ✅ Navigation back stack (Compose Navigation)
// ✅ SavedStateHandle values
// ✅ onSaveInstanceState() Bundle
// ✅ Room database (persisted to disk)
// ✅ DataStore (persisted to disk)

// What does NOT survive process death:
// ❌ ViewModel state (non-saved)
// ❌ In-memory variables
// ❌ Pending coroutines / queued work

// Design principle: treat process death as a normal event
// Any UI state the user cares about → SavedStateHandle
// Any data the user created → Room/DataStore immediately
- Process death: normal Android behaviour — not a crash, OS reclaims RAM while app is backgrounded
- SavedStateHandle: ViewModel property store that survives process death — use for UI state
- savedState.saveable: Compose State delegate on SavedStateHandle — state survives rotation AND death
- Test with adb kill: simulate process death manually — verify seamless restoration
- Room/DataStore: persist immediately on user action — don't wait for onSaveInstanceState
"Process death is the most under-tested scenario in Android development. Most developers never test it. The fix: after every navigation, run 'adb shell am kill your.package' then switch back. If the app restores perfectly — you're done. If it shows a blank screen or crashes — you have SavedStateHandle work to do. Make this part of your QA checklist."
Android increasingly restricts background work to protect battery life. WorkManager is the only guaranteed way to run background work — it handles Doze mode, App Standby, Background Execution Limits, and process death automatically.
// Background execution restrictions timeline:
// Android 6:  Doze mode — blocks network/alarms when screen off + stationary
// Android 7:  Background network restrictions
// Android 8:  Background Execution Limits — no background services without a foreground notification
// Android 12: Foreground service launch restrictions tightened
// Android 14: Foreground service type declaration required
// Solution: WorkManager — designed for all these restrictions

// One-time sync work — runs when constraints are met, survives process death
val syncWork = OneTimeWorkRequestBuilder<SyncWorker>()
    .setConstraints(
        Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    )
    .setBackoffCriteria(BackoffPolicy.EXPONENTIAL, 15, TimeUnit.MINUTES)
    .setExpedited(OutOfQuotaPolicy.RUN_AS_NON_EXPEDITED_WORK_REQUEST)
    .build()
WorkManager.getInstance(context).enqueueUniqueWork(
    "sync", ExistingWorkPolicy.KEEP, syncWork
)

// Periodic work — runs roughly every 6 hours, battery-efficient
val periodicSync = PeriodicWorkRequestBuilder<SyncWorker>(6, TimeUnit.HOURS)
    .setConstraints(
        Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .setRequiresBatteryNotLow(true)
            .build()
    )
    .build()

// The Worker itself — Doze-safe, runs when constraints are met
@HiltWorker
class SyncWorker @AssistedInject constructor(
    @Assisted appContext: Context,
    @Assisted params: WorkerParameters,
    private val repo: SyncRepository
) : CoroutineWorker(appContext, params) {
    override suspend fun doWork(): Result {
        return try {
            repo.sync()
            Result.success()
        } catch (e: Exception) {
            if (runAttemptCount < 3) Result.retry() else Result.failure()
        }
    }
}
- WorkManager: the only API that works across all Android background restrictions — Doze, App Standby, limits
- setConstraints: declare what conditions must be met — OS schedules at the best time
- setBackoffCriteria: exponential retry with cap — resilient to transient failures
- enqueueUniqueWork: prevent duplicate work — KEEP or REPLACE depending on semantics
- setRequiresBatteryNotLow: don't drain a user's already-low battery — responsible scheduling
"The Android background work API history is a graveyard of deprecated approaches: AsyncTask, IntentService, raw AlarmManager, JobScheduler, Firebase JobDispatcher. WorkManager wraps the best available mechanism per Android version and handles all the restrictions automatically. In 2025, WorkManager is the correct answer for any background work that needs to be reliable."
Hardware acceleration routes drawing through the GPU via OpenGL ES / Vulkan instead of the CPU. It's enabled by default since Android 4.0 and dramatically improves rendering performance — but some Canvas operations are unsupported or produce different visual results.
```kotlin
// Hardware acceleration: enabled by default at application level
// AndroidManifest.xml: android:hardwareAccelerated="true" (default)

// Per-View override for problematic views:
view.setLayerType(View.LAYER_TYPE_SOFTWARE, null)  // force software for this view
view.setLayerType(View.LAYER_TYPE_HARDWARE, null)  // force hardware layer
view.setLayerType(View.LAYER_TYPE_NONE, null)      // default (inherit from window)

// ✅ GPU-accelerated (fast):
canvas.drawBitmap(...)     // texture upload → GPU draw
canvas.drawRect(...)       // direct GPU primitive
canvas.drawText(...)       // GPU text rendering
canvas.drawRoundRect(...)  // GPU

// ❌ Not hardware-accelerated (falls back to software):
canvas.drawBitmapMesh(...)  // complex mesh deformation
canvas.drawPicture(...)     // Picture objects
// Paint.setXfermode(PorterDuffXfermode) — some modes not GPU accelerated
// canvas.clipPath() with non-rectangular clips — software on older APIs

// Canvas.clipPath() issue — visual difference
// Software: anti-aliased edges
// Hardware: jagged edges (fixed in API 26 with outline clipping)
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
    view.outlineProvider = ViewOutlineProvider.BOUNDS
    view.clipToOutline = true  // ✅ GPU-accelerated rounded corners
}

// Detect if hardware accelerated:
canvas.isHardwareAccelerated  // true in onDraw when using GPU path
if (!canvas.isHardwareAccelerated) {
    // fall back to software-compatible drawing
}
```
- Enabled by default: API 14+ all apps hardware accelerated — check manifest if turned off
- LAYER_TYPE_SOFTWARE: force software rendering for specific views with unsupported operations
- LAYER_TYPE_HARDWARE: explicit GPU texture — use for complex animated views
- clipPath on older APIs: anti-aliasing differs between software and hardware — use clipToOutline instead
- canvas.isHardwareAccelerated: check in onDraw to adapt drawing code per render path
"The most common hardware acceleration bug: a custom View uses canvas.clipPath() for rounded corners, looks perfect in Android Studio preview (software), but has jagged edges on device (hardware). The fix: use view.clipToOutline = true with a ViewOutlineProvider — this uses GPU-accelerated outline clipping available from API 21, which is smooth on hardware."
Expensive recomposition is the most common Compose performance issue. The fix involves identifying which composables recompose unnecessarily (Layout Inspector) and applying targeted optimisations: state hoisting, derivedStateOf, and composable function scoping.
```kotlin
// Problem: ProductListScreen recomposes whenever ANY state changes
// Even state that only affects the FAB triggers full-screen recomposition

// ❌ BAD — all state in one composable
@Composable
fun ProductListScreen(vm: ProductViewModel) {
    val state by vm.state.collectAsStateWithLifecycle()
    // state.products + state.isLoading + state.scrollPosition + state.fabVisible
    // Any change → entire screen recomposes!
    ProductList(state.products)  // recomposes even on scrollPosition change
    Fab(state.fabVisible)        // recomposes even on products change
}

// ✅ GOOD — separate state reads into separate composables
@Composable
fun ProductListScreen(vm: ProductViewModel) {
    ProductListContent(vm)  // reads products + loading
    FabSection(vm)          // reads only fabVisible
}

@Composable
fun ProductListContent(vm: ProductViewModel) {
    val products by vm.products.collectAsStateWithLifecycle()
    val isLoading by vm.isLoading.collectAsStateWithLifecycle()
    // Only recomposes when products or isLoading changes
}

@Composable
fun FabSection(vm: ProductViewModel) {
    val fabVisible by vm.fabVisible.collectAsStateWithLifecycle()
    // Only recomposes when fabVisible changes
}

// derivedStateOf — convert frequent scroll state to a rare boolean
val listState = rememberLazyListState()
val fabVisible by remember {
    derivedStateOf { listState.firstVisibleItemIndex > 3 }
}
// FAB only recomposes when the boolean changes (2x per interaction)
// NOT on every scroll pixel (60x per second)

// CompositionLocal — avoid prop drilling that causes recomposition
val LocalTheme = compositionLocalOf { AppTheme() }
// Reads from LocalTheme don't cause recomposition unless LocalTheme changes
```
- State isolation: split big state objects into separate flows — each composable reads only what it needs
- Composable scoping: smaller composables recompose independently — isolate expensive sections
- derivedStateOf: convert high-frequency state (scroll position) to low-frequency derived value
- Layout Inspector: use recomposition count to identify which composables recompose most
- CompositionLocal: share theme/config without prop drilling — doesn't trigger recomposition unless changed
"The state scoping principle: a composable should only read state it actually renders. If ProductListContent reads a fabVisible boolean it never uses, any fab state change triggers ProductListContent recomposition — wasted work. Split state into separate StateFlows in the ViewModel, each composable collects only its own slice. One state change = one composable recomposes, not the whole screen."
The Network Profiler in Android Studio visualises all HTTP requests your app makes -- their timing, payload size, and response codes. It's invaluable for finding duplicate requests triggered by rotation, oversized API payloads, and serial requests that could run in parallel.
```kotlin
// Open: Profiler → + → Network → see the live request timeline

// OkHttp logging interceptor — detailed request/response in Logcat (debug only)
val logging = HttpLoggingInterceptor().apply {
    level = if (BuildConfig.DEBUG) HttpLoggingInterceptor.Level.BODY
            else HttpLoggingInterceptor.Level.NONE
}
val client = OkHttpClient.Builder().addInterceptor(logging).build()

// Chucker — in-app network inspector, shareable with QA (debug builds only)
// debugImplementation("com.github.chuckerteam.chucker:library:4.0.0")
val chucker = ChuckerInterceptor.Builder(context).build()
val clientWithChucker = OkHttpClient.Builder().addInterceptor(chucker).build()
```
- Waterfall pattern: serial requests that could run in parallel show as sequential bars -- parallelise with async { } + awaitAll()
- Duplicate requests: same URL called multiple times -- usually caused by ViewModel recreating on rotation, fix with viewModelScope.launch in init{}
- Payload inspection: click a request → Body tab → see the full JSON -- look for fields you're fetching but not displaying
- Chucker: in-app network log accessible via notification -- share with QA team without requiring Android Studio
- Size matters: a 200KB response for a list screen is normal; a 2MB response for a settings screen is a backend API design issue
"The most common finding in Network Profiler: duplicate requests. A screen loads, user rotates, ViewModel recreates, same 3 API calls fire again. Fix: move the API call to viewModelScope.launch in init{} — only runs once per ViewModel lifetime. Or use Paging 3's cachedIn(viewModelScope) — pages cached in ViewModel survive rotation."
Work during startup that the user can't see is waste — it delays the first frame without any visible benefit. Startup tracing with Perfetto reveals exactly what's running before the first frame and which of it is actually necessary.
```kotlin
// Capture startup trace:
// adb shell am start -W -P /data/misc/perfetto-traces/startup.trace \
//     com.example.app/.MainActivity
// OR: Android Studio → Profiler → CPU → Start recording → launch app
// The startup trace shows the timeline from Application.onCreate() through first frame

// Common unnecessary startup work:

// 1. SharedPreferences read on the main thread (blocks)
//    Fix: migrate to DataStore (async) or read in background

// 2. SDK initialization that isn't needed for the first frame
class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // ❌ Analytics SDK: user can't see analytics before first frame
        // ❌ Push notification setup: no notification received during startup
        // ❌ Crash reporting: crashes during startup handled differently
        // ✅ Only what's needed for the first frame:
        Timber.plant(Timber.DebugTree())          // needed for debug logging
        DaggerAppComponent.create().inject(this)  // DI graph (may be fast)
    }
}

// App Startup library — ordered, traceable initialization
class AnalyticsInitializer : Initializer<Unit> {
    override fun create(context: Context) {
        Trace.beginSection("AnalyticsInitializer")
        analytics.initialize(context)
        Trace.endSection()
    }
    override fun dependencies() = listOf(FirebaseInitializer::class.java)
}

// Lazy initialization — only when first used
val analyticsClient by lazy { Analytics.create(context) }
// Initialized on first access — not during startup

// postDelayed initialization — after the first frame is rendered
override fun onResume() {
    super.onResume()
    Handler(Looper.getMainLooper()).postDelayed({
        initNonCriticalSdks()  // after the first frame is drawn
    }, 500)
}
```
- Startup trace: Perfetto shows every method between process start and first frame
- First-frame critical path: only init what's needed to draw the first visible frame
- App Startup tracing: Trace.beginSection in Initializer.create() — each SDK's startup cost visible
- Lazy: initialize expensive singletons on first use, not at startup
- postDelayed(500ms): defer non-critical work until after first frame renders and user perceives app is ready
"Ask 'does the user need this for the first frame?' for every line in Application.onCreate(). Analytics: no — user can't see it. Crash reporting: maybe — depends on SDK. Navigation graph: yes — needed to display the first screen. DI graph: yes — needed to inject the ViewModel. Everything else: defer. This single question can cut 500ms from cold startup."
App size has three distinct meanings with different optimisation strategies. Download size is what users see on Play Store before installing -- reduce with AAB and R8. Install size is the storage footprint after installation -- typically 2-3x the download size due to ART compilation. Memory footprint is RAM used while running -- affects how often the OS kills your process.
```kotlin
// Analyze APK — Android Studio → Build → Analyze APK
// Shows: res/, classes.dex, lib/, assets/ — identify the largest contributors

// Check memory footprint (heap limit per device)
val am = context.getSystemService(ActivityManager::class.java)
val heapLimitMb = am.memoryClass       // device-recommended heap limit in MB
val largeHeapMb = am.largeMemoryClass  // if android:largeHeap="true" in manifest

// Runtime memory breakdown (from adb)
// adb shell dumpsys meminfo com.example.app
// Shows: Java Heap, Native Heap, Code, Graphics, Stack, System
```
- Download size: what users see before installing -- optimise with AAB (Play generates per-device APKs) + R8 + WebP images
- Install size: typically 2-3x the download size -- ART extracts and ahead-of-time compiles DEX after install (fully on older versions, profile-guided since Android 7), occupying additional storage
- Memory footprint: RAM while running -- affects process kill priority; exceed memoryClass and you'll see frequent OOM kills
- Analyze APK: Build → Analyze APK in Android Studio -- shows which section (res, dex, lib) is the largest
- adb shell dumpsys meminfo: breaks down RSS into Java heap, native heap, graphics, code -- find where memory is going at runtime
"These three sizes have different optimisation strategies. Download size → AAB + R8 + WebP. Install size → fewer native libs, fewer resources. Memory footprint → image cache limits, fewer retained objects, release caches in onTrimMemory. An app can have a 10MB download size but a 300MB memory footprint — small to install, but killed constantly on low-end devices."
Performance regressions are caught by combining automated benchmarks in CI (catches code-level regressions), Firebase Performance Monitoring (catches production regressions), and Android Vitals alerts (Play Console alerting).
```kotlin
// LAYER 1: Microbenchmarks in CI — catch code-level regressions
// Add benchmark tests for critical paths:
@Test
fun productListJsonParsing() {
    benchmarkRule.measureRepeated {
        Json.decodeFromString<List<Product>>(sampleJson)
    }
    // CI fails if median > 5ms — regression detected before merge
}

// LAYER 2: Macrobenchmark in CI — catch startup/scroll regressions
// Run on a dedicated physical device in CI
@Test
fun coldStartupRegression() {
    benchmarkRule.measureRepeated(
        packageName = "com.example.app",
        metrics = listOf(StartupTimingMetric()),
        startupMode = StartupMode.COLD,
        iterations = 5
    ) {
        pressHome()
        startActivityAndWait()
    }
    // Assert median < 1500ms — fail the PR if startup regressed
}

// LAYER 3: Firebase Performance — production regression detection
// Custom traces for critical user flows
val checkoutTrace = Firebase.performance.newTrace("checkout_flow")
checkoutTrace.start()
// ... user completes checkout ...
checkoutTrace.stop()
// Firebase dashboard: alert if P95 checkout_flow > 3s
// Set up alerts: Performance → Traces → Add alert

// LAYER 4: Play Console Vitals alerts
// Android Vitals → set threshold alerts for:
//   • Crash rate spike > 2%
//   • ANR rate spike > 0.5%
//   • Slow cold start rate > 30%
// Email alert sent when a threshold is crossed

// LAYER 5: Version-based comparison
// Android Vitals → Filter by app version
// Compare v2.1.0 vs v2.0.0 on the same metric
// Slow start rate: v2.0.0 = 8%, v2.1.0 = 22% → regression introduced in v2.1.0
```
- Microbenchmark CI: catch parsing/algorithm regressions in PRs — before merge
- Macrobenchmark CI: catch startup/scroll regressions on physical device — before release
- Firebase Performance alerts: P95 custom trace threshold — production regression notification
- Play Vitals alerts: crash/ANR/slow start threshold email — before users leave bad reviews
- Version comparison: filter Vitals by version to pinpoint which release caused a regression
"The four-layer approach: Microbenchmark (code) → Macrobenchmark (device) → Firebase (production) → Play Vitals (users). Each layer catches different types of regressions. A performance regression that passes all four layers either doesn't exist or is too small to matter. The most common gap in teams: they have none of these layers and only discover regressions from 1-star reviews."
Lazy loading defers work until it's actually needed — don't load data you're not showing, don't initialise objects you might not use. In lists it means loading on demand; in navigation it means loading screens only when navigated to.
```kotlin
// LAZY LOADING IN LISTS — Paging 3
// Load 20 items → user scrolls near the end → load the next 20
val flow = Pager(
    config = PagingConfig(
        pageSize = 20,
        prefetchDistance = 5  // load the next page when 5 items from the end
    ),
    pagingSourceFactory = { dao.paginate() }
).flow.cachedIn(viewModelScope)

// LAZY LOADING IMAGES — Coil loads only visible items
LazyColumn {
    items(products, key = { it.id }) { product ->
        AsyncImage(model = product.imageUrl, ...)  // loads when scrolled into view
        // Coil automatically cancels loads for items that scroll away
    }
}

// LAZY INITIALISATION — object created on first use
class AnalyticsManager {
    private val heavyClient by lazy {
        HeavyAnalyticsClient.create()  // created only when the first method is called
    }
    fun track(event: String) { heavyClient.log(event) }
}

// LAZY NAVIGATION — Compose destination loaded on navigate
NavHost(navController, startDestination = "home") {
    composable("home") { HomeScreen() }
    composable("detail/{id}") { DetailScreen() }  // not instantiated until navigated to
}

// LAZY DEPENDENCY INJECTION — Hilt Provider
class HomeViewModel @Inject constructor(
    private val repo: Provider<HeavyRepository>  // not created until repo.get() is called
) : ViewModel() {
    fun loadIfNeeded() {
        if (shouldLoad) repo.get().load()  // HeavyRepository created on first call
    }
}
```
- Paging 3: load data in pages — only fetches what the user scrolls to see
- Coil: automatically cancels image loads for off-screen items — never loads what's not visible
- Kotlin lazy: delegate initialises on first access — ideal for expensive singletons
- Compose NavHost: destination composables not instantiated until navigated to
- Hilt Provider: inject a factory instead of an instance — create the object only when needed
"Lazy loading's most impactful application: don't load data for screens the user hasn't visited. A common mistake is loading all data for all tabs on launch — even tabs the user may never open. Use ViewModel lifecycle scoped to the NavBackStackEntry: the ViewModel (and its data loading) is only created when the user navigates to that screen."
Real-time UIs require careful optimisation — updating 50 list items every second on the main thread will cause jank. The solution is surgical UI updates via DiffUtil (Views) or stable keys with derivedStateOf (Compose), combined with smart update throttling.
```kotlin
// ❌ Naive approach — full list update on every emission
viewModel.liveData.collect { items ->
    adapter.submitList(items)
    // DiffUtil runs on every update — OK occasionally, but
    // 1000 entries updating every 100ms overwhelms the diff pipeline → jank
}

// ✅ Throttle updates to avoid overloading the main thread
viewModel.liveScores
    .sample(100)                 // emit at most once per 100ms — cap at 10fps
    .flowOn(Dispatchers.Default) // process upstream on a background thread
    .collect { scores ->
        adapter.submitList(scores.toList())
    }

// ✅ DiffUtil in background — for large lists
// ListAdapter already diffs on a background thread automatically
// But: if updates arrive faster than DiffUtil can process them, they queue
// Solution: deduplicate with conflate()
viewModel.liveData
    .conflate()  // drop intermediate updates — only process the latest
    .collect { adapter.submitList(it) }

// ✅ Compose — surgical updates with stable keys + derivedStateOf
val scores by vm.scores.collectAsStateWithLifecycle()
LazyColumn {
    items(scores, key = { it.matchId }) { score ->
        // Each ScoreRow recomposes only if ITS score changed
        // Other rows are skipped by Compose's smart recomposition
        ScoreRow(score = score)
    }
}

// Highlight changes briefly
@Composable
fun ScoreRow(score: LiveScore) {
    val bg by animateColorAsState(
        targetValue = if (score.justUpdated) Color.Yellow else Color.Transparent,
        animationSpec = tween(500)
    )
    Box(modifier = Modifier.background(bg)) { ScoreContent(score) }
}
```
- sample(100ms): cap update rate — 10 UI updates per second is plenty for live scores
- conflate(): drop skipped frames — only process the latest update, skip intermediates
- Compose stable keys: only the changed score row recomposes — other rows stay intact
- flowOn(Dispatchers.Default): process diff on background thread — main thread just receives result
- animateColorAsState: flash highlight on update — tells users which item just changed
"The update rate question: 'How fast does the UI really need to update?' Stock prices at 60fps means rendering 60 new frames per second with identical-looking data. Users can't perceive changes faster than 200ms. Use sample(200) to cap at 5 updates/second — indistinguishable from real-time to users, but 12x less work for the rendering pipeline."
Android GPU Inspector (AGI) is a deep GPU profiling tool -- it captures a single frame and breaks it down to individual draw calls, shader execution times, texture memory, and fill rate. It complements Android Studio's Profiler, which is CPU-focused. For most apps, CPU Profiler + Perfetto is sufficient; reach for AGI only when FrameMetrics shows high GPU_DURATION.
```kotlin
// AGI: download from developer.android.com/agi
// Requires a physical device with an Arm Mali, Qualcomm Adreno, or Imagination GPU
// Does NOT work on the emulator
// AGI uses the GPU debugger API — no code changes needed for frame capture

// What AGI shows per draw call:
// GPU time, vertex count, texture samples, shader invocations

// Use FrameMetrics to find high-GPU frames first, then investigate with AGI
window.addOnFrameMetricsAvailableListener({ _, metrics, _ ->
    val gpuMs = metrics.getMetric(FrameMetrics.GPU_DURATION) / 1_000_000L
    if (gpuMs > 8) log("High GPU frame: ${gpuMs}ms")  // flag for AGI investigation
}, handler)
```
- AGI frame capture: one frozen frame broken down to individual GPU draw calls -- see which draw call costs the most GPU time
- Draw call count: too many draw calls stall the GPU pipeline -- AGI shows the count per frame, target < 200 for UI apps
- Shader profiling: identifies expensive GLSL/SPIR-V shader code -- relevant for apps with custom OpenGL effects
- Use FrameMetrics first: GPU_DURATION > 8ms signals a GPU-bound frame -- that's when to open AGI for the frame details
- Most apps don't need AGI: standard View/Compose apps are CPU-bound, not GPU-bound -- AGI is for game-level GPU debugging
"For standard Android apps with Views or Compose, Android Studio Profiler + Perfetto covers 95% of performance issues. AGI is the specialist tool — reach for it when Perfetto shows high GPU_DURATION in FrameMetrics and you need to know exactly which draw operations are expensive. Game developers and apps with heavy custom rendering use AGI routinely; typical app developers rarely need it."
A performance strategy built from day one is exponentially cheaper than retrofitting it. Establish four things before writing feature code: preventive tools (StrictMode, LeakCanary), architectural patterns that are inherently efficient (offline-first, Paging 3), performance budgets with numbers, and monitoring that alerts before users notice.
```kotlin
// Day 1: preventive tools — crash on violations in debug
if (BuildConfig.DEBUG) {
    StrictMode.setThreadPolicy(
        StrictMode.ThreadPolicy.Builder().detectAll().penaltyDeath().build()
    )
}
// debugImplementation("com.squareup.leakcanary:leakcanary-android")

// Performance budgets enforced in CI:
// Macrobenchmark: fail if cold start > 1.5s
// AAB size check: fail if download size > 20MB
// Lint: abortOnError = true

// Baseline Profile — generate before v1.0 ships
// src/main/baseline-prof.txt covers startup + main navigation
// 30-40% cold start improvement at zero runtime cost
```
- Day 1 preventive tools: StrictMode (crash on main-thread violations) + LeakCanary (auto-detect leaks) -- these cost zero effort and catch issues immediately
- Architectural performance: offline-first (Room as source of truth = fast loads), Paging 3 (never load full list), Coil (correct image loading)
- Performance budgets before features: cold start < 1.5s, scroll P90 > 55fps, APK < 20MB -- objective CI pass/fail criteria
- Baseline Profile before v1.0: generate once, commit to source, 30-40% startup improvement for every user from the first install
- Monitoring setup: Firebase Performance custom traces + Play Vitals alerts + Macrobenchmark in CI -- four independent early-warning layers
"The ROI calculation: fixing a memory leak takes 1 hour on day 1 (StrictMode crashes immediately). It takes 5 hours on day 30 (debug production crash reports, reproduce, fix). On day 300, it's a production incident affecting users. Performance debt compounds like financial debt — the interest rate is very high. Day 1 investment in StrictMode + LeakCanary pays for itself within the first week."
A foreground service is a Service that shows a persistent notification — it tells the user (and the OS) that the app is doing important ongoing work. The OS gives it much higher priority than background services. Android 14 requires declaring the foreground service type explicitly.
```kotlin
// Use a foreground service for: media playback, navigation, ongoing calls, file downloads
// Must show: a persistent notification in the notification shade
// Priority: not killed by the OS (unlike background services)

// AndroidManifest.xml — declare permission and service type (required on Android 14)
// <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
// <uses-permission android:name="android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK" />
// <service android:name=".MusicService" android:foregroundServiceType="mediaPlayback" />

// Modern foreground service with notification
class MusicService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        val notification = NotificationCompat.Builder(this, "music_channel")
            .setContentTitle("Now Playing")
            .setSmallIcon(R.drawable.ic_music)
            .setOngoing(true)  // user can't dismiss
            .build()
        if (Build.VERSION.SDK_INT >= 29) {
            startForeground(
                NOTIF_ID, notification,
                ServiceInfo.FOREGROUND_SERVICE_TYPE_MEDIA_PLAYBACK
            )
        } else {
            startForeground(NOTIF_ID, notification)
        }
        return START_STICKY
    }

    override fun onBind(intent: Intent?): IBinder? = null
}

// Android 14 foreground service types:
// camera: camera capture
// connectedDevice: Bluetooth, USB
// dataSync: uploading/downloading
// health: fitness tracking
// location: ongoing navigation
// mediaPlayback: music/video playback
// mediaProjection: screen recording
// microphone: voice recording
// phoneCall: ongoing calls
// remoteMessaging: messaging apps
// specialUse: other (requires Play approval)

// WorkManager expedited work — alternative to a foreground service for tasks
OneTimeWorkRequestBuilder<UploadWorker>()
    .setExpedited(OutOfQuotaPolicy.RUN_AS_NON_EXPEDITED_WORK_REQUEST)
    .build()
```
- Foreground service: persistent notification + high OS priority — for user-visible ongoing work
- foregroundServiceType: required in manifest for Android 14+ — must match the actual work done
- startForeground: must be called within 5 seconds of service start — else ANR-like behaviour
- START_STICKY: OS restarts service after being killed — appropriate for media playback
- WorkManager expedited: alternative to foreground service for tasks — WorkManager handles type declaration
"Android 14's foreground service type requirement closes a common abuse pattern: apps that declared a vague foreground service to avoid background limits. Now you must declare 'mediaPlayback' or 'dataSync' — and Play Store reviewers can verify your service actually does what the type says. Using 'specialUse' requires justification in your Play Store listing."
Race conditions and synchronisation bugs are the hardest category of Android bugs — they're intermittent, hard to reproduce, and don't leave obvious stack traces. The fix is eliminating shared mutable state by using coroutine-native patterns.
```kotlin
// Common race condition: two threads modify the same list
class CartRepository {
    private val items = mutableListOf<CartItem>()  // ❌ not thread-safe
    fun add(item: CartItem) { items.add(item) }    // called from any thread
    fun remove(id: String) { items.removeIf { it.id == id } }
}
// ConcurrentModificationException when add + remove run simultaneously

// FIX 1: Mutex — serialise access
class CartRepository {
    private val mutex = Mutex()
    private val items = mutableListOf<CartItem>()
    suspend fun add(item: CartItem) = mutex.withLock { items.add(item) }
    suspend fun remove(id: String) = mutex.withLock { items.removeIf { it.id == id } }
}

// FIX 2: Single-thread dispatcher — all access on one thread
class CartRepository {
    private val dispatcher = Dispatchers.IO.limitedParallelism(1)  // single thread
    private val items = mutableListOf<CartItem>()
    suspend fun add(item: CartItem) = withContext(dispatcher) { items.add(item) }
}

// FIX 3: StateFlow — immutable snapshots, thread-safe updates
class CartRepository {
    private val _items = MutableStateFlow(emptyList<CartItem>())
    val items: StateFlow<List<CartItem>> = _items

    fun add(item: CartItem) {
        _items.update { current -> current + item }  // ✅ atomic CAS — thread-safe
    }
    fun remove(id: String) {
        _items.update { current -> current.filter { it.id != id } }  // ✅ atomic
    }
}

// Detect race conditions:
// ThreadSanitizer (TSan) — catches data races in native (NDK) code at runtime
// There is no TSan equivalent for JVM/ART code — eliminate shared mutable
// state by design and stress-test concurrency with kotlinx-coroutines-test
```
- Mutex.withLock: serialises concurrent access — only one coroutine executes the block at a time
- limitedParallelism(1): single-threaded dispatcher — all accesses sequential without explicit locks
- StateFlow.update: atomic compare-and-set — thread-safe without locks for simple state mutations
- Immutable lists in StateFlow: emit new list reference — readers always see a consistent snapshot
- ThreadSanitizer: detects data races at runtime — requires NDK, but finds races that testing misses
"The best solution to race conditions is eliminating shared mutable state entirely. StateFlow.update() uses atomic CAS (Compare-And-Swap) — no lock needed, no blocking. The pattern: state is immutable List, update creates a new List, StateFlow atomically swaps. Readers always see a complete, consistent list. No race, no mutex, no complexity."
Low-end devices (< 2GB RAM, old CPUs) are a significant portion of Android users, especially in emerging markets. An app that's smooth on a Pixel may be unusable on a Redmi 9. Adaptive performance degrades gracefully based on device capability.
```kotlin
// Detect a low-end device
val am = context.getSystemService(ActivityManager::class.java)
val isLowRam = am.isLowRamDevice  // system flag — Go/budget devices
val memInfo = ActivityManager.MemoryInfo()
am.getMemoryInfo(memInfo)
val ramGb = memInfo.totalMem / (1024 * 1024 * 1024)

// Adaptive strategy
data class PerformanceTier(val level: Int)  // 1=low, 2=mid, 3=high

fun getPerformanceTier(context: Context): PerformanceTier {
    val am = context.getSystemService(ActivityManager::class.java)
    val info = ActivityManager.MemoryInfo()
    am.getMemoryInfo(info)
    return when {
        am.isLowRamDevice -> PerformanceTier(1)
        info.totalMem < 3L * 1024 * 1024 * 1024 -> PerformanceTier(2)
        else -> PerformanceTier(3)
    }
}

// Apply tier-based adaptations (helpers below are illustrative app code)
when (performanceTier.level) {
    1 -> {  // Low-end: maximum savings
        disableAllAnimations()
        imageCacheSizeMb = 20      // tiny image cache
        videoQuality = Quality.SD_360p
        prefetchDistance = 0       // no prefetching
        maxConcurrentTasks = 1
    }
    2 -> {  // Mid-range: balanced
        enableBasicAnimations()
        imageCacheSizeMb = 30
        videoQuality = Quality.HD_720p
    }
    3 -> {  // Flagship: everything on
        enableAllAnimations()
        enableBlur()
        videoQuality = Quality.HD_1080p
    }
}
```
- isLowRamDevice(): system flag for Go edition and budget devices — disable heavy features
- totalMem check: RAM-based tier — < 3GB = mid-range, >= 3GB = flagship
- Animation reduction: disable complex animations on low-end — CPU/GPU are the bottleneck
- Image cache sizing: reduce memory cache size on low-RAM — prevents OOM kills
- Prefetch distance = 0: no eager loading on low-end — save RAM for visible content
"Test on a Redmi 9 or similar budget device before every release if your market includes India. What looks smooth on your Pixel dev device can be a slideshow on a 2GB RAM phone. The isLowRamDevice() flag covers the most extreme cases — devices that report themselves as constrained. For mid-range, use the totalMem check. Profile on actual budget hardware, not just emulator."
Coroutine performance issues are subtle — wrong dispatcher, too many coroutines, blocking calls inside suspend functions, and structured concurrency violations. The tools are coroutine debugging in Android Studio and Perfetto with coroutine tracing.
```kotlin
// Enable coroutine debugging (debug builds)
// In Application.onCreate():
System.setProperty("kotlinx.coroutines.debug", "on")
// Coroutine stack traces become readable in the debugger and crash reports

// ISSUE 1: Blocking call inside a suspend function
suspend fun loadData(): Data {
    return OkHttpClient().newCall(request).execute()  // ❌ blocks the coroutine thread
}
// Fix: use Retrofit suspend functions (non-blocking) or wrap with withContext(IO)
suspend fun loadData(): Data = withContext(Dispatchers.IO) {
    blockingClient.fetchSync()  // ✅ runs on the IO thread pool, main thread free
}

// ISSUE 2: Wrong dispatcher — CPU-intensive work on Main
suspend fun processImage(bitmap: Bitmap): Bitmap {
    return applyFilter(bitmap)  // ❌ heavy CPU work — if called from Main, causes jank
}
// Fix: always specify a dispatcher for CPU-heavy work
suspend fun processImage(bitmap: Bitmap) = withContext(Dispatchers.Default) {
    applyFilter(bitmap)  // ✅ Default = CPU-optimised thread pool
}

// ISSUE 3: Coroutine leak — not cancelled on lifecycle end
// Already covered: use viewModelScope / lifecycleScope

// ISSUE 4: Too many coroutines — coroutine overhead
// Each coroutine: ~100 bytes of memory + scheduling overhead
// 10,000 concurrent coroutines: fine
// 100,000 concurrent coroutines: memory pressure
// Fix: use Flow operators instead of launching a coroutine per item
// ❌ items.forEach { launch { process(it) } }
// ✅ items.asFlow().flatMapMerge(concurrency = 8) { process(it) }

// ISSUE 5: Dispatchers.Main misuse
// Dispatchers.Main.immediate vs Dispatchers.Main:
// .immediate: runs immediately if already on Main (no dispatch overhead)
// Use for: UI updates that are latency-sensitive
viewModelScope.launch(Dispatchers.Main.immediate) {
    _state.value = UiState.Loading  // immediate → no frame delay for the UI update
}
```
- kotlinx.coroutines.debug: readable coroutine names in stack traces — essential for debugging
- Blocking in suspend: calling blocking APIs without withContext(IO) stalls the calling thread — a suspend signature alone does not make a call non-blocking
- Dispatchers.Default: CPU-intensive work — separate thread pool from IO, right tool for computation
- flatMapMerge(concurrency): bounded parallelism — process N items concurrently, not all at once
- Dispatchers.Main.immediate: skip dispatch overhead when already on main thread — latency-sensitive UI updates
"The dispatcher choice rule: Dispatchers.IO for I/O-bound work (network, disk), Dispatchers.Default for CPU-bound work (sorting, image processing, JSON parsing of large files). Using IO for CPU work wastes the IO thread pool. Using Default for blocking I/O blocks the CPU threads. The distinction matters at scale: a 60-thread IO pool being used for CPU work stalls all network calls."
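The forEach-vs-flatMapMerge point above can be sketched as a runnable snippet. This assumes the kotlinx-coroutines library, and names like `process` and `processAllBounded` are illustrative, not from any particular codebase:

```kotlin
import kotlinx.coroutines.FlowPreview
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.asFlow
import kotlinx.coroutines.flow.flatMapMerge
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.flow.toList

// Hypothetical per-item work: suspends instead of blocking a thread.
suspend fun process(item: Int): Int {
    delay(5) // simulates non-blocking I/O latency
    return item * 2
}

// Bounded parallelism: at most `concurrency` items in flight at once,
// instead of one coroutine per item.
@OptIn(FlowPreview::class)
suspend fun processAllBounded(items: List<Int>, concurrency: Int = 8): List<Int> =
    items.asFlow()
        .flatMapMerge(concurrency) { item -> flow { emit(process(item)) } }
        .toList()
```

Calling `processAllBounded((1..10_000).toList())` keeps at most 8 coroutines alive at any moment, where `items.forEach { launch { process(it) } }` would create 10,000 at once.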
A performance recovery plan is prioritised by user impact: crashes affect more users than jank, jank affects more users than slow startup. Fix stability first, then perceived speed, then memory. Each phase delivers a measurable metric improvement -- quantify before and after to justify the investment to stakeholders.
```kotlin
// Week 1-2: fix crashes first (2% crash rate = 1 in 50 sessions)
// Play Console → Android Vitals → Crashes → sort by affected users → fix top 3
// Enable StrictMode + LeakCanary → fix all violations before moving on

// Week 3-4: startup (3s → target 1.5s)
// CPU Profiler method trace of Application.onCreate() → defer non-critical SDKs
class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        Timber.plant(Timber.DebugTree()) // fast -- keep
        // defer analytics, maps, push -- move to background after first frame
    }
}

// Week 5-6: jank → ListAdapter + DiffUtil, remove onDraw() allocations
// Week 7-8: memory → LruCache, onTrimMemory(), heap dump comparison
```
- Crashes first: 2% crash rate means 1 in 50 sessions ends in a crash -- fix top 3 crash types from Play Console before touching performance
- Startup second: a 3s cold start is below Play's 'bad behaviour' threshold (5s) but still clearly user-visible -- defer non-critical Application.onCreate() work
- Jank third: replace notifyDataSetChanged() with ListAdapter + DiffUtil, zero-allocation onDraw(), remove overdraw
- Memory last: slow leak is invisible until OOM kill -- heap dump comparison (start vs 30-min session), LruCache for unbounded HashMaps
- Measure every phase: record build time, crash rate, startup, and frame times before and after -- quantify the ROI for stakeholders
"The prioritisation principle: fix what prevents users from using the app before fixing what makes it slow. A crash during checkout is worse than checkout taking 2 extra seconds. A 3-second startup is worse than occasional scroll jank. Memory growth is worst because it's invisible until the OOM kill. Always lead with crash fix, then startup, then rendering, then memory."
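The LruCache bullet above refers to android.util.LruCache, which only exists on-device; the eviction policy behind it can be sketched in pure Kotlin with an access-ordered LinkedHashMap. This is a simplified stand-in for illustration, not the Android class:

```kotlin
// Simplified LRU cache (pure-Kotlin stand-in for android.util.LruCache).
class SimpleLruCache<K, V>(private val maxSize: Int) {
    // accessOrder = true: iteration order runs least- to most-recently used.
    private val map = object : LinkedHashMap<K, V>(16, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<K, V>): Boolean =
            size > maxSize // evict the least-recently-used entry once over capacity
    }

    fun get(key: K): V? = map[key]
    fun put(key: K, value: V) { map[key] = value }
    fun size(): Int = map.size
}
```

The point of the bound is the fix for "unbounded HashMaps": memory stays capped at `maxSize` entries no matter how long the session runs.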
Structured answers for the human side of interviews — leadership, conflict, ownership, growth mindset, and culture fit at top Android teams.
Use the Present → Past → Future framework. Keep it under 2 minutes, stay technical but accessible, and end by connecting your story to this role.
- Present: Your current role, team size, tech stack, impact ("I currently work at X building Y used by Z million users")
- Past: One or two previous experiences that show growth, highlight technical depth (Kotlin, Compose, architecture patterns)
- Future: Why this role — be specific about the company's product, tech stack, or mission that excites you
- Avoid: reciting your resume line-by-line; keep it conversational
- Tailor 20% of it to the role — mention Jetpack Compose if they use it, mention scale if it's a big-tech role
Practice this out loud. It's the most asked question and most people ramble. A crisp 90-second answer signals confidence and communication skill.
Use STAR (Situation, Task, Action, Result). Choose a problem with real technical depth — a performance regression, a memory leak, a complex architecture decision.
- Situation: Set the context (app scale, team size, timeline)
- Task: What you specifically owned — be clear about your role vs the team's
- Action: Walk through your debugging/design process — tools used (Profiler, LeakCanary, Flipper), hypotheses tested, trade-offs considered
- Result: Quantify — "reduced ANR rate by 40%", "cut app startup by 1.2s", "eliminated all OOM crashes in the next release"
- Show how you communicated the problem and solution to stakeholders
Interviewers want to see your problem-solving process, not just the answer. Narrate your thinking: "My first hypothesis was X, I ruled it out because of Y, then I found Z."
This tests your ability to be assertive without being combative, and to commit once a decision is made. Show both — disagreement and commitment.
- Describe the context: what the decision was, why you disagreed (technical rationale, not personal preference)
- How you raised it: data-backed argument, one-on-one first, then team discussion — not passive-aggressive silence
- What happened: either you persuaded them with evidence, or you accepted the outcome and executed fully
- Key principle: "Disagree and commit" — Amazon's leadership principle applies widely. Show you can commit even when you lose the argument
- Avoid: saying you always agree, or that you kept fighting after the decision was final
Pick a real technical disagreement — e.g. "I pushed for Compose but the team chose XML; I wrote the migration guide anyway and we eventually transitioned." Shows maturity.
Don't give a fake weakness ("I work too hard"). Pick a real one that is NOT core to the role, show self-awareness, and demonstrate active improvement.
- Good Android-engineer weaknesses: "I tend to over-engineer solutions — I'm learning to ship iteratively and refine later"; "I used to avoid proactive communication with PMs — I've started weekly async updates"
- Structure: State the weakness → Give a real example of how it caused an issue → What you changed → Evidence of improvement
- Show growth trajectory, not a static flaw
- Avoid: weaknesses that are red flags for the role (e.g. "I struggle to write clean code") or non-answers ("I'm a perfectionist")
The best answers show genuine self-awareness + a concrete system you put in place. Interviewers don't expect you to be perfect — they're testing honesty and growth mindset.
This tests ownership and leadership without authority. Even if you're not a lead, pick a feature, migration, or tech initiative you drove end-to-end.
- Describe the scope: what the project was, how many people involved, what "done" looked like
- Planning: how you broke it down, estimated timelines, identified risks
- Challenges: team misalignment, scope creep, tech blockers — be specific and show how you navigated each
- Stakeholder management: how you kept PMs, designers, backend informed
- Result: shipped on time? What did users/metrics show? What would you do differently?
Good examples for Android engineers: "I led the migration from AsyncTask to Coroutines across 30 screens", "I owned the dark mode rollout", "I drove the modularisation of our app." Pick one with clear before/after metrics.
Show that you don't just sacrifice quality silently — you communicate, scope-cut intelligently, and track tech debt.
- Triage ruthlessly: what's must-have vs nice-to-have for this release?
- Communicate early: tell your manager you're at risk before the deadline, not after it — flag it at 70% confidence, not 100%
- Scope cut, don't quality cut: ship fewer features at full quality rather than all features with bugs
- Track debt: anything you cut corners on goes into a ticket immediately — don't let it disappear
- Show examples: "We had a 2-week sprint compressed to 1 week — I cut the animations feature, kept the core flow, and filed 3 debt tickets. We shipped clean."
Avoid the naive answer "I work overtime to get it done." That signals poor planning and poor boundaries. Show strategic thinking, not heroics.
Be honest and ambitious, but connect your goals to what this company can offer. Avoid vague ("I want to grow") and overly political ("I want to be CTO").
- Two valid paths: IC (Principal/Staff Engineer — deep technical expertise, system design, mentoring) or management (EM — building teams, processes, product strategy)
- Be specific about what "growth" means technically: "I want to be the go-to person for performance and architecture at scale"
- Connect to this role: "The scale here and the complexity of the Android stack are exactly the environment where I can develop that"
- Show you've thought about it: mention specific skills you want to develop (distributed systems, ML on-device, platform engineering)
Companies want people with a long runway. Saying "I want to stay an Android dev forever" can sound stagnant; saying "I plan to move into product in 6 months" raises a red flag. Aim for ambitious-but-realistic.
This tests empathy, directness, and whether you can have hard conversations. Show you give feedback early, privately, and with care — not in a PR comment thread in front of everyone.
- Set the context: what the issue was (code quality, reliability, communication — not personality)
- Approach: privately, with specific examples ("I noticed in the last 3 PRs that error handling is missing from network calls — here's why it matters")
- Technique: SBI model — Situation, Behaviour, Impact. Stick to observable facts, not character judgements
- Outcome: did they improve? Did you follow up? Show you invested in their growth
- Avoid: letting it fester until it becomes a team issue, or delivering feedback in public code reviews
For senior/lead roles, this question is critical. They're testing if you'll avoid hard conversations (red flag) or handle them skillfully. Show you've done it, it was uncomfortable, and you did it anyway.
Be honest but strategic. Never badmouth your current employer — frame everything in terms of what you're moving towards, not what you're running away from.
- Good reasons: limited technical growth, wanting to work at a larger scale, wanting to specialise deeper in Android, exciting product/mission at the new company
- Frame positively: "I've learned a lot at X, and I'm looking for an environment with greater scale and more complex Android challenges"
- Be specific about this company: "Your app's architecture and the Compose-first approach you're taking is exactly where I want to build expertise"
- Avoid: "my manager is toxic", "the pay is bad", "the team is dysfunctional" — even if true
- If asked about money: "Compensation is one factor, but the bigger driver is the technical environment"
Interviewers are checking: are you stable? Are you professional? Do you have real reasons? A concise, forward-looking answer signals maturity. Rambling or negativity signals risk.
This is a test of self-awareness, honesty, and resilience. The failure must be real — not a humble-brag. The learning must be concrete — not generic platitudes.
- Pick a genuine professional failure: shipped a bug to production, missed a deadline, misjudged technical complexity, ignored warning signs in code review
- Own it fully: don't deflect to "the team", "the requirements", "the timeline" — even if those contributed, focus on your part
- What you learned: be specific. "I now write rollback plans before every production deploy" beats "I learned to be more careful"
- What changed: did you implement a process change? Mentor others to avoid the same mistake?
The best failure stories show a non-obvious lesson. "I shipped a crasher to 500K users because I skipped the staging regression. I now block all PRs without a staging sign-off step in CI." That's specific, mature, and impressive.
Show structured, proactive learning — not "I read articles sometimes." Interviewers at top companies expect engineers to be self-driven learners.
- Primary sources: Android Developers Blog, Google I/O sessions, Kotlin blog, Jetpack release notes
- Community: Android Weekly newsletter, Kotlinlang Slack, #android-dev Twitter/X, Philipp Lackner & Roman Elizarov talks
- Deep dives: AOSP source code reading, reading Jetpack library internals (Compose runtime, Room, WorkManager)
- Practice: Side projects — mention a specific one and what you learned from it
- Sharing: Blog, internal tech talks, mentoring — teaching cements learning
Mention something specific and recent — "I was just reading the Compose snapshot system source code last week to understand how recomposition batching works." That's concrete and signals genuine curiosity.
Conflict resolution is a senior engineering skill. Show you can navigate disagreement professionally without escalating unnecessarily or avoiding it.
- Keep it professional — technical conflict (architecture disagreement, code review dispute) is better than interpersonal
- Show you addressed it directly, one-on-one, not passive-aggressively or through a manager first
- Demonstrate empathy: "I tried to understand their perspective — they were worried about migration risk, which was a valid concern I had underweighted"
- Show resolution: compromise, data-driven decision, escalation as last resort
- End with: what the relationship was like after — ideally, you maintained or improved trust
Avoid stories where you "won" the conflict and the other person was wrong. The best stories end with mutual understanding, not victory. Companies want collaborators.
Show you have a system — not that you just "work on what's most urgent." Senior engineers are expected to manage their own prioritisation, not wait to be told.
- Framework: Impact vs Effort matrix — high impact, low effort first (quick wins); high impact, high effort needs planning; low impact tasks defer or delegate
- Align with team priorities: weekly sync with EM/PM to understand what's blocking others vs what's nice-to-have
- Protect deep work: time-block focused coding sessions; batch meetings and messages
- Communicate: if you can't do everything, say so early — "I can deliver X this sprint, but Y will push to next sprint. Does that work?"
- Track it: a simple Notion/Linear board with today/this week/backlog keeps you honest
Concrete example: "Last sprint I had a production bug, two feature tasks, and a code review backlog. I triaged the bug first, delegated two reviews, and flagged to my EM that one feature task would slip." Show the thinking, not just the answer.
Pick one thing, go deep. Breadth of examples is less impressive than owning one story completely with numbers, decisions, and lessons.
- Describe what you built and why it mattered to the business/users
- Quantify impact: DAU change, conversion lift, crash rate drop, revenue impact, load time improvement
- Describe your specific contribution vs the team's — be honest about what YOU did
- Technical depth: what interesting decisions did you make? What trade-offs? What did you learn?
- Would-you-do-it-differently: showing retrospective clarity is a sign of maturity
Strong Android examples: "Rebuilt the home feed with Compose + Paging 3 — reduced scroll jank from 45% to 8% of sessions"; "Implemented background sync with WorkManager — 3x improvement in data freshness without battery impact." Numbers matter.
Mentoring is expected at mid-senior level. Show you do it proactively, not just when asked, and that you invest in others' growth intentionally.
- Structured approach: regular 1:1s to understand their blockers, career goals, and learning gaps
- Code review as teaching: don't just point out problems — explain WHY, link to docs, give alternatives
- Pair programming: work alongside them on complex problems rather than doing it for them
- Safe failure: give them ownership of real tasks with a safety net — let them make small mistakes and learn
- Specific example: "I mentored a junior on Compose state — spent 3 sessions pair-coding, then had them own a full screen rebuild. Their PR needed minimal changes."
Show the outcome from THEIR perspective, not yours. "They went from needing hand-holding on every PR to shipping independently within 2 months" is far more compelling than "I taught them about Compose."
This tests emotional maturity and growth mindset. Defensiveness is a red flag. Overclaiming ("I love feedback!") rings hollow. Show a real, grounded response.
- First reaction: acknowledge it's uncomfortable — don't pretend you're above human emotions
- Process: don't react immediately — take time to reflect. Ask clarifying questions: "Can you give me an example?" "What would great look like?"
- Separate signal from noise: some feedback is directional, some is specific. Identify the actionable core
- Action plan: what specifically will change? Set a 30-day micro-goal and check in
- Follow up: go back to the giver of feedback in 4–6 weeks to show you took it seriously
A real example is powerful here: "I got feedback that my PRs were hard to review because of large diffs. I moved to smaller, atomic commits — my review turnaround went from 3 days to same-day." Shows you acted, not just listened.
Senior engineers can't wait for perfect information. This tests judgment under uncertainty — can you make a call, own it, and correct course if wrong?
- Describe the situation: what decision needed to be made, why you couldn't wait for more data
- How you assessed risk: what's the downside if wrong? Is it reversible? Can you rollback?
- The decision framework: bias towards reversible decisions; seek the least-regret option; define what data would change your mind
- Action: made the call, communicated it to stakeholders, flagged the assumptions
- Outcome: was it right? If not, how quickly did you course-correct and what did you learn?
Great Android example: "We had to decide whether to adopt Compose for a major feature before it hit 1.0. We didn't have stability guarantees but had a hard deadline. I chose to proceed with a rollback-ready flag — we shipped and it held up."
Cross-functional collaboration is critical. Show you engage early, push back constructively on infeasible designs, and maintain trust across disciplines.
- With designers: review Figma specs early — flag platform constraints before implementation, not during; ask about edge cases (loading, error, empty states) upfront
- With PMs: translate tech complexity into business impact — "this will take 2 extra weeks because we need to migrate the data layer" not just "it's technically complex"
- Push back respectfully: "This animation at 60 fps on low-end devices will cause jank — here's an alternative that achieves the same feel with 20% less GPU load"
- Challenge: designers often spec for iOS patterns. Show you can translate or adapt while preserving design intent
The engineers who get promoted fastest are the ones PMs and designers love working with. Show you're not just a code machine — you think about the product, user experience, and business impact.
Generic answers ("you're a great company") are red flags. Specific, researched answers signal genuine interest and signal you'll stay longer once hired.
- Research before the interview: their Android tech blog posts, engineering blog, recent app updates, Play Store reviews, LinkedIn engineering team
- Address product: "I've been using your app for 2 years. The offline-first approach in your checkout flow is something few apps do right — I want to build things like that"
- Address tech: "I saw your blog post on migrating to Jetpack Compose — the approach you took with a parallel Compose tree was elegant, and I want to contribute to that"
- Address mission: connect their mission to what you care about professionally
- Ask a question that proves research: "I noticed your app targets API 24+ — are there plans to explore newer Android APIs as the market shifts?"
Spend 30 minutes on their engineering blog, app reviews, and tech stack before every interview. It's the highest-ROI interview prep most engineers skip.
This tests whether you're a force multiplier — someone who makes the whole team better, not just themselves. Senior engineers are expected to improve the systems around them.
- Identify a real pain point you observed: flaky tests, no code review standards, no Compose guidelines, slow CI, inconsistent error handling
- Solution you proposed and evangelised — not just for yourself but for the team
- Adoption: how did you get buy-in? Demo, internal talk, written RFC, gradual rollout?
- Impact: time saved per PR, fewer production bugs, faster onboarding, reduced flakiness
Great Android examples: "I introduced snapshot testing for our Compose components — reduced visual regression reports by 70%"; "I wrote our internal Compose architecture guide — cut new-feature ramp-up time by half for new joiners."
Be genuine — interviewers can tell when you're performing an answer. Authentic motivation signals culture fit and longevity.
- Intrinsic motivators engineers often cite: solving hard problems, shipping things users love, learning new things, making teammates more effective
- Be specific to Android: "I love the constraint of mobile — battery, memory, network variability. Optimising for those constraints is a creative puzzle I never get tired of"
- Connect to their context: if they're at scale, mention you're energised by systems that have to work reliably for millions
- Avoid generic: "I love coding" or "I love building products" — everyone says this
Pair motivation with a story: "I'm most energised when I can look at a complex crash report, dig into it, and come out the other side with a system-level fix that prevents it class-wide. I did this with our OOM crashes last quarter." Makes it real.
Estimation is a skill, not a guess. Show a structured approach, honesty about uncertainty, and a mature response to being wrong.
- Estimation approach: break the task into sub-tasks, estimate each, add integration and testing time, add a buffer for unknown unknowns (10–20%)
- Communicate confidence levels: "This is a 3-day estimate with medium confidence — I've never touched this module before"
- Check early: assess at 30% of the timeline, not at the deadline — surface risk before it's a crisis
- When estimate is off: communicate immediately, give a revised estimate with reasoning, offer to scope-cut if needed
- Learn from it: retrospectively — was it a misunderstood requirement? Hidden complexity? Plan for next time
"I was wrong and I said so on day 3 of a 5-day task" is a far better answer than an engineer who misses deadlines silently. Proactive communication about slippage is a highly valued professional trait.
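The buffer arithmetic in the first bullet can be made concrete. The function name, numbers, and the 15% default below are illustrative, not a standard formula:

```kotlin
// Sum sub-task estimates, then add a buffer for unknown unknowns (10-20%).
// Name and default value are illustrative only.
fun bufferedEstimateDays(subTasks: List<Double>, buffer: Double = 0.15): Double {
    require(buffer in 0.10..0.20) { "buffer for unknown unknowns should be 10-20%" }
    return subTasks.sum() * (1 + buffer)
}
```

For example, sub-tasks of 1, 2, and 1.5 days sum to 4.5 days; with a 15% buffer the quoted estimate becomes roughly 5.2 days.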
Almost every real job has legacy code. Companies want engineers who can work in, understand, and incrementally improve messy codebases — not those who want to rewrite everything.
- Understand first: read the code, run it, understand why decisions were made before judging them
- Boy Scout Rule: "leave it cleaner than you found it" — improve code you touch, don't rewrite things you're not touching
- Strangler Fig pattern: incrementally replace legacy modules — wrap the old API, route new traffic to the new implementation, decommission when stable
- Tests before refactor: never refactor without tests — you need a safety net to verify behaviour is preserved
- Android-specific: migrating from AsyncTask → Coroutines, XML → Compose, SQLite → Room — frame it as incremental and feature-gated
Avoid saying "I'd rewrite it from scratch." That signals inexperience. The correct answer shows patience, strategic thinking, and respect for the constraints (time, risk, team) that created the legacy code in the first place.
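The Strangler Fig bullet above can be sketched as a feature-flag-gated facade. All names here are hypothetical, chosen to echo the SQLite → Room migration mentioned in the list:

```kotlin
// All names are hypothetical. The facade keeps callers unaware of which
// implementation serves them, so the legacy path can be deleted later
// without touching call sites.
interface UserStore {
    fun load(id: String): String
}

class LegacySqliteStore : UserStore {
    override fun load(id: String) = "legacy:$id" // stands in for the old SQLite code
}

class RoomStore : UserStore {
    override fun load(id: String) = "room:$id" // stands in for the new Room DAO
}

// Strangler Fig: route traffic behind a flag, ramp up gradually,
// then decommission the legacy implementation once the new one is stable.
class StranglerUserStore(
    private val legacy: UserStore,
    private val modern: UserStore,
    private val useModern: () -> Boolean // a remote-config flag in practice
) : UserStore {
    override fun load(id: String) =
        if (useModern()) modern.load(id) else legacy.load(id)
}
```

Because `useModern` is a function, the rollout percentage can change server-side without a release, and rollback is instant if the new path misbehaves.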
"No questions" is a red flag. Thoughtful questions show genuine interest, preparation, and signal you're evaluating them too. Have 3–5 ready, ask 2–3.
- Tech stack depth: "What's the biggest technical challenge the Android team is currently working through? How are you approaching it?"
- Team dynamics: "How does the Android team collaborate with backend and design? What does a typical feature cycle look like end-to-end?"
- Growth: "What does the growth path look like for a senior Android engineer here? Are there examples of engineers who've moved into tech lead roles?"
- Culture: "What's something about working here that you didn't know before you joined and wish you had?"
- Product direction: "Where is the Android app heading in the next 12 months? What are the big bets?"
- Avoid: questions about salary/benefits in the first interview, questions whose answers are on their website
The best questions are specific to the company and show you've done research. "I saw you're migrating to Compose — who's driving that, and how are you handling the hybrid period?" is 10x better than "What's the culture like?"
Salary negotiation is a skill. Never anchor too early, never give a number without research, and never apologise for knowing your worth.
- Research first: levels.fyi, Glassdoor, LinkedIn Salary, AmbitionBox (India) — know the band for your level and city
- Delay if possible: "I'm open to a competitive offer. Could you share the band for this role first?" — gets you info without anchoring
- If forced to give a number: give a range where the bottom is your target: "Based on my research and experience, I'm looking in the ₹X–₹Y range" (or $X–$Y)
- Total compensation: include ESOPs/RSUs, joining bonus, benefits — the number isn't just base salary
- Never accept on the spot: "I'm very excited about this opportunity. Can I have 24 hours to review the full offer?"
- Negotiate: a counteroffer is always professional. "I was hoping for X — is there flexibility?" is never rude
Companies expect negotiation. The first offer is rarely the best offer. The worst they can say is "this is our best offer" — and even that is useful information. Never leave negotiation on the table.
End-to-end Android system design walkthroughs — architecture decisions, data flow, caching strategies, trade-offs, and real-world patterns asked at senior interviews.