🏗️ System Design Hard 2025–26

Design an Image Loading Library

A complete mobile-first breakdown of how to build Coil or Glide from scratch — covering the 3-tier cache hierarchy, LRU eviction, Bitmap pooling, request deduplication, lifecycle awareness, and the full decode pipeline.

Understanding the Problem

This question is deceptively hard. On the surface it sounds like "implement Glide" — but interviewers are really testing whether you understand memory management, cache design, concurrency, and Android lifecycle at a deep level. Start by scoping it carefully.

The key insight: loading an image is a 5-step pipeline — Request → Cache Check → Network Fetch → Decode → Display. Every design decision is about making this pipeline faster, cheaper, and safer.

Functional Requirements
Core Requirements
  1. Load an image from a URL and display it in an ImageView or Compose AsyncImage
  2. Cache images at two levels — memory (fast) and disk (persistent across launches)
  3. Support placeholder images while loading and error images on failure
Below the line (out of scope):
  1. Video / GIF support
  2. Image transformations (blur, circle crop) — mention as extensible, not core
  3. SVG / vector format support
  4. Cross-fade animations between placeholder and result
Non-Functional Requirements
Core Requirements
  1. Zero main thread blocking — all I/O and decode must happen on background threads
  2. Memory efficient — no OOM crashes; Bitmap memory must stay bounded
  3. Lifecycle aware — cancel in-flight requests when the view or Activity is destroyed
  4. No duplicate network requests — if two views request the same URL simultaneously, fetch only once
Below the line (out of scope):
  1. P99 latency guarantees (cache hit should be <1ms, but we won't guarantee it contractually)
  2. Cross-process image sharing

A strong opening move: draw the 3-tier cache before anything else. It's the backbone of every image loading library and immediately signals you know the domain.


The Setup
The 3-Tier Cache Hierarchy

Every image loading library — Glide, Coil, Picasso, Fresco — is built around the same three-level cache. The library checks each level in order; on a miss it falls through to the next. The decision about what lives at each level is the most important design choice.

Tier 1: Memory Cache · Speed: <1ms · Size: ~15% of app heap · Eviction: LRU · Stores: decoded Bitmaps
Tier 2: Disk Cache · Speed: ~5–50ms · Size: ~250MB · Eviction: LRU · Stores: compressed bytes
Tier 3: Network · Speed: 200ms–3s · Size: unlimited · Eviction: N/A · Stores: bytes at the origin server
🔷 Why store decoded Bitmaps in memory but compressed bytes on disk?

Decoded Bitmaps are expensive to create (CPU cost for decompression) but fast to draw. Storing them in memory gives sub-millisecond display. But Bitmaps are large — a 1080×1920 ARGB_8888 image is 8MB. Disk cache stores the compressed JPEG/PNG bytes instead (~100KB), which is 80× smaller. On a disk cache hit, you still pay the decode cost, but you avoid the network round-trip. This is the right trade-off for mobile.
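The arithmetic is worth having at your fingertips. A quick sketch (ARGB_8888 is 4 bytes per pixel; the function name is illustrative, not a library API):

```kotlin
// Decoded Bitmap size = width × height × bytes-per-pixel.
// ARGB_8888 uses 4 bytes per pixel; RGB_565 would use 2.
fun decodedSizeBytes(width: Int, height: Int, bytesPerPixel: Int = 4): Int =
    width * height * bytesPerPixel

fun main() {
    println(decodedSizeBytes(1080, 1920))  // 8294400 bytes — the ~8MB from the text
    println(decodedSizeBytes(200, 200))    // 160000 bytes — ~156KB for a thumbnail
}
```

The same full-screen image as a compressed JPEG is typically ~100KB on disk, which is where the ~80× figure comes from.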

Core Components

Before drawing the architecture, align on the six components you'll be designing:

  1. ImageLoader — the public API entry point. Accepts a request and orchestrates the pipeline.
  2. MemoryCache — an LRU map of CacheKey → Bitmap. Size-bounded by system RAM.
  3. DiskCache — an LRU file store (using DiskLruCache). Stores compressed image bytes keyed by URL hash.
  4. Fetcher — downloads bytes from the network using OkHttp. Handles HTTP caching headers.
  5. Decoder — converts raw bytes into a Bitmap using BitmapFactory. Handles downsampling to target size.
  6. BitmapPool — a pool of reusable Bitmap objects to reduce GC pressure. Glide's killer feature.

High-Level Design
1) The Request Pipeline

Every load(url) call flows through this pipeline. The key rule: check cheapest source first, fall through to expensive sources only on a miss.

[Diagram: Image Loading Pipeline — check cheapest tier first. Caller → ImageLoader (Engine + lifecycle check) → Memory Cache (HIT → Bitmap) → on MISS → Disk Cache (HIT → bytes → Decoder) → on MISS → Fetcher (OkHttp network call) → Decoder (BitmapFactory + BitmapPool borrow/return) → Target (ImageView or Compose), with results written back to the cheaper tiers.]

2) Memory Cache — LRU with Size Bounds

The memory cache is an LRU (Least Recently Used) map bounded by total Bitmap byte size (not count). Android's LruCache provides this out of the box. The typical sizing is 15% of the available app heap.

// ─── MemoryCache ───────────────────────────────────────────────

class MemoryCache(maxSizeBytes: Int, private val bitmapPool: BitmapPool) {

    private val lruCache = object : LruCache<CacheKey, Bitmap>(maxSizeBytes) {
        override fun sizeOf(key: CacheKey, bitmap: Bitmap): Int =
            bitmap.allocationByteCount  // size in bytes, not entry count!

        override fun entryRemoved(evicted: Boolean, key: CacheKey,
                                  oldValue: Bitmap, newValue: Bitmap?) {
            if (evicted) bitmapPool.put(oldValue)  // recycle into pool, don't GC it
        }
    }

    fun get(key: CacheKey): Bitmap? = lruCache.get(key)
    fun put(key: CacheKey, bitmap: Bitmap) = lruCache.put(key, bitmap)

    companion object {
        // Sizing: use ~15% of the app's available heap
        fun defaultSize(context: Context): Int {
            val memClassMb = context.getSystemService(ActivityManager::class.java)
                .memoryClass  // per-app heap limit, in MB
            return (memClassMb * 1024 * 1024 * 15) / 100
        }
    }
}

⚠️ Common mistake: Sizing the cache by count (e.g. "hold 50 images") is wrong — a thumbnail is 10KB and a full-screen image is 8MB. Always size by total bytes. Glide, Coil, and Picasso all do this.


3) BitmapPool — Eliminating GC Pressure

When the memory cache evicts a Bitmap, instead of letting the GC collect it, recycle it into a pool. The next decode operation can borrow a same-sized Bitmap from the pool and decode directly into it using BitmapFactory.Options.inBitmap. This eliminates the most expensive GC pauses in image-heavy apps.

// Decode with inBitmap to reuse memory from the pool

fun decode(bytes: ByteArray, targetWidth: Int, targetHeight: Int,
           bitmapPool: BitmapPool): Bitmap {
    val options = BitmapFactory.Options()

    // Pass 1: measure dimensions without allocating
    options.inJustDecodeBounds = true
    BitmapFactory.decodeByteArray(bytes, 0, bytes.size, options)

    // Calculate inSampleSize to avoid loading full-res into memory
    options.inSampleSize = calculateSampleSize(options, targetWidth, targetHeight)
    options.inJustDecodeBounds = false

    // Try to borrow a reusable Bitmap from pool
    options.inMutable = true
    options.inBitmap = bitmapPool.get(
        options.outWidth / options.inSampleSize,
        options.outHeight / options.inSampleSize,
        options.outConfig ?: Bitmap.Config.ARGB_8888  // outConfig requires API 26+
    )  // may be null — BitmapFactory allocates a fresh Bitmap if so

    // Pass 2: actual decode, reusing pool Bitmap if available
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size, options)
}
🔷 Pattern: inSampleSize Downsampling

Loading a 4K photo into a 200×200 thumbnail slot without downsampling wastes 200× the memory. Always calculate inSampleSize based on the target view dimensions before decoding. A value of 2 halves each dimension (quarter the pixels); 4 gives one-sixteenth the pixels. Coil does this automatically — it's one of the core reasons it outperforms naive implementations.
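The decode snippet above calls calculateSampleSize(options, targetWidth, targetHeight) without defining it. Here is the underlying logic, sketched with explicit dimensions (a caller would pass options.outWidth / options.outHeight); this is the power-of-two halving loop from the Android BitmapFactory documentation, not Coil's exact implementation:

```kotlin
// Largest power-of-two inSampleSize whose decoded output is still at least
// as large as the requested target dimensions.
fun calculateSampleSize(
    sourceWidth: Int, sourceHeight: Int,
    targetWidth: Int, targetHeight: Int
): Int {
    var sampleSize = 1
    if (sourceHeight > targetHeight || sourceWidth > targetWidth) {
        val halfWidth = sourceWidth / 2
        val halfHeight = sourceHeight / 2
        // Keep doubling while the halved dimensions still cover the target
        while (halfWidth / sampleSize >= targetWidth &&
               halfHeight / sampleSize >= targetHeight) {
            sampleSize *= 2
        }
    }
    return sampleSize
}
```

The loop doubles because BitmapFactory rounds any non-power-of-two value down to the nearest power of two anyway.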


4) Request Deduplication

In a RecyclerView with 50 cells loading the same avatar image simultaneously, a naive implementation fires 50 network requests for the same URL. The fix: maintain a map of in-flight requests keyed by URL. If a second request arrives for the same URL, attach it to the existing in-flight job rather than starting a new one.

// Engine — deduplicates concurrent requests for the same key

class Engine {
    private val activeRequests = ConcurrentHashMap<CacheKey, Deferred<Bitmap>>()

    suspend fun load(key: CacheKey, request: ImageRequest): Bitmap {
        // Memory cache hit — return immediately
        memoryCache.get(key)?.let { return it }

        // Deduplicate: attach to the existing job if one is already in-flight.
        // computeIfAbsent is atomic on ConcurrentHashMap, so the async block
        // starts at most once per key (a plain getOrPut could start duplicates).
        return activeRequests.computeIfAbsent(key) {
            coroutineScope.async(Dispatchers.IO) {
                try {
                    val bitmap = fetchDecodeSave(key, request)
                    memoryCache.put(key, bitmap)
                    bitmap
                } finally {
                    activeRequests.remove(key)  // clean up when done
                }
            }
        }.await()
    }
}

5) Lifecycle Awareness — Cancelling Requests

If a user scrolls past a cell before its image loads, the in-flight request must be cancelled to avoid wasting bandwidth and updating a view that's now off screen (or recycled to show a different item). The solution: tie each request to the target's lifecycle using a LifecycleObserver or Compose's DisposableEffect.

// ─── Cancellation tied to View lifecycle ──────────────────────

fun ImageView.load(url: String, imageLoader: ImageLoader) {
    // Cancel any previous request on this view
    getTag(R.id.image_loader_tag)?.let { (it as? Job)?.cancel() }

    val job = imageLoader.coroutineScope.launch {
        val bitmap = imageLoader.execute(ImageRequest(url, this@load))
        // Switch to Main thread to update the view
        withContext(Dispatchers.Main) {
            setImageBitmap(bitmap)
        }
    }

    // Store job reference on the view — RecyclerView reuse cancels old job
    setTag(R.id.image_loader_tag, job)
}

// ─── Compose version ───────────────────────────────────────────
@Composable
fun AsyncImage(url: String, ...) {
    var bitmap by remember { mutableStateOf<Bitmap?>(null) }
    LaunchedEffect(url) {  // re-launches when URL changes; cancels previous
        bitmap = imageLoader.execute(ImageRequest(url))
    }
    bitmap?.let { Image(it.asImageBitmap(), ...) }
}

Low-Level Design

Three precise flows that interviewers drill into. Be able to draw and explain each one.

Whiteboard Overview

Draw this in the first 5 minutes. It shows the full request lifecycle — happy path (memory hit), disk hit, and full network miss — on a single diagram.

[Full whiteboard diagram — Memory Hit / Disk Hit / Network Miss paths on one picture. Caller (ImageView) → Engine (dedup + dispatch on Dispatchers.IO) → Memory Cache (HIT → Bitmap, straight to setImageBitmap) → on MISS → Disk Cache (HIT → bytes → Decoder) → on MISS → Fetcher (OkHttp network call) → raw bytes written back to Disk Cache → Decoder (BitmapFactory, borrowing/returning via BitmapPool) → Bitmap written back to Memory Cache → Target ImageView.]
Flow 1 — Full Image Load (Network Miss)

The worst-case path: nothing in any cache. This covers every component in the system.

Participants: Caller · Engine · Memory Cache · Disk Cache · Fetcher · Decoder

Step 1: imageLoader.load(url, targetView)
Caller invokes the public API. Engine builds a CacheKey from URL + target dimensions (e.g. 200×200). Without dimensions, a 400×400 and a 200×200 decode of the same URL would incorrectly share a cache entry.

Step 2: memoryCache.get(key) → null
Memory cache miss. Key was not found — either first load or was evicted. Time elapsed: <1ms.

Step 3: diskCache.get(urlHash) → null
Disk cache miss. Check if a file exists for the URL hash. No file found. Time elapsed: ~5ms.

Step 4: okHttpClient.get(url) → ResponseBody
Network fetch on Dispatchers.IO. OkHttp respects HTTP caching headers (ETag, Cache-Control). Response body streamed — not fully buffered — into the disk cache write stream.

Step 5: diskCache.put(urlHash, bytes)
Bytes written to disk cache atomically (write to temp file, rename on completion). Future requests get a disk hit without re-fetching.

Step 6: decoder.decode(bytes, targetW, targetH)
Two-pass decode: pass 1 reads only the image header to get dimensions (inJustDecodeBounds). Calculates inSampleSize. Borrows a reusable Bitmap from BitmapPool. Pass 2 decodes into that Bitmap.

Step 7: Bitmap returned to Engine
Decoder returns the decoded Bitmap. Engine writes it to the memory cache for future hits.

Step 8: withContext(Main) { view.setImageBitmap(bitmap) }
Switch to main thread. Check that the target view's tag still matches this request (view may have been recycled). If it matches, set the Bitmap. If not, drop silently.
Flow 2 — Memory Cache Hit (Best Case)

The happy path — the same image was loaded recently and the Bitmap is still in memory.

Participants: Caller · Engine · Memory Cache

Step 1: imageLoader.load(url, targetView)
Same call as before. Engine builds the same CacheKey.

Step 2: memoryCache.get(key) → Bitmap ✓
Cache HIT. Returns the decoded Bitmap immediately. No disk I/O, no network, no decode. Time: <1ms.

Step 3: withContext(Main) { view.setImageBitmap(bitmap) }
Bitmap set on the view. From a RecyclerView scroll perspective, the image appears before the next frame — no flicker, no placeholder flash.
Flow 3 — LRU Eviction & BitmapPool Recycling

When the memory cache is full and a new Bitmap needs to be inserted, the LRU entry is evicted. Critically, this evicted Bitmap is not handed to the GC — it goes into the BitmapPool for reuse.

Participants: Engine · Memory Cache · BitmapPool · Decoder

Step 1: memoryCache.put(newKey, newBitmap)
Engine tries to write a newly decoded Bitmap into the memory cache. Cache is at capacity (e.g. 32MB).

Step 2: LruCache evicts leastRecentlyUsed → Bitmap(800×600)
LruCache.entryRemoved() callback fires with the evicted Bitmap. Instead of calling bitmap.recycle() and waiting for GC, we pass it to BitmapPool.

Step 3: bitmapPool.put(evictedBitmap)
BitmapPool stores the Bitmap keyed by (width, height, config). It's now available for the next decode of an image with matching dimensions. Zero GC pressure.

Step 4: bitmapPool.get(800, 600, ARGB_8888) → Bitmap
Next decode requesting an 800×600 Bitmap gets the pooled instance. BitmapFactory.Options.inBitmap is set to this instance — the decoder writes directly into the existing allocation.

Step 5: memoryCache.put(nextKey, reusedBitmap)
The reused Bitmap, now containing the new image's pixels, is stored in memory cache. The cycle continues with zero new heap allocations for the decode step.
🔷 Pattern: Separate Active vs Cached References

Glide maintains two memory tiers: an Active Resources map (WeakReferences to Bitmaps currently shown on screen) and the LRU MemoryCache. When a view releases a Bitmap (scrolled off screen), it moves from Active Resources to the LRU cache. This prevents a displayed Bitmap from being evicted just because it's LRU — a subtle but important correctness detail.
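A minimal sketch of that two-tier idea in plain Kotlin, with String standing in for Bitmap so it runs off-device. Glide's real ActiveResources additionally uses a ReferenceQueue to reap cleared WeakReferences; that part is omitted here:

```kotlin
import java.lang.ref.WeakReference

// Active tier: weak references to images currently attached to views.
// On release, the image is demoted into the LRU cache (the lambda below),
// so an on-screen image can never be evicted by LRU pressure.
class ActiveResources<K, V : Any>(private val demoteToLru: (K, V) -> Unit) {
    private val active = HashMap<K, WeakReference<V>>()

    fun acquire(key: K, value: V) { active[key] = WeakReference(value) }

    fun get(key: K): V? = active[key]?.get()

    // Called when the last view showing this image lets go of it
    fun release(key: K) {
        val value = active.remove(key)?.get() ?: return
        demoteToLru(key, value)
    }
}
```

The WeakReference matters: if a request is leaked without a release, the GC can still reclaim the Bitmap once no view holds it.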


Potential Deep Dives
1) How would you support animated GIFs?

GIFs are a sequence of frames. Store the raw GIF bytes in the disk cache (same as JPEG). On decode, use Android's ImageDecoder API (API 28+) or a custom frame decoder for older versions. The result is not a Bitmap but an AnimatedImageDrawable or a Movie object. The Target interface needs to accept Drawable instead of just Bitmap. Memory cost is frames × frame_size — apply strict size limits for GIF cache entries.

2) How do you add image transformations (circle crop, blur)?

Add a Transformation interface to the pipeline: fun transform(pool: BitmapPool, input: Bitmap): Bitmap. Each transformation is applied after decode, before caching. Include the transformation class name in the CacheKey — a circle-cropped and a plain version of the same URL must be separate cache entries. Transformations can borrow from and return to the BitmapPool to stay allocation-free.

3) How do placeholders and error images work?

Placeholders are set synchronously on the main thread before the async pipeline starts — they must be resource IDs, not URLs (no async loading for placeholders). When the pipeline completes (success or error), replace the placeholder with the result on the main thread. Use a CrossFadeDrawable to animate the transition. Error drawables follow the same pattern but are set in the catch block.

4) How do you handle HTTP caching headers?

OkHttp handles this automatically if you configure it with a cache directory. For Cache-Control: max-age=3600, OkHttp serves from its HTTP cache without hitting the network. Your disk cache and OkHttp's cache serve different purposes — OkHttp caches compressed bytes with HTTP semantics (ETag, conditional GET); your disk cache is optimised for offline access with LRU eviction. You can use both in tandem or let OkHttp's cache replace your disk cache for simpler setups.
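Wiring up OkHttp's HTTP cache is a small configuration step (a sketch; the directory name and the 50MB budget are arbitrary choices, not OkHttp defaults):

```kotlin
import java.io.File
import okhttp3.Cache
import okhttp3.OkHttpClient

// Once a cache directory is set, OkHttp transparently stores responses,
// honors Cache-Control / max-age, and issues conditional GETs with ETags.
fun buildHttpClient(appCacheDir: File): OkHttpClient =
    OkHttpClient.Builder()
        .cache(Cache(File(appCacheDir, "http_cache"), 50L * 1024 * 1024))  // 50MB budget
        .build()
```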

5) How would you implement preloading for a RecyclerView?

Use RecyclerView.addOnScrollListener to detect scroll direction and velocity. When the user is scrolling down, prefetch the next N item URLs that are about to enter the viewport. Issue imageLoader.preload(url, targetSize) — which runs the full pipeline (fetch + decode + cache) without setting the result on any view. When the cell actually comes into view, it gets a memory cache hit. Glide ships a ListPreloader/RecyclerViewPreloader helper for exactly this; in Coil the equivalent is enqueuing an ImageRequest with no target.


What is Expected at Each Level
Mid-level
  • 2-tier cache (memory + disk) explained
  • LRU eviction policy understood
  • No main-thread blocking
  • Placeholder + error image support
  • Can describe inSampleSize downsampling
Senior
  • BitmapPool and inBitmap reuse
  • Request deduplication with in-flight map
  • Lifecycle-aware cancellation
  • CacheKey includes target dimensions
  • Active Resources vs LRU distinction
Staff+
  • Transformation pipeline design
  • RecyclerView prefetching strategy
  • GIF / animated drawable support
  • OkHttp HTTP cache vs disk cache trade-offs
  • Custom target types beyond ImageView

20 Must-Know Interview Questions

The most frequently asked follow-ups in real Image Loading Library design interviews. Know each one cold.

Q1 (Easy): What are the three tiers of cache in an image loading library and what does each store?

Tier 1 — Memory Cache: stores decoded Bitmap objects in RAM. Sub-millisecond access. Sized to ~15% of app heap. Evicts via LRU. Tier 2 — Disk Cache: stores compressed image bytes (JPEG/PNG) as files. 5–50ms access. Sized to ~250MB. Evicts via LRU across files. Tier 3 — Network: the origin server. 200ms–3s. Unlimited but costs bandwidth. The library checks each tier in order — memory first, then disk, then network — and writes the result back to cheaper tiers on a miss.

Q2 (Easy): Why do we store decoded Bitmaps in memory cache but compressed bytes in disk cache?

Decoded Bitmaps are large but fast to display — a 1080×1920 ARGB_8888 image is 8MB. Keeping them decoded in memory means zero CPU cost on repeated display. But at 8MB per image, a typical ~32MB memory cache holds only ~4 full-screen images. Disk cache stores the original compressed JPEG (~100KB) — 80× smaller — so you can cache hundreds of images without filling the disk. The trade-off: disk hits still pay the decode CPU cost, but save the network round-trip.

Q3 (Easy): What is LRU eviction and why is it the right policy for image caches?

LRU (Least Recently Used) evicts the item that was accessed furthest back in time. For image caches, it works because temporal locality holds — if you viewed an image recently, you're likely to view it again soon (e.g. scrolling a list up and down). The alternative, FIFO, would evict images in insertion order regardless of access recency. Android's LruCache uses a LinkedHashMap with accessOrder = true — on every get/put, the accessed entry moves to the tail. Eviction picks the head (least recently used).
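The mechanism is small enough to sketch in plain Kotlin. This illustrates what Android's LruCache does internally (bounded by total bytes, evicting from the head of an access-ordered LinkedHashMap); it is not the actual framework source:

```kotlin
// LRU bounded by total value bytes, not entry count.
class ByteLruCache<K>(private val maxBytes: Int) {
    // Third constructor arg: accessOrder = true — get() moves entries to the tail
    private val map = LinkedHashMap<K, ByteArray>(16, 0.75f, true)
    private var currentBytes = 0

    fun get(key: K): ByteArray? = map[key]  // access promotes entry to most-recent

    fun put(key: K, value: ByteArray) {
        map.remove(key)?.let { currentBytes -= it.size }
        map[key] = value
        currentBytes += value.size
        // Evict least-recently-used entries (head of the map) until under budget
        val iterator = map.entries.iterator()
        while (currentBytes > maxBytes && iterator.hasNext()) {
            val eldest = iterator.next()
            currentBytes -= eldest.value.size
            iterator.remove()
        }
    }

    fun contains(key: K): Boolean = map.containsKey(key)
}
```

The accessOrder flag is the whole trick: it turns insertion order into recency order, so the map's head is always the eviction candidate.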

Q4 (Medium): What is a BitmapPool and why does Glide use it?

A BitmapPool is a collection of recycled Bitmap objects grouped by (width, height, config). When the memory cache evicts a Bitmap, instead of letting it be GC'd, it's placed in the pool. The next decode that needs a Bitmap of the same size borrows from the pool and sets it as BitmapFactory.Options.inBitmap — the decoder writes new pixels directly into the existing allocation. The benefit: zero heap allocation for the decode, eliminating GC pauses. In a RecyclerView scrolling 10 images/second, this prevents the GC from triggering every few frames and causing jank.

Q5 (Medium): What is inSampleSize and why is it critical for memory efficiency?

inSampleSize is a BitmapFactory.Options field that downsamples an image during decode. A value of 2 halves each dimension (¼ the pixels); 4 gives 1/16 the pixels. Without it, loading a 4K photo (3840×2160, ~32MB decoded at ARGB_8888) into a 100×100 thumbnail wastes over 800× the memory. The correct flow: (1) Set inJustDecodeBounds = true and decode to get image dimensions without allocating pixels. (2) Calculate the largest power-of-two inSampleSize whose output is still ≥ the target dimensions. (3) Set inJustDecodeBounds = false and decode at that sample size. Every production library does this.

Q6 (Medium): How do you handle request deduplication when 50 RecyclerView cells load the same avatar URL simultaneously?

Maintain a ConcurrentHashMap<CacheKey, Deferred<Bitmap>> of in-flight requests. When a second request arrives for the same URL+dimensions, check the map — if a Deferred already exists, call .await() on it instead of starting a new coroutine. All 50 cells share one network request and one decode. Use an atomic operation (ConcurrentHashMap.computeIfAbsent) rather than a plain get-then-put to avoid races. Clean up the entry in a finally block so the next independent load after completion doesn't incorrectly reuse a finished Deferred.

Q7 (Medium): How do you cancel an image request when a RecyclerView cell is recycled?

Store the coroutine Job as a tag on the target view: view.setTag(R.id.image_loader_tag, job). When a new request starts on the same view, retrieve the old tag and call job.cancel() before launching the new one. In a RecyclerView, onBindViewHolder calls load(url) — which always cancels the previous job on that view instance. In Compose, use LaunchedEffect(url) — it automatically cancels and re-launches whenever the URL key changes. No manual cancellation code needed.

Q8 (Medium): Why does the CacheKey include target dimensions, not just the URL?

The same URL loaded into a 100×100 thumbnail and a 1080×1080 full-screen view produces two different decoded Bitmaps. If you keyed only by URL, the 1080×1080 Bitmap would be returned for the 100×100 slot — wasting 100× the memory. Conversely, the 100×100 Bitmap would look pixelated in the full-screen view. The correct key is CacheKey(url, width, height, config, transformations). This means the same URL can have multiple entries in the memory cache, each optimised for a specific target size.
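A minimal CacheKey as a Kotlin data class — equals/hashCode come for free, which is exactly what the map-based caches need. The field set follows the answer above; config is simplified to a String so the sketch runs off-device:

```kotlin
// Two loads of the same URL at different target sizes or with different
// transformations must be distinct cache entries.
data class CacheKey(
    val url: String,
    val width: Int,
    val height: Int,
    val config: String = "ARGB_8888",                 // Bitmap.Config in the real library
    val transformations: List<String> = emptyList()   // e.g. listOf("CircleCrop")
)
```

Because a data class derives structural equality from all constructor fields, forgetting a field here (say, transformations) silently collapses entries that should be distinct.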

Q9 (Medium): What happens when the app receives an onTrimMemory() callback?

Android calls onTrimMemory(level) on the Application when the system is under memory pressure. The image loader should register a ComponentCallbacks2 listener and respond proportionally: TRIM_MEMORY_UI_HIDDEN → clear the memory cache (app is backgrounded, no UI needs images). TRIM_MEMORY_RUNNING_CRITICAL → clear memory cache + BitmapPool. TRIM_MEMORY_COMPLETE → clear everything. Not responding to these callbacks is a common source of OOM kills when the app is backgrounded — Android may kill the process if you're holding large Bitmap allocations unnecessarily.

Q10 (Hard): What is the "Active Resources" pattern in Glide? Why is it separate from the LRU cache?

Glide maintains a separate ActiveResources map of WeakReference<Bitmap> for images currently displayed on screen. The problem it solves: if image A is displayed in a large list and the LRU cache fills up with other images, image A could be evicted — but it's still being drawn! When the view releases the Bitmap (scrolled off screen), it moves from Active Resources to the LRU cache. When it's needed again, the LRU cache returns it without any re-decode. This two-layer approach means on-screen images are never evicted, while off-screen images compete fairly for LRU space.

Q11 (Hard): How would you make the library testable?

Design every component behind an interface: ImageCache, DiskCache, Fetcher, Decoder, BitmapPool. In tests, inject fake implementations. For the Fetcher, use MockWebServer to serve test images. Test the Engine's deduplication with a slow fake Fetcher that uses a Semaphore — fire 10 concurrent requests and assert only one fetch happens. Test LRU eviction by inserting entries until capacity is exceeded and asserting the oldest entry is gone. Test lifecycle cancellation by calling job.cancel() mid-fetch and asserting no view update occurs.

Q12 (Medium): How does the disk cache handle concurrent writes to the same key?

Use an atomic write pattern: write to a temp file (key.tmp), then rename to the final path (key) when complete. File rename is atomic on POSIX systems. This prevents a concurrent reader from seeing a partially written file. DiskLruCache (used by OkHttp and Glide) implements this pattern. Additionally, use a per-key lock (ConcurrentHashMap<String, Mutex>) to prevent two coroutines from fetching and writing the same key simultaneously — only one should win, the other should wait and then get a cache hit.
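The temp-file-plus-rename pattern in plain java.io (a sketch: DiskLruCache layers journaling and per-entry locking on top of this, and on non-POSIX filesystems renameTo can fail where a true atomic move would be needed):

```kotlin
import java.io.File
import java.io.IOException

// Atomic disk-cache write: a reader sees either the complete old file,
// the complete new file, or no file at all. Partial writes only ever
// exist under the .tmp name, which readers never open.
fun atomicWrite(cacheDir: File, key: String, bytes: ByteArray) {
    val tmp = File(cacheDir, "$key.tmp")
    val final = File(cacheDir, key)
    tmp.writeBytes(bytes)            // all partial state confined to .tmp
    if (!tmp.renameTo(final)) {      // rename is atomic on POSIX filesystems
        tmp.delete()
        throw IOException("failed to commit cache entry $key")
    }
}
```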

Q13 (Medium): How do you support loading images from sources other than URLs (e.g. file path, resource ID, content URI)?

Define a Fetcher interface with a type parameter: interface Fetcher<T> { suspend fun fetch(data: T): ByteArray }. Register fetchers per source type in a FetcherRegistry: HttpFetcher for URLs, FileFetcher for file paths, ResourceFetcher for R.drawable IDs, ContentUriFetcher for content:// URIs. The Engine inspects the request's data type and delegates to the matching fetcher. This is exactly how Coil's Fetcher SPI works — you can also register custom fetchers for proprietary data sources without modifying the library.

Q14 (Hard): How would you implement prefetching for a RecyclerView scrolling at high speed?

Use a RecyclerView.OnScrollListener to detect scroll direction and calculate which items will enter the viewport. Call imageLoader.preload(url, size) for the next 3–5 items — this runs the full pipeline (fetch → decode → cache) without attaching to any view. Use a lower-priority coroutine dispatcher for preloads so they don't compete with visible-item loads. Cancel preloads when scroll direction reverses. Glide's RecyclerViewPreloader implements this pattern; in Coil you enqueue target-less ImageRequests. The result: near-instant image display as items scroll into view, reducing the blank-placeholder window from ~200ms to near zero.

Q15 (Hard): How does Coil differ from Glide architecturally?

Key differences: (1) Coroutines-first: Coil is built entirely on Kotlin coroutines; Glide uses Java threads + callbacks, making Coil's lifecycle cancellation cleaner. (2) OkHttp integration: Coil uses OkHttp natively — early versions deferred entirely to OkHttp's HTTP cache, and later versions added their own disk cache; Glide has always managed its own. (3) No static singleton by default: Coil encourages injecting a custom ImageLoader instance, making it more testable. (4) Compose-first API: Coil has a first-class AsyncImage composable; Glide added Compose support later. (5) Coil is newer, Kotlin-only, and adds noticeably less APK size than Glide.

Q16 (Medium): How do image transformations (circle crop, rounded corners) affect caching?

Transformations must be included in the CacheKey. A circle-cropped version and the original of the same URL are different Bitmaps and must be stored separately. The key typically includes a hash of the transformation class name and parameters. For disk cache, you have a choice: cache the transformed or untransformed bytes. Caching untransformed bytes is more space-efficient (one file serves multiple transformations), but you always pay the transform CPU cost on disk hits. Caching transformed Bitmaps in memory is the right call — memory hits are free, and memory cache entries are per-transformation anyway.

Q17 (Hard): How would you handle image loading in a Compose LazyColumn with 10,000 items?

Key patterns: (1) Use AsyncImage (or rememberAsyncImagePainter) — Compose cancels the underlying request automatically when the composable leaves composition. (2) Set explicit contentScale and size modifiers so the library can downscale to exactly the render size. (3) Disable hardware Bitmaps (allowHardware = false) only if you need to read pixels or draw via a software Canvas — hardware Bitmaps are faster to display. (4) Use an ImageLoader with a generous memory cache (more items sit in the composition tree simultaneously than in a View-based RecyclerView). (5) Share a singleton ImageLoader — creating a new ImageLoader per composable defeats the shared cache.

Q18 (Medium): How do you handle images that are too large to fit in the disk cache?

Two approaches: (1) Size-based eviction: If a single image is larger than the disk cache max size, skip writing it to disk cache entirely — it would evict everything else. Check the content-length header before writing and skip if it exceeds a threshold (e.g. 10MB). (2) Streaming decode: For very large images (panoramas, high-res photos), stream decode them using BitmapRegionDecoder instead of loading the full bytes into memory. Decode only the visible region. This is how photo viewer apps like Google Photos handle extremely large images without OOM.

Q19 (Hard): How would you measure and benchmark the library's performance?

Three layers: (1) Microbenchmarks — use androidx.benchmark to measure decode(bytes) time, memoryCache.get(key) latency, and LRU eviction throughput in isolation. (2) Macro benchmarks — use MacrobenchmarkRule to measure scroll frame rate (jank %) in a RecyclerView with your library vs Coil vs Glide on real devices. (3) Memory profiling — use Android Profiler to compare heap allocations during a 30-second scroll session. A well-implemented BitmapPool should show near-zero Bitmap allocations after the first pass through the list. Track GC events per second as the primary signal of pool effectiveness.

Q20 (Hard): If you were starting this library today, what would you design differently than Glide?

Strong Staff+ answer: (1) Coroutines-native from day one — Glide's Java callback model makes coroutine integration awkward. Design the Engine as a suspend function chain from the start. (2) Delegate HTTP caching to OkHttp — maintaining a custom disk cache alongside OkHttp's HTTP cache creates complexity and potential inconsistency. Let OkHttp own the byte cache; own only the decoded Bitmap cache. (3) Compose as the primary target — design the Target interface as a composable state holder, not an ImageView wrapper. (4) KMP-ready data layer — the cache key, LRU logic, and decoder interface could be Kotlin Multiplatform, sharing code with an iOS SwiftUI image loader.