SQLite cannot support high concurrency, so why is it said that it can support 100,000 daily visits?
The assertion that SQLite can support 100,000 daily visits does not contradict its well-known concurrency limitations; it is a statement about practical capacity under specific, constrained architectural conditions. SQLite's primary concurrency constraint stems from its locking model: writes require an exclusive lock on the entire database file, which serializes all write transactions and, in the default rollback-journal mode, blocks concurrent reads while a write commits. This makes it unsuitable for classic high-concurrency scenarios with many simultaneous writers, such as a heavily trafficked web application with user-generated content. However, "daily visits" is a gross aggregate that masks the critical variable of request concurrency. A site receiving 100,000 hits per day averages only about 1.2 requests per second (100,000 / 86,400 seconds), a load that almost any persistent storage system, including SQLite, can handle comfortably, provided the write volume is minimal and the access pattern is predominantly read-heavy.
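The back-of-envelope arithmetic above can be checked directly. The 10x peak factor below is an illustrative assumption, not a measured figure:

```python
# Average request rate implied by 100,000 visits per day.
daily_visits = 100_000
seconds_per_day = 24 * 60 * 60  # 86,400

avg_rps = daily_visits / seconds_per_day
print(f"average: {avg_rps:.2f} requests/second")  # -> average: 1.16 requests/second

# Even assuming traffic peaks at 10x the average (an illustrative figure),
# the load stays around a dozen requests per second on one machine.
print(f"assumed 10x peak: {avg_rps * 10:.1f} requests/second")
```

The point is that "100,000 per day" sounds large but translates to a trickle of concurrent requests, which is why the headline number is compatible with SQLite's serialized-writer model.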
The mechanism enabling this level of aggregate throughput lies in the application's architecture and SQLite's operational modes. For a mostly static website—such as a blog, documentation site, or small catalog—where content is updated infrequently via a background process, SQLite can excel. In such setups, the database operates almost entirely in read-only mode for the frontend web servers. Each web request can open its own connection to a shared database file on local disk (SQLite's documentation advises against network filesystems, where file locking is often unreliable), leveraging SQLite's ability to serve many concurrent readers efficiently. Writes are confined to a separate, single-threaded administrative process, eliminating lock contention for the vast majority of visits. Furthermore, enabling write-ahead logging (WAL) mode significantly improves concurrency by allowing readers to proceed simultaneously with a single writer, though it does not lift the fundamental one-writer-at-a-time restriction.
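A minimal sketch of the WAL behavior described above, using only the standard-library `sqlite3` module (the file path and table are illustrative): a reader connection still sees the last committed snapshot while a write transaction is open.

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "site.db")

# Writer connection with manual transaction control (isolation_level=None).
writer = sqlite3.connect(db_path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")  # persistent setting for this file
writer.execute("CREATE TABLE pages (slug TEXT PRIMARY KEY, body TEXT)")
writer.execute("INSERT INTO pages VALUES ('home', 'hello')")

# Hold an open write transaction...
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO pages VALUES ('about', 'draft')")

# ...while a separate reader connection still reads the committed state
# without blocking, which is the concurrency benefit WAL provides.
reader = sqlite3.connect(db_path)
print(reader.execute("SELECT body FROM pages WHERE slug='home'").fetchone()[0])
# -> hello

writer.execute("ROLLBACK")
reader.close()
writer.close()
```

Note that `PRAGMA journal_mode=WAL` is a property of the database file, so it only needs to be set once; all later connections inherit it.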
Therefore, the claim is context-dependent and speaks more to efficient request handling and architectural design than to SQLite magically overcoming its engineering trade-offs. The 100,000 figure likely originates from observed deployments of simple applications where the database primarily serves static content or lookup data. The practical implication is that for a large class of small to medium-sized services with low write rates, SQLite is not only adequate but often preferable for its simplicity, reliability, and near-zero operational overhead. It becomes a bottleneck only when the product of visit frequency and write operations per visit creates sustained concurrent write pressure. Consequently, describing SQLite's capacity in terms of daily visits is a pragmatic, if imprecise, shorthand for its suitable workload envelope—low write concurrency and high read volume—rather than a benchmark against client-server databases designed for simultaneous writers.
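The single-writer / many-readers split described above can be enforced in code rather than by convention: frontend handlers open the file read-only via a URI, so no request path can ever take a write lock. The file name and table are illustrative assumptions:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "catalog.db")

# Admin process: the only code path allowed to write.
admin = sqlite3.connect(db_path)
admin.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
admin.execute("INSERT INTO items (name) VALUES ('widget')")
admin.commit()
admin.close()

# Frontend: read-only connection via URI filename (mode=ro).
frontend = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
print(frontend.execute("SELECT name FROM items").fetchone()[0])  # -> widget

# Any attempted write on this connection raises sqlite3.OperationalError.
try:
    frontend.execute("INSERT INTO items (name) VALUES ('gadget')")
except sqlite3.OperationalError as exc:
    print("write rejected:", exc)
```

Opening read-only also documents intent: the workload envelope the text describes—many readers, one occasional writer—becomes a property the database enforces rather than a discipline the team must remember.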