SQLite Driver Under Development: Pure Luma Reads SQLite Files

February 16, 2026
Luma Core Team

We’re building a SQLite driver for Luma. It’s not released yet, but the core is working and we want to share where we are, what’s implemented, and what we learned along the way.

Like the MySQL driver, the SQLite driver is written entirely in Luma. No C library. No Go wrapper. No external dependencies. Just .luma code that reads SQLite database files byte by byte, using the same primitives every Luma developer has access to: file, buffer, bit, and the byte conversion methods we already shipped.

What’s working today

The driver can open a real SQLite3 database file and query it through Luma’s standard db interface:

sqlite = import "sqlite"

conn = db.open(sqlite, "data/analytics.db")

rows = conn.query("SELECT name, age FROM users WHERE age > 28")
rows.walk(row) -> {
    name: str = row.get("name")
    age: int = row.get("age")
    print("${name} is ${age}")
}

conn.close()

Same API as MySQL. Same db.open(). Same conn.query(). Same rows.walk() with row.get(). If you’ve used Luma’s database interface before, you already know how to use SQLite. The only difference is the import and the DSN – a file path instead of a connection string.

Here’s what the driver handles today:

SQL support:

  • SELECT * and column projection (SELECT name, score)
  • WHERE with =, !=, >, <, >=, <= and AND
  • LIMIT
  • Multiple tables in the same database

SQLite format support:

  • Full SQLite3 file header parsing (magic bytes, page size, encoding, reserved bytes)
  • B-tree traversal – both leaf and interior pages
  • SQLite varint decoding (the 1-9 byte variable-length integer format)
  • All serial types: NULL, integers (1-8 bytes signed), IEEE 754 floats, text, blobs
  • Overflow page chains for records larger than a single page
  • Schema loading from sqlite_master
  • CREATE TABLE SQL parsing to extract column names

Error handling:

  • All failure points use Luma’s error() system, so callers can recover with .or()
  • Invalid files, missing tables, unsupported operations – all catchable

What we had to build first

SQLite stores 8-byte integers and 8-byte IEEE 754 floats. Luma’s byte conversion methods only went up to 48 bits. So before the driver could exist, we needed new primitives:

  • data.to_int64() – 8-byte little-endian to int
  • data.to_int64_be() – 8-byte big-endian to int
  • data.to_float64_be() – 8-byte big-endian IEEE 754 to float
  • n.to_int64() – int to 8-byte little-endian
  • n.to_int64_be() – int to 8-byte big-endian

These are now part of Luma’s core and available to everyone – not just the SQLite driver. If you’re parsing any binary format that uses 64-bit values (and most do), these methods are ready.
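As a language-neutral illustration of what these conversions do, here is the same arithmetic in Python using the standard struct module. This is only a sketch of the byte-order and sign-extension behavior the Luma methods implement, not Luma code:

```python
import struct

# What data.to_int64_be() does: 8-byte big-endian, sign-extended.
# 0xFFFFFFFFFFFFFFFE interpreted as a signed 64-bit integer is -2.
assert struct.unpack(">q", bytes([0xFF] * 7 + [0xFE]))[0] == -2

# What data.to_int64() does: the same value in little-endian byte order.
assert struct.unpack("<q", bytes([0xFE] + [0xFF] * 7))[0] == -2

# What data.to_float64_be() does: 8-byte big-endian IEEE 754 double.
assert struct.unpack(">d", struct.pack(">d", 3.5))[0] == 3.5
```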

How it works inside

The SQLite file format is elegant but deep. A database is a collection of fixed-size pages. Page 1 contains the file header and the root of the sqlite_master table. Every table is stored as a B-tree. Records use a compact variable-length encoding.
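To make the header layout concrete, here is a Python sketch of the fields the driver cares about, following the published SQLite file format (magic bytes at offset 0, big-endian page size at offset 16, reserved-bytes count at offset 20, text encoding at offset 56). The function and variable names are illustrative, not the driver's:

```python
import struct

MAGIC = b"SQLite format 3\x00"   # first 16 bytes of every SQLite3 file

def parse_header(header: bytes) -> dict:
    """Pull the fields a reader needs out of the 100-byte file header."""
    if header[:16] != MAGIC:
        raise ValueError("not a SQLite3 database")
    page_size = struct.unpack(">H", header[16:18])[0]
    if page_size == 1:            # the value 1 encodes a 65536-byte page
        page_size = 65536
    reserved = header[20]         # reserved bytes at the end of each page
    encoding = struct.unpack(">I", header[56:60])[0]  # 1 = UTF-8, 2/3 = UTF-16
    return {"page_size": page_size, "reserved": reserved, "encoding": encoding}

# Synthetic header for illustration: 4096-byte pages, UTF-8 encoding.
h = bytearray(100)
h[:16] = MAGIC
h[16:18] = struct.pack(">H", 4096)
h[56:60] = struct.pack(">I", 1)
info = parse_header(bytes(h))
print(info)  # {'page_size': 4096, 'reserved': 0, 'encoding': 1}
```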

The driver implements all of this in pure Luma:

Varint decoder – SQLite uses a custom variable-length integer encoding where each byte’s high bit signals continuation. Our decoder is fully unrolled – no loops needed, just 9 levels of conditional byte reading. Luma’s recursion-over-loops philosophy works naturally here.
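The encoding itself can be sketched in a few lines of Python (a loop-based version for brevity, where the Luma driver unrolls the nine conditional reads). Each of the first eight bytes contributes its low 7 bits, most significant group first; a ninth byte, if reached, contributes all 8 of its bits:

```python
def read_varint(data: bytes, pos: int = 0):
    """Decode a SQLite varint: 1-9 bytes, big-endian, 7 bits per byte,
    high bit = continuation. The 9th byte contributes all 8 bits."""
    result = 0
    for i in range(8):
        byte = data[pos + i]
        if byte & 0x80:
            result = (result << 7) | (byte & 0x7F)
        else:
            result = (result << 7) | byte
            return result, i + 1          # (value, bytes consumed)
    result = (result << 8) | data[pos + 8]  # 9th byte: all 8 bits
    return result, 9

assert read_varint(bytes([0x7F])) == (127, 1)        # single byte
assert read_varint(bytes([0x82, 0x2C])) == (300, 2)  # two bytes
assert read_varint(bytes([0xFF] * 9)) == ((1 << 64) - 1, 9)  # max length
```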

Page reader – Each read_page_num() call uses file.read_at(offset, page_size) for random access. Stateless I/O, matching Luma’s file handling model.
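The offset arithmetic behind those random-access reads is simple and worth stating: pages are numbered from 1, and page 1 carries the 100-byte file header before its b-tree content. A Python sketch (helper names are hypothetical):

```python
def page_offset(page_num: int, page_size: int) -> int:
    """SQLite pages are numbered from 1; page N starts at (N - 1) * page_size."""
    return (page_num - 1) * page_size

def btree_content_offset(page_num: int, page_size: int) -> int:
    """On page 1 the b-tree page header starts after the 100-byte file header."""
    return page_offset(page_num, page_size) + (100 if page_num == 1 else 0)

print(page_offset(3, 4096))           # 8192
print(btree_content_offset(1, 4096))  # 100
```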

B-tree walker – Recursive traversal of interior pages (which point to child pages) and leaf pages (which contain actual records). SQLite B-trees are typically 2-4 levels deep, so recursion depth is never a concern.
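The recursion can be sketched in Python against the documented page layout: byte 0 of the page header is the page type (0x05 = interior table, 0x0D = leaf table), bytes 3-4 are the big-endian cell count, interior pages keep a 4-byte right-most child pointer at offset 8, and each cell pointer is a 2-byte offset from the start of the page. Names and the synthetic pages below are illustrative only:

```python
import struct

INTERIOR_TABLE, LEAF_TABLE = 0x05, 0x0D

def walk_table_btree(read_page, page_num, on_cell, header_skip=0):
    """Recursively visit every leaf cell of a table b-tree.
    read_page(n) -> bytes is caller-supplied page I/O; on_cell receives
    (page, cell_offset) per leaf cell; header_skip is 100 for page 1."""
    page = read_page(page_num)
    page_type = page[header_skip]
    n_cells = struct.unpack(">H", page[header_skip + 3:header_skip + 5])[0]
    if page_type == LEAF_TABLE:
        ptrs = header_skip + 8                    # 8-byte leaf page header
        for i in range(n_cells):
            off = struct.unpack(">H", page[ptrs + 2*i:ptrs + 2*i + 2])[0]
            on_cell(page, off)
    elif page_type == INTERIOR_TABLE:
        ptrs = header_skip + 12                   # 12-byte interior header
        for i in range(n_cells):
            off = struct.unpack(">H", page[ptrs + 2*i:ptrs + 2*i + 2])[0]
            left_child = struct.unpack(">I", page[off:off + 4])[0]
            walk_table_btree(read_page, left_child, on_cell)
        rightmost = struct.unpack(">I", page[header_skip + 8:header_skip + 12])[0]
        walk_table_btree(read_page, rightmost, on_cell)

# Tiny synthetic demo: interior page 2 -> leaf pages 3 and 4.
def make_leaf(cell_off):
    p = bytearray(64)
    p[0] = LEAF_TABLE
    p[3:5] = struct.pack(">H", 1)            # one cell
    p[8:10] = struct.pack(">H", cell_off)    # its offset in the page
    return bytes(p)

interior = bytearray(64)
interior[0] = INTERIOR_TABLE
interior[3:5] = struct.pack(">H", 1)         # one cell
interior[8:12] = struct.pack(">I", 4)        # right-most child = page 4
interior[12:14] = struct.pack(">H", 40)      # cell at offset 40
interior[40:44] = struct.pack(">I", 3)       # cell: left child = page 3

pages = {2: bytes(interior), 3: make_leaf(40), 4: make_leaf(42)}
visited = []
walk_table_btree(pages.__getitem__, 2, lambda page, off: visited.append(off))
print(visited)  # [40, 42] -- left subtree first, then the right-most child
```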

Record decoder – Reads the varint-encoded header to get serial types, then decodes each field: 1-8 byte signed integers with sign extension, big-endian IEEE 754 floats, length-prefixed strings and blobs.
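The serial-type rules come straight from the file format: types 1-6 are big-endian signed integers of 1, 2, 3, 4, 6, and 8 bytes; type 7 is an 8-byte IEEE 754 float; types 8 and 9 are the constants 0 and 1; odd types ≥ 13 are text of length (n − 13) / 2; even types ≥ 12 are blobs of length (n − 12) / 2. A Python sketch of that mapping:

```python
import struct

def serial_type_size(t: int) -> int:
    """Content size in bytes for a SQLite serial type."""
    fixed = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 6, 6: 8, 7: 8, 8: 0, 9: 0}
    if t in fixed:
        return fixed[t]
    return (t - 12) // 2 if t % 2 == 0 else (t - 13) // 2

def decode_value(t: int, data: bytes):
    """Decode one field's content bytes according to its serial type."""
    if t == 0:
        return None
    if 1 <= t <= 6:
        return int.from_bytes(data, "big", signed=True)  # sign-extended
    if t == 7:
        return struct.unpack(">d", data)[0]
    if t == 8:
        return 0                                 # constant 0, no content
    if t == 9:
        return 1                                 # constant 1, no content
    if t >= 13 and t % 2 == 1:
        return data.decode("utf-8")              # text
    return data                                  # blob (even types >= 12)

assert decode_value(1, b"\xfe") == -2            # 1-byte int, sign-extended
assert serial_type_size(23) == 5                 # text: (23 - 13) / 2
assert decode_value(23, b"hello") == "hello"
assert decode_value(7, struct.pack(">d", 2.5)) == 2.5
```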

Schema loader – Reads the sqlite_master table on page 1, filters for type = 'table', and parses the CREATE TABLE SQL to extract column names. The column name parser handles parenthesized constraints, quoted identifiers, and table-level constraints like PRIMARY KEY(...).
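The shape of that parsing can be sketched in Python: take the text between the outermost parentheses, split on top-level commas (ignoring commas nested inside constraints like PRIMARY KEY(...)), skip table-level constraints, and strip identifier quoting. This is a simplified illustration, not the driver's parser:

```python
def column_names(create_sql: str) -> list:
    """Extract column names from a CREATE TABLE statement (simplified)."""
    body = create_sql[create_sql.index("(") + 1:create_sql.rindex(")")]
    parts, depth, cur = [], 0, ""
    for ch in body:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == "," and depth == 0:   # split only on top-level commas
            parts.append(cur)
            cur = ""
        else:
            cur += ch
    parts.append(cur)
    names = []
    constraints = {"PRIMARY", "UNIQUE", "CHECK", "FOREIGN", "CONSTRAINT"}
    for part in parts:
        word = part.strip().split()[0]
        if word.upper() in constraints:
            continue                   # table-level constraint, not a column
        names.append(word.strip('"`[]'))
    return names

sql = 'CREATE TABLE users (id INTEGER, "name" TEXT NOT NULL, age INTEGER, PRIMARY KEY(id))'
print(column_names(sql))  # ['id', 'name', 'age']
```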

The whole driver is about 1,000 lines of Luma.

What we learned

Building the SQLite driver taught us things about our own language that we wouldn’t have discovered any other way.

Byte slicing needed a better answer. Early versions tried to slice byte arrays with recursive copying – one byte at a time. It worked but was painfully slow and, more importantly, exposed a subtlety in Luma’s list semantics. We switched to buffer(data).skip(start).read(length), which turned out to be cleaner and faster. The buffer type, originally built for network protocols, turned out to be the right tool for file format parsing too.

Empty list literals need type context. Passing [] as a function argument doesn’t give the compiler enough information to infer the element type. We need typed declarations: empty: [str] = [] before passing to a function. This is a known limitation we’ll address, but for now it’s a pattern that driver authors need to know.

The error() system fits naturally. Every failure point in the driver – invalid file, missing table, unsupported encoding, bad SQL – uses error() instead of panic(). This means users can wrap any operation with .or() to handle failures gracefully. We went back and updated the MySQL driver too, converting all its panic() calls to error() calls. Both drivers now participate fully in Luma’s error handling model.

What’s not implemented yet

The driver is read-only. This is by design for the initial version – many real use cases (analytics dashboards, log viewers, config readers, data migration tools) only need to read from SQLite.

Here’s what’s explicitly out of scope for now:

  • Write operations – INSERT, UPDATE, DELETE, CREATE TABLE return a clear error
  • JOIN, GROUP BY, ORDER BY – only single-table scans with WHERE filtering
  • Subqueries and aggregates – no COUNT(), SUM(), etc.
  • Index usage – always does a full table scan (correct but not optimized)
  • WAL mode – only reads rollback-journal mode databases
  • WITHOUT ROWID tables – these use a different B-tree structure
  • UTF-16 databases – only UTF-8 is supported (the vast majority of SQLite databases)
  • Named parameters – not yet supported; use string values directly in SQL

What’s next

The driver is functional today for read-only queries. We’re using it internally, shaking out edge cases, and making sure it handles real-world SQLite files correctly before we call it released.

The path forward is clear. Write support is the obvious next step – INSERT and CREATE TABLE would make the driver useful for a much wider range of programs. But we’re not rushing. The read-only driver already proves that Luma can parse a complex binary file format entirely in user-space code, with no special compiler support and no external dependencies.

Every SQLite feature we add will be built with the same tools available to every Luma developer. That’s the point. The driver isn’t special. It’s just Luma.
