Thinking About Databases: How Should Luma Talk to MySQL?

February 10, 2026 · Luma Core Team

Some features arrive with a clear design from the start. You know what the API should look like, you know how the pieces fit together, and the work is mostly about execution. Database support is not one of those features.

Databases are one of the most requested capabilities for any practical language. If you can read files, serve HTTP, and parse JSON, the next question is always: can I query a database? And it’s a fair question. Most real-world programs eventually need to store or retrieve structured data, and MySQL and MariaDB remain among the most widely deployed databases in the world.

We want Luma to support them. But we’ve decided not to take shortcuts. Instead of rushing to ship a MySQL integration, we’re building the foundations that will let Luma support any database - and let anyone write a driver for it.

What we’re really building

PHP has PDO. Go has database/sql. These are abstractions that separate the idea of “talking to a database” from the details of any specific database engine. The application writes queries against a standard interface. A driver handles the protocol. If you want MySQL, you plug in a MySQL driver. If you want PostgreSQL, you plug in a different one. The application code doesn’t change.

Luma should have its own version of this - designed from the ground up around Luma’s principles: type-safe, self-contained, and readable.

But we’re going one step further. In most languages, writing a database driver requires deep knowledge of the language’s internals, C bindings, or third-party build tools. In Luma, we want database drivers to be regular Luma code. A developer should be able to write a driver as a normal .luma file, import it, and pass it to Luma’s database layer. No compiler plugins. No special privileges. Just code.

This is the vision:

driver = import "drivers/mysql"
db: database = db.open(driver, "user:pass@localhost/mydb")

rows = db.query("SELECT name, age FROM users WHERE age > ?", 21)
rows.walk(row) -> {
    print("${row["name"]} is ${row["age"]}")
}

db.exec("INSERT INTO users (name, age) VALUES (?, ?)", "Alice", 30)
db.close()

The important line is db.open(driver, dsn). The user passes a driver into the database manager. The db module doesn’t know MySQL from PostgreSQL. It only knows that the driver fulfills a contract.

Why not just ship a MySQL driver today?

We could. We considered several approaches, and some of them would get MySQL queries running in Luma within weeks. But each came with compromises that didn’t sit right.

Embedding an existing Go driver

Luma compiles your source to Go and then to a binary. The generated program is self-contained - it only uses Go’s standard library plus Luma’s embedded core runtime. No package manager, no dependencies, no build system.

We could embed an existing Go MySQL driver (like go-sql-driver/mysql) directly into this runtime, the same way we embed file handling, JSON, and HTTP support.

It would work. But it would mean bundling roughly 6,000 lines of third-party Go code under a different license (MPL-2.0), maintaining a forked copy, and tracking upstream security patches indefinitely. More importantly, it would be a black box inside Luma - not something a Luma developer could read, understand, or extend.

Scaffolding a Go module at compile time

When database code is detected, Luma could create a temporary Go module directory with a go.mod, fetch the MySQL driver from the internet, and compile normally.

This is flexible and supports any Go driver. But it breaks one of Luma’s core promises: that luma run program.luma works instantly, offline, with no configuration. The first run would require internet access. The compilation pipeline would gain significant complexity for database programs. And users would suddenly need to care about Go module caching - something Luma has deliberately kept invisible.

Writing a MySQL client in Go’s standard library

We could implement the MySQL wire protocol directly in .lcore helper files using Go’s net and crypto packages. Zero dependencies, fully embedded, tailored to Luma.

Technically sound, but it would live as Go code inside Luma’s internals - invisible and untouchable for Luma developers. When someone asks “how does the MySQL driver work?”, the answer would be “look at these Go files in the compiler’s core directory.” That’s not a Luma answer.

Using system C libraries or shelling out

CGo bindings to libmysqlclient would require a C compiler, system library installations, and break cross-compilation. Shelling out to the mysql CLI would be fragile, slow, and a security liability. Neither belongs in a language that values simplicity and self-containment.

What all these options have in common

They all treat the database driver as something Luma provides. A built-in. A piece of infrastructure maintained by the core team that users consume but never look inside.

We decided that’s the wrong model. The right model is one where Luma provides the tools, and drivers are built with those tools - by us, by the community, by anyone who needs to talk to a database that doesn’t have a driver yet.

The toolbox-first approach

Instead of building a MySQL driver, we’re building the primitives that make a MySQL driver possible to write in Luma itself. These primitives are useful far beyond databases, but databases are the proof that they work.

TCP networking

A database driver needs to open a socket, send bytes, and receive bytes. Luma currently has no network primitives beyond the built-in HTTP server. We need to change that.

sock: tcp = tcp.connect("localhost", 3306)
sock.write(data)
response: [byte] = sock.read(1024)
sock.close()

TCP support unlocks more than databases. It enables custom protocol clients, network tools, socket-based APIs, and any program that talks to a service over the network.

Binary data handling

Network protocols speak in bytes. The MySQL wire protocol encodes packet lengths as little-endian integers, capabilities as bitfields, and strings as null-terminated sequences. To implement it, you need to build and parse binary buffers.

buf: buffer = buffer()
buf.write_int32(length)
buf.write_bytes(payload)
raw: [byte] = buf.to_bytes()

// Reading
reader: buffer = buffer(raw)
length: int = reader.read_int32()
data: [byte] = reader.read_bytes(length)

Binary buffer support enables file format parsers, serialization tools, and any program that works with structured binary data.

Cryptographic hashing

MySQL authentication requires SHA1 and SHA256 hashing of passwords and challenge tokens. These are fundamental operations that belong in any language’s standard toolkit.

sha256_hash: [byte] = crypto.sha256(data)
sha1_hash: [byte] = crypto.sha1(data)

Crypto primitives enable security tools, data integrity checks, token generation, and anything that touches authentication or hashing.

The database interface

With these primitives in place, we define the contract that every database driver must fulfill. Luma’s trait system is the natural fit:

trait DatabaseDriver {
    fn connect(dsn: str) -> DatabaseConnection
}

trait DatabaseConnection {
    fn query(sql: str, params: [any]) -> [any]
    fn exec(sql: str, params: [any]) -> int
    fn close()
}

The db.open() function accepts any value that implements DatabaseDriver. It doesn’t know about MySQL. It doesn’t need to.

Then the MySQL driver is just a Luma file

// drivers/mysql.luma
pub struct MySQLDriver impl DatabaseDriver {
    fn connect(dsn: str) -> DatabaseConnection {
        // parse dsn, tcp.connect(), handshake, authenticate
        // all in plain Luma
    }
}

This file ships with Luma as the first built-in driver. But it’s not special. It uses the same tcp, buffer, and crypto primitives available to every Luma developer. Someone who wants PostgreSQL support can look at the MySQL driver, understand how it works, and write one using the same tools.

How we plan to get there

We’re building this in three phases. Each phase ships something complete and useful on its own.

Phase 1: Low-level primitives. TCP sockets, binary buffers, and cryptographic hashing. These are the foundation. They need to be solid before anything is built on top of them. When this phase is done, you can write any kind of network client in Luma.

Phase 2: Database traits and the db module. The DatabaseDriver and DatabaseConnection traits. The db.open() function. Parameter escaping. Result normalization. This is pure Luma - no protocol code, just the contract and the scaffolding. When this phase is done, the interface is stable and documented.

Phase 3: MySQL driver in Luma. The first driver, written as a standard Luma module. Connection handshake, authentication, query execution, result parsing - all implemented using the primitives from Phase 1 and the traits from Phase 2. When this phase is done, you can query MySQL and MariaDB from Luma.

We’re not setting deadlines for these phases. Each one will ship when it’s ready, not when a calendar says it should. We’d rather take the time to get the foundations right than rush to ship something we’ll want to redesign later.

Why this path, and not the faster one

There is a shortcut. We could ship a MySQL driver in Go today, buried in .lcore files, and most users would never know the difference. The API would look the same. The queries would run.

But the moment someone wanted to modify the driver, support a different database, or understand how their program talks to MySQL, they’d hit a wall. The driver would be Go code inside a black box, not Luma code they can read and learn from.

We think that wall is the wrong thing to build. Luma’s identity is about clarity and transparency. When you read a Luma program, you should be able to understand what it does. That should be true all the way down - including the database driver.

Building the toolbox first takes longer. But it means that when database support arrives, it arrives as something the community can own, extend, and build on. Not just something they consume.

What comes after MySQL

Once the architecture is in place, every new database driver is just a new Luma file:

  • PostgreSQL - different wire protocol, same primitives, same traits
  • SQLite - different approach (embedded database), but the DatabaseConnection trait still applies
  • Redis - not a relational database, but the TCP and binary primitives make it straightforward
  • Custom protocols - internal services, proprietary databases, anything that speaks over a socket

The first driver is the hardest. Every one after that is just protocol work.

Following along

We’ll share updates as each phase progresses. The TCP and binary buffer work will likely appear first, and we’ll write about the design decisions as they happen.

If you have thoughts on the database interface design, the driver architecture, or which databases matter most to you - we want to hear them. This is one of those features where the community’s input shapes not just what we build, but how.

We’re building the tools first. The tools build everything else.
