While working on Bomb Ball and bouncing from engine to engine, I began wondering what my ideal game development toolset would look like. I’d had several conversations with my friend, Caleb Gray, about the topic and what we would do if we were to build a game engine.

Both Caleb and I are software engineers, but we’ve worked on several game projects with people who are not. We decided to try creating a solution that would empower non-programmers to create amazing, performant games without needing to write a single line of code.

#The engine as a compiler

Our driving philosophy was separating the editor’s representation of the game from the actual code that runs in the game. Unlike typical game engines and frameworks, we would move our abstraction out of any runtime code entirely; the editor is the only abstraction, and it is used to generate the final, optimized machine code1.

While this layer of meta-programming is more abstract and complex, it doesn’t suffer from the same performance penalties as a runtime abstraction. Going further, it lets us optimize the output exactly to the use case of the project and removes virtually all limitations on the design or representation of the game within the editor.

This resulted in the editor operating like a language frontend2 that feeds into a compiler backend performing code generation for the final executable.

#Modeling intent with “atoms”

Like most compilers, Alchemy had its own intermediate representation3. We referred to it as the “atoms” system. Atoms are small, simple units of data that each contain an ID, a kind, a parent ID, and a set of relevant properties.

```typescript
interface BaseAtom<K extends string> {
	_id: string;
	_parent: string;
	_kind: K;
}

// Here's an example of the SceneAtom type
interface SceneAtom extends BaseAtom<"scene"> {
	name: string;
	// ...other node-specific properties
}
```

This simple structure allowed us to create a visual editor that was extremely flexible, and adding new features was as simple as adding new atom types.

As you can probably guess by the _parent field, atoms are organized in a hierarchical tree-like structure4. Beyond common parent/child purposes, we would often use child atoms as a way to alter the code generation for the parent atom.
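Here’s a minimal sketch of that flat-list-plus-parent-ID shape, with an invented `codegen-hint` child kind standing in for the kind of child atom that alters a parent’s code generation. All names here are illustrative, not Alchemy’s actual API:

```typescript
// Atoms live in a flat list; the tree is implied by _parent IDs.
interface Atom {
	_id: string;
	_parent: string;
	_kind: string;
	[prop: string]: unknown;
}

const atoms: Atom[] = [
	{ _id: "root", _parent: "", _kind: "scene", name: "Level 1" },
	{ _id: "a1", _parent: "root", _kind: "sprite", name: "Player" },
	// A child atom whose only job is tweaking the parent's codegen:
	{ _id: "a2", _parent: "a1", _kind: "codegen-hint", inline: true },
];

// Collect the children of a given atom from the flat list.
function childrenOf(parentId: string): Atom[] {
	return atoms.filter((a) => a._parent === parentId);
}

// Example: codegen for an atom consults its hint children.
function shouldInline(atom: Atom): boolean {
	return childrenOf(atom._id).some(
		(c) => c._kind === "codegen-hint" && c.inline === true
	);
}
```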

This extremely modular data model was great for serialization and made implementing a transactional undo/redo system a breeze, since the _id and _kind fields never changed and the remaining properties were few and flat5.
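A sketch of what that transactional model can look like. The class and field names are hypothetical, but the core idea matches the description above: since `_id` and `_kind` never change and the remaining properties are flat, each transaction only needs shallow before/after property snapshots:

```typescript
// Hypothetical transactional undo/redo over a flat atom store.
type Props = Record<string, unknown>;

interface Patch {
	id: string;     // the atom's _id (never changes)
	before: Props;  // shallow snapshot of touched properties
	after: Props;
}

class History {
	private undoStack: Patch[][] = [];
	private redoStack: Patch[][] = [];

	commit(patches: Patch[]): void {
		this.undoStack.push(patches);
		this.redoStack = []; // a new edit invalidates the redo chain
	}

	undo(store: Map<string, Props>): void {
		const tx = this.undoStack.pop();
		if (!tx) return;
		for (const p of tx) store.set(p.id, { ...(store.get(p.id) ?? {}), ...p.before });
		this.redoStack.push(tx);
	}

	redo(store: Map<string, Props>): void {
		const tx = this.redoStack.pop();
		if (!tx) return;
		for (const p of tx) store.set(p.id, { ...(store.get(p.id) ?? {}), ...p.after });
		this.undoStack.push(tx);
	}
}
```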

#Very human concepts

  • We began thinking about how to model the game the way a person would describe it

#Re-thinking visual scripting

Something we spent a significant amount of time researching and user-testing was how we wanted to handle game logic. Since we were focused on creating a tool for non-programmers we needed a robust visual scripting system6.

Visual scripting commonly comes in two flavors: graph-based and block-based. Each has its strengths and weaknesses7, so we began creating a ton of clickable prototypes and even implemented multiple variations so we could pick the one we liked best.

The classic node graph style is what won out in the end. Once that decision was made, we began exploring ways to make the experience of managing large numbers of nodes and links more enjoyable — especially for power users8.

We tried... a handful... of designs.

#Generating static, low-level code

Once we had a game modeled out of atoms, we could convert it to code and binary bundles by traversing them starting from the rootmost atom. Initially we were generating C/C++9, a choice that required some very specific traversal in order to create valid C code.
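To make that traversal-order dependency concrete, here’s a hypothetical depth-first walk that lowers atoms to C function stubs, visiting children first so each emitted function appears in the output before anything that depends on it. The atom kinds and emitted code are invented for illustration:

```typescript
// Toy lowering pass: flat atoms in, C source text out.
interface Atom {
	_id: string;
	_parent: string;
	_kind: string;
	name?: string;
}

function emitC(all: Atom[], rootId: string): string {
	// Index children by parent ID so we can walk the implied tree.
	const byParent = new Map<string, Atom[]>();
	for (const a of all) {
		const list = byParent.get(a._parent) ?? [];
		list.push(a);
		byParent.set(a._parent, list);
	}

	const lines: string[] = [];
	// C wants definitions before use, so children are visited first,
	// ensuring each atom's function is emitted before its parent's.
	const visit = (a: Atom): void => {
		for (const child of byParent.get(a._id) ?? []) visit(child);
		lines.push(`void update_${a._kind}_${a._id}(void) { /* ... */ }`);
	};

	const root = all.find((a) => a._id === rootId)!;
	visit(root);
	return lines.join("\n");
}
```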

Eventually we began exploring LLVM IR as an ideal output target, since it’s designed to be the output of a visitor pattern10. While generating LLVM IR was easier than generating C, it brought its own challenges. One of them was having to generate everything ourselves or compile libraries ahead of time to link against, whereas in C we could leverage hand-written code and the standard library for many tasks11.
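For a sense of the target, here’s a toy emitter producing textual LLVM IR for a single hypothetical per-atom function. The function body (just `x + 1`) is a stand-in for real generated logic:

```typescript
// Emit a tiny, valid LLVM IR function for a given atom ID.
// Real output would be far richer; this only shows the shape.
function emitUpdateFn(atomId: string): string {
	return [
		`define i32 @update_${atomId}(i32 %x) {`,
		`entry:`,
		`  %sum = add i32 %x, 1`,
		`  ret i32 %sum`,
		`}`,
	].join("\n");
}
```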

Regardless, both approaches had another challenge… packaging Clang in order to build any code we generated. Because we wanted to keep the end user experience as non-technical as possible, we explored a number of ways we could bundle or auto-install supporting tools without forsaking the portable binary architecture we had at the time.

#Optimizations through permutations

Something we spent a lot of time on was exploring ways to optimize the generated code beyond the typical optimizations that a compiler would perform.

The most exciting of these explorations was generating several permutations of code on a per-node basis and then A/B testing them to see which one was fastest. The matrix of permutations we explored generating was huge. Thankfully we had a solution in mind.

Cache tables. Lots and lots of cache tables.

This level of optimization was only really necessary for release builds, so our plan was to watch for common patterns of nodes, perform our tests, and then store the results in a cache table keyed by a hash containing the node pattern, platform, and build configuration.

This way, we could quickly look up the best permutation for a given node and build configuration but only when generating code for release.
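A sketch of how such a cache table might be keyed, assuming a Node runtime for hashing. The key shape (node-kind pattern, platform, build configuration) follows the description above, but the exact fields and names are assumptions:

```typescript
import { createHash } from "node:crypto";

// Benchmark results keyed by a hash of the node pattern plus
// the target platform and build configuration.
interface BenchKey {
	pattern: string[]; // e.g. the kinds in a matched run of nodes
	platform: string;  // e.g. "win-x64"
	config: string;    // e.g. "release"
}

function cacheKey(k: BenchKey): string {
	const h = createHash("sha256");
	h.update(k.pattern.join(">"));
	h.update(`|${k.platform}|${k.config}`);
	return h.digest("hex");
}

// Maps a key to the index of the winning permutation from A/B testing.
const fastestPermutation = new Map<string, number>();

fastestPermutation.set(
	cacheKey({ pattern: ["spawn", "move"], platform: "win-x64", config: "release" }),
	2
);
```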

#How to create a custom GUI faster

The initial versions of the tool were written in a mix of C/C++ and C#, using Dear ImGui with some heavy customizations. Dear ImGui is a great library, but getting the exact look we wanted with it was a much slower ordeal compared to working with web technologies12.

Given that both Caleb and I have a considerable amount of web experience, we began to wonder if we could move faster building our UI using TypeScript and a framework like React or Svelte.

We wound up giving Svelte a try and found our ability to iterate and explore new ideas was nearly tripled. It also opened a whole new possibility: running our game engine in a browser as a hosted service.

#Benefits of an enclosed ecosystem

Obviously there’s a case to be made that the world doesn’t need another subscription service13, but a recurring revenue model would certainly be nice if we wanted to work on this full time and still pay our bills.

More fascinating to us though were the other benefits that come with an ecosystem that allowed us to leverage our own remote servers.

Not only would the user not need to install or troubleshoot anything on their machine14 to run the engine, but we could simply perform compilation on our hardware. We could even do this while the user’s machine was syncing changes with us.

It also made collaboration for our end users easier, as we no longer needed to worry about having to support a text file format for meta data to satisfy tools like Git, SVN, or Perforce. Instead we could just rely on syncing each action in real time, like in Google Docs.

#Game distribution could get way easier

Even cooler than both of these, though, is the fact that the distribution pipeline could potentially be made easier too once a game was ready.

Anyone who has ever had to sign an executable for macOS or push a game with stats and achievements to Steam will tell you how painful the process can be. It only gets worse when you have to repeat those steps over and over as you release patches and bug fixes.

Yeah you can obviously automate that stuff, but our end user is using a no-code tool to create a game. There’s a high chance they are not a technical person, and these tasks can be needlessly complicated even for experienced developers.

Typically, though, you can’t just share scripts and code for licensed SDKs. So what if you’re trying to release across multiple platforms simultaneously, like Nintendo, PlayStation, and Xbox? Each one also has a review process, and you likely want your game to hit all the storefronts at the same time, right?

By having the licensed code exist only on our servers15, we could handle the complicated testing to ensure the code was valid and safe. In some cases we could even use APIs provided by these storefronts to automatically submit releases, manage responses from the review processes, and publish once we see green across the board.

#It’s not all upsides though

Developing our editor leveraging web technologies was a huge boon, but… WebGL and WebAssembly are not as fast or as powerful as native code. We would often ask ourselves, “Could we make a game like [insert AAA title here] with this?” and the answer was not a clear yes or no. Just lots of theory16.

We only needed to run the development build in the browser if the application client wasn’t running locally on the user’s machine. Wrapping the web app in Tauri allowed us to circumvent this issue and create a native application that could be downloaded to their machine and run locally. Builds could then be compiled on our servers, downloaded, and launched as a sub-process of the editor.

But this just introduces a new problem… now we have the cost and time of supporting servers that can compile the code submitted by the client, plus the added delay of downloading the build. Not to mention, what happens if our servers are unreachable? Now you can’t test your game!

Before too long, we were back to finding ways to seamlessly ship and integrate a Clang/LLVM toolset with the editor itself17.

#Development hits several snags

There were a couple factors that affected our ability to keep working on Alchemy.

A big one is that I was running out of money and had to take on some contract work, and Caleb needed to focus more time on his full-time job. Another was that we easily had another year of work ahead to get enough of the engine/editor in place to build a complete game with it. But the biggest reason was that my wife was diagnosed with cancer and we wound up moving to a different house.

By the time I got back to working on Alchemy, I was completely lost on where I left off and some of my priorities had shifted. Making things feel more hopeless was that generative AI was taking off at this point and I was suffering an existential crisis about the future of my career.

#All is not lost

Alchemy lives on in other projects that I’ve been working on. These projects scale back the overall scope, making them more achievable with less time available to spend on them.

First, we took concepts from the compiler and coroutine system to create Conjure as an alternative to C, specifically for creating games and real-time applications. The time spent focusing on creating a compiler with very human tooling led to its “tooling-first” mindset.

The editor and no-code tooling was briefly transferred to a personal project named Ornitier, but now I’m planning to use it as the software development kit for Retrograde. This allows people to make games for the Retrograde platform without needing to write code.