Riza raises $2.7M to power the next generation of AI-written code

Hey there! We're Riza and we make it safe and easy to run untrusted code generated by LLMs or humans, in our cloud or on your infra.

We're excited to announce that we've raised $2.7M in seed funding, led by Matrix Partners with participation from 43 and a handful of our favorite angels, to continue building the best platform for running LLM-generated code.

From experiment to billions of requests

When Kyle and I started working on a WASM-based untrusted code execution service last year, it was just a fun experiment to help out a friend's company. But after building a few iterations of the underlying isolated runtime engine, and serving several billion code execution requests along the way, we're ready to scale up our ambitions and the company.

Our growth so far has been exciting. Since our developer preview launch a year ago, our API has seen well over three billion code execution requests, including 850 million requests in March alone. That’s a lot for a team of three! And we’re eager to handle more.

Securely executing user- and AI-generated code

Today, Riza provides a secure, isolated, production-grade execution environment that serves two critical use cases. Riza enables companies to:

  1. Run arbitrary code written by their human users. This gives product teams the ability to add customization and flexibility without compromising security. For example, PromptLayer, a comprehensive platform for prompt engineering, uses Riza to enable their users to customize AI agents and evals.
  2. Run arbitrary code written by LLMs. This makes LLM-powered applications more accurate and reliable, and unlocks a new, powerful way of using LLMs to solve novel problems in production: just-in-time programming. (A rough code sketch of both use cases follows this list.)
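
Here's a minimal sketch of what either use case looks like from an application's point of view: submit untrusted source code, get back the output from the isolated runtime. The endpoint path, request fields, and response shape below are illustrative assumptions for the sake of the example, not a copy of our API reference.

```python
import os
import requests

# Illustrative assumptions: the endpoint path, request fields, and response
# shape are sketched for this example rather than quoted from the API docs.
API_URL = "https://api.riza.io/v1/execute"
API_KEY = os.environ["RIZA_API_KEY"]  # assumed auth scheme: bearer token

untrusted_code = "print(sum(range(10)))"  # code from a user or an LLM

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"language": "python", "code": untrusted_code},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()

# The untrusted code ran inside the isolated runtime, never on your servers.
print(result.get("stdout"))
```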

Powering a new AI software paradigm: just-in-time programming

In working with our early customers, we've identified a powerful emerging pattern: software that integrates an LLM to not only write code but also run that code in production systems. We've coined the term "just-in-time (JIT) programming" to describe this paradigm, because code is written at the last possible moment before execution.

For example, one customer uses Riza to extract structured data from logs in various formats. Instead of manually writing regular expressions or parsers for every log source, they ask an LLM to generate code for each log line, then run that code on Riza to extract the information they need.
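
Concretely, the just-in-time loop looks roughly like the sketch below. The LLM call is a stand-in for whatever provider you use, and the execution request reuses the assumed API shape from the earlier example.

```python
import json
import os
import requests

API_URL = "https://api.riza.io/v1/execute"  # assumed endpoint, as above
HEADERS = {"Authorization": f"Bearer {os.environ['RIZA_API_KEY']}"}


def generate_parser_code(log_line: str) -> str:
    # Stand-in for an LLM call (any provider works). In production the prompt
    # would include the log line and ask for a parser tailored to its format;
    # here a trivial hardcoded parser keeps the sketch self-contained.
    return (
        "import json, sys\n"
        "line = sys.stdin.read().strip()\n"
        "print(json.dumps({'length': len(line), 'words': len(line.split())}))\n"
    )


def extract_fields(log_line: str) -> dict:
    # Write the parser at the last possible moment, tailored to this line...
    parser_source = generate_parser_code(log_line)

    # ...then run it in the isolated runtime instead of on your own hosts.
    # Passing input via a "stdin" field is an assumption about the API shape.
    resp = requests.post(
        API_URL,
        headers=HEADERS,
        json={"language": "python", "code": parser_source, "stdin": log_line},
        timeout=30,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["stdout"])  # assumed response shape
```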

Another customer uses Riza to generate reports that combine data from many sources that aren't known at development time. An LLM writes code to join, filter, and analyze the data, then generates charts that are embedded directly in the report. This kind of number crunching is prone to hallucination when you prompt an LLM directly and pass it all the raw data, but asking the LLM to write and execute code instead reliably produces correct outputs.

Next-gen tooling for agentic applications

As we've built alongside our early customers, we've discovered that simply providing a safe execution environment for untrusted LLM-generated code isn't enough to bring the benefits of JIT programming to everyone. We've built a strong foundation, but we aim to do more.

While Claude 3.7 Sonnet, GPT-4.5, and Gemini 2.5 are leaps and bounds better at writing code than their predecessors, they still sometimes fail at basic programming tasks. They try to use packages that don't exist. They construct nonsensical regular expressions. They decide a task is too difficult and just generate made-up data that looks plausible.

When an LLM is writing code at development time, these issues are usually resolved with human code review. When running LLM-generated code directly in production, you don’t have that luxury.

We foresee a future powered by LLM-driven software that is written and run in production without human review. Our existing isolated runtime environment is a critical primitive supporting that future, and we're excited to put our new seed capital to work building a set of tools that make LLM code generation even more reliable.

Join us

We're still in the early days of AI. The amount of AI-written code has exploded in the last year and is only increasing. We expect to be the engine that powers new types of applications that don't fit into the standard software patterns of the past.

Sound interesting? Come join us! We're looking for talented engineers from diverse backgrounds to help build this future. Our founding team includes engineers from Twilio, Stripe, and Retool, and we're eager to add to our team.

Check out our open positions →

AI writes code. Riza runs it.

Execute Python, JavaScript, TypeScript, Ruby or PHP in our isolated runtime environment.

Now available for self-hosting!

Try the Code Interpreter API
Andrew Benton