Safely connect LLMs to customer code.

Riza lets you run untrusted code from multiple customers in a secure sandbox embedded inside your generative AI application.

Input
Sally has 17 apples. She gives 9 to Jim. Later that day, Peter gives 6 bananas to Sally. How many pieces of fruit does Sally have at the end of the day?
# When using an LLM, you can describe functions and have the
# model intelligently choose to output a JSON object
# containing arguments to call one or many functions.
#
# In this example, we give the model the ability to do
# simple addition and subtraction.

def addition(a, b):
    """Add two numbers together."""
    return a + b

def subtraction(a, b):
    """Subtract one number from another."""
    return a - b

Output
At the end of the day Sally has 14 pieces of fruit.

Give your model power tools

Let your customers extend your generative AI application with their own functions that run safely inside your environment.

Access external data

Large language models often lack up-to-date information. Let your customers pull in information stored in other systems without embeddings or file uploads.

Start simple, get complex

Let your customer dream big (or small)

Start small by letting your model know the current date. Go big by querying a public data set and returning custom analysis. Your customers get to decide the complexity.
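The small end of that spectrum really is small. A current-date tool is just a few lines of Python; the function below is purely illustrative, not a required shape or naming convention.

from datetime import date

def current_date():
    """Return today's date as an ISO 8601 string."""
    return date.today().isoformat()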

Model and provider agnostic

Our SDK integrates with OpenAI, Claude, and Gemini Pro. We work with any model that supports tool use or function calling.
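As a rough sketch, a tool definition for the addition function above could look like this in the OpenAI-style function-calling format. The exact schema the SDK generates may differ.

# Sketch of an OpenAI-style tool definition for the addition
# function shown earlier. The exact JSON the SDK emits may differ.
addition_tool = {
    "type": "function",
    "function": {
        "name": "addition",
        "description": "Add two numbers together.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "number", "description": "The first number."},
                "b": {"type": "number", "description": "The second number."},
            },
            "required": ["a", "b"],
        },
    },
}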

Blazing fast

Functions are compiled to WASM and cached aggressively.

Secure by default

Each execution happens inside its own sandbox with CPU, memory, and network limits.

Flexible deployment

Cloud-hosted, hybrid, BYOC, on-prem. Even in-process!

Truly multi-language

Write functions in JavaScript, Python, Go, Rust, and more.

Bring in customer code with confidence

The Riza execution runtime is flexible enough to run deep inside your cluster. Deploy it as a sidecar or embed it within your application process.

How it works

Your customer defines the interface for their tool, then writes a function that implements that interface in their programming language of choice.

Riza compiles their code to WASM and stores it in our secure private registry. You integrate the Riza runtime into your application, pairing a supported function-calling model with tool definitions generated by our SDK. The runtime pulls compiled binaries from the registry, caches them, and runs them on demand.
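As a sketch only, the application-side loop could look like the Python below. The riza.tools() and riza.execute() calls, along with the model client methods, are hypothetical placeholders meant to show the shape of the integration, not the actual SDK surface.

import json

# Hypothetical sketch of the runtime loop. riza.tools(), riza.execute(),
# and the model client methods are placeholders, not the actual SDK.

def run_with_tools(model, riza, user_message):
    # Tool definitions generated from the customer's declared interface.
    tools = riza.tools()

    # Ask the model; it may respond with one or more tool calls.
    reply = model.chat(
        messages=[{"role": "user", "content": user_message}],
        tools=tools,
    )

    # Execute each requested tool inside its own sandbox. The runtime
    # pulls the customer's compiled WASM binary from the registry
    # (or its cache) and runs it with the model-supplied arguments.
    results = [
        riza.execute(name=call.name, arguments=json.loads(call.arguments))
        for call in reply.tool_calls
    ]

    # Hand the results back to the model for a final answer.
    return model.respond_with_results(reply, results)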

Sign up to get early access