Safely connect LLMs to customer code.
Riza lets you run untrusted code from multiple customers
in a secure sandbox embedded inside your generative AI application.
Let your customers extend your generative AI application with their own functions that run safely inside your environment.
Large language models often lack up-to-date information. Let your customers pull in information stored in other systems without using embeddings or file uploads.
Let your customer dream big (or small)
Start small by letting your model know the current date. Go big by querying a public data set and returning custom analysis. Your customers get to decide the complexity.
Our SDK integrates with OpenAI, Claude, and Gemini Pro. We work with any model that supports tool use or function calling.
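As a minimal sketch of the function-calling pattern described above, here is a "current date" tool in the OpenAI tool-definition format, with a dispatcher that would hand the call to the sandboxed runtime in a real integration. The tool name and the `execute_tool` helper are illustrative assumptions, not Riza's actual SDK surface:

```python
import json
from datetime import date

# Tool definition in the OpenAI function-calling schema.
CURRENT_DATE_TOOL = {
    "type": "function",
    "function": {
        "name": "current_date",
        "description": "Return today's date in ISO 8601 format.",
        "parameters": {"type": "object", "properties": {}},
    },
}

def execute_tool(name: str, arguments: str) -> str:
    # In a real integration this dispatch would forward the call to the
    # sandboxed runtime; here the trivial implementation runs in-process.
    if name == "current_date":
        return json.dumps({"date": date.today().isoformat()})
    raise ValueError(f"unknown tool: {name}")

# Simulate the model requesting the tool, as it would via a tool_calls entry.
result = execute_tool("current_date", "{}")
print(result)
```

The same dispatch shape works for Claude and Gemini tool use; only the schema wrapper around the function definition changes.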
Blazing fast
Secure by default
Flexible deployment
Truly multi-language
The Riza execution runtime is flexible enough to run deep inside your cluster. Deploy it as a sidecar or embed it within your application process.
Your customer defines the interface for their tool, then writes a function that implements it in their programming language of choice.
Riza compiles their code to WASM and stores it in our secure private registry. You integrate the Riza runtime into your application using a supported function-calling model and tool definitions generated by our SDK. The runtime pulls compiled binaries from the registry, caches them, and runs them on demand.
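The pull-cache-run step above can be sketched as follows. The `ModuleCache` class and the in-memory registry mapping are hypothetical stand-ins, not Riza's actual runtime API; the bytes shown are only the four-byte WASM magic number:

```python
class ModuleCache:
    """Pull compiled WASM binaries from a registry once, then serve locally."""

    def __init__(self, registry):
        self.registry = registry   # stand-in: maps module name -> compiled bytes
        self.cache = {}

    def get(self, name: str) -> bytes:
        if name not in self.cache:          # pull from the registry once...
            self.cache[name] = self.registry[name]
        return self.cache[name]             # ...then serve from the local cache

# Hypothetical registry entry; b"\x00asm" is the WASM binary magic number.
registry = {"acme/current_date": b"\x00asm"}
cache = ModuleCache(registry)
binary = cache.get("acme/current_date")
```

On the second lookup the binary comes from the local cache rather than the registry, which is what lets the runtime execute customer tools on demand without a registry round trip per call.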