Rasgo allows you to bring your own OpenAI API key to use for all LLM calls. The benefits of using your own API key are faster access to new models and the ability to keep data in your own VPC (if deployed on Azure).

If you do not bring your own key, Rasgo will provide keys to communicate with OpenAI on your behalf.

How to Set an API Key for an LLM Provider

API keys are set at the organization level. All users in your organization share this key; Rasgo supports one key per organization.

In the Admin Settings screen, navigate to the "AI Providers & Models" tab:

Click "Connect" on one of the supported LLM providers.

Enter all values in the modal, and your organization will be configured to connect with your API key.
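Before saving, it can help to sanity-check the key you paste into the modal. The sketch below is purely illustrative (the helper name is hypothetical, not part of Rasgo); it relies only on the fact that OpenAI keys begin with an `sk-` prefix:

```python
# Hypothetical client-side sanity check for an OpenAI-style API key
# before entering it in the Connect modal. Not part of Rasgo's API.

def looks_like_openai_key(key: str) -> bool:
    """Cheap shape check: OpenAI keys start with 'sk-' and are long."""
    return key.startswith("sk-") and len(key) > 20

print(looks_like_openai_key("sk-" + "x" * 40))  # True
print(looks_like_openai_key("not-a-key"))       # False
```

This only catches obvious paste errors; the real test is whether the provider accepts the key once connected.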

Next, you must set the models your organization will use for chat and text embeddings.

How to Set LLM Models

In the Admin Settings > "AI Providers & Models" tab:

Click "Add" to create a new model, and fill out all values in the modal to save it.

Create at least one chat model and one embedding model. If you add more than one chat model, users will be able to select from that list in their chat screens.

Every organization must have at least one chat model and one embedding model to function properly.
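The requirement above can be sketched as a simple validation. This is illustrative only (the function and field names are hypothetical; Rasgo enforces the rule itself):

```python
# Hypothetical sketch of the org-level model requirement:
# at least one "chat" model and one "embedding" model must exist.

def org_models_valid(models: list[dict]) -> bool:
    kinds = {m["type"] for m in models}
    return {"chat", "embedding"} <= kinds

models = [
    {"name": "gpt-4o", "type": "chat"},
    {"name": "text-embedding-3-small", "type": "embedding"},
]
print(org_models_valid(models))       # True
print(org_models_valid(models[:1]))   # False: no embedding model yet
```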

Finally, select the models that will run as defaults for your organization. The default chat model will appear as the suggested model in all users' chats, but they will have the ability to override it.

Connecting to your VPC

Rasgo will always connect to your LLM provider from these IP addresses. Make sure to whitelist them if you're running a secure VPC.

IP Address
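A whitelist rule in your VPC firewall effectively performs the check sketched below. The addresses used here are documentation placeholders (TEST-NET ranges), not Rasgo's real egress IPs, and the helper is hypothetical:

```python
# Hypothetical allowlist check, mirroring what a VPC firewall rule does:
# permit inbound traffic only from the published Rasgo egress addresses.
# 203.0.113.0/24 is a placeholder (TEST-NET-3), not a real Rasgo range.
import ipaddress

RASGO_EGRESS = [ipaddress.ip_network("203.0.113.0/24")]  # placeholder

def is_allowed(source_ip: str) -> bool:
    """True if the source address falls inside an allowlisted network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in RASGO_EGRESS)

print(is_allowed("203.0.113.10"))  # True
print(is_allowed("198.51.100.1"))  # False
```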
