Configuring LLM providers
OpenAI gpt-4o is configured as the default LLM provider for kagent (the default-model-config resource in the kagent namespace). Models are configured through the ModelConfig resource.
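If you want to see what the default looks like, you can read it back with kubectl. A quick check, assuming kagent is installed in the kagent namespace and the CRD's singular name is modelconfig:

```shell
# Print the default model configuration that ships with kagent
kubectl get modelconfig default-model-config -n kagent -o yaml
```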
The following providers are currently supported:
- OpenAI
- Azure OpenAI
- Anthropic (note that there's an active issue with tool calling)
- Ollama
Configuring Anthropic
- Create a Kubernetes Secret that stores the API key. Replace `<your_api_key>` with an actual API key:
```shell
export ANTHROPIC_API_KEY=<your_api_key>
kubectl create secret generic kagent-anthropic -n kagent --from-literal ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY
```
- Create a ModelConfig resource that references the secret and key name, and specify the Anthropic model you want to use:
```yaml
apiVersion: kagent.dev/v1alpha1
kind: ModelConfig
metadata:
  name: claude-model-config
  namespace: kagent
spec:
  apiKeySecretName: kagent-anthropic
  apiKeySecretKey: ANTHROPIC_API_KEY
  model: claude-3-sonnet-20240229
  provider: Anthropic
  anthropic: {}
```
- Apply the above resource to the cluster.
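For example, assuming you saved the manifest above as claude-model-config.yaml (the file name is arbitrary):

```shell
kubectl apply -f claude-model-config.yaml
```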
Once the resource is applied, you can select the model from the Model dropdown in the UI when creating or updating agents.

Configuring Azure OpenAI
- Create a Kubernetes Secret that stores the API key. Replace `<your_api_key>` with an actual API key:
```shell
export AZURE_OPENAI_API_KEY=<your_api_key>
kubectl create secret generic kagent-azureopenai -n kagent --from-literal AZURE_OPENAI_API_KEY=$AZURE_OPENAI_API_KEY
```
- Create a ModelConfig resource that references the secret and key name, and specify the additional information required for Azure OpenAI: the endpoint, the deployment name, the API version, and the Azure AD token. You can get these values from Azure.
```yaml
apiVersion: kagent.dev/v1alpha1
kind: ModelConfig
metadata:
  name: azureopenai-model-config
  namespace: kagent
spec:
  apiKeySecretName: kagent-azureopenai
  apiKeySecretKey: AZURE_OPENAI_API_KEY
  model: gpt-4o-mini
  provider: AzureOpenAI
  azureOpenAI:
    azureEndpoint: "https://{yourendpointname}.openai.azure.com/"
    apiVersion: "2025-03-01-preview"
    azureDeployment: "gpt-4o-mini"
    azureAdToken: <azure_ad_token_value>
```
- Apply the above resource to the cluster.
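As before, apply the manifest with kubectl and confirm the resource was created; the file name here is just an assumption:

```shell
kubectl apply -f azureopenai-model-config.yaml
kubectl get modelconfig azureopenai-model-config -n kagent
```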
Once the resource is applied, you can select the model from the Model dropdown in the UI when creating or updating agents.
Configuring Ollama
Ollama allows you to run LLMs locally on your computer or in a Kubernetes cluster. Configuring Ollama in kagent follows the same pattern as for other providers.
Here's an example of running Ollama in a Kubernetes cluster:
- Create a namespace for Ollama deployment and service:
kubectl create ns ollama
- Create the deployment and service:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  namespace: ollama
spec:
  selector:
    matchLabels:
      name: ollama
  template:
    metadata:
      labels:
        name: ollama
    spec:
      containers:
      - name: ollama
        image: ollama/ollama:latest
        ports:
        - name: http
          containerPort: 11434
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
  namespace: ollama
spec:
  type: ClusterIP
  selector:
    name: ollama
  ports:
  - port: 80
    name: http
    targetPort: http
    protocol: TCP
```
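Assuming you saved the manifest above as ollama.yaml (a hypothetical file name), apply it to the cluster:

```shell
kubectl apply -f ollama.yaml
```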
Run `kubectl get pod -n ollama` and wait until the pod is running. Once it is, you can port-forward to the Ollama service and use `ollama run [model-name]` to download and run the model, as shown below. You can download the Ollama binary from the Ollama website (https://ollama.com).
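A minimal sketch of that flow, assuming the Ollama CLI is installed on your local machine and using llama3 as the model:

```shell
# Forward local port 11434 to the Ollama service
# (service port 80 targets the container's port 11434)
kubectl port-forward -n ollama svc/ollama 11434:80

# In a second terminal: the local Ollama CLI talks to 127.0.0.1:11434 by
# default, so this downloads and runs the model inside the cluster pod
ollama run llama3
```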
As kagent relies on calling tools, make sure you're using a model that supports function calling.
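One way to check is the Ollama CLI's show command; recent Ollama versions list a model's capabilities (including tools), though the exact output depends on your Ollama version, so treat this as an assumption:

```shell
# Look for "tools" under the capabilities section of the output
ollama show llama3
```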
Assuming you've downloaded the llama3 model, you can use the following ModelConfig to configure it:
```yaml
apiVersion: kagent.dev/v1alpha1
kind: ModelConfig
metadata:
  name: llama3-model-config
  namespace: kagent
spec:
  apiKeySecretKey: OPENAI_API_KEY
  apiKeySecretName: kagent-openai
  model: llama3
  provider: Ollama
  ollama:
    host: http://ollama.ollama.svc.cluster.local
```
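Apply the resource to the cluster, for example (assuming the hypothetical file name llama3-model-config.yaml):

```shell
kubectl apply -f llama3-model-config.yaml
```

Once the resource is applied, you can select the model from the Model dropdown in the UI when creating or updating agents.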