Portkey is also available on the Azure Marketplace. You can deploy Portkey directly through your Azure console, which streamlines procurement and deployment.
Deploy via Azure Marketplace →
Components and Sizing Recommendations
| Component | Options | Sizing Recommendations |
|---|---|---|
| AI Gateway | Deploy in your AKS cluster using Helm charts. | Use AKS B2ms worker nodes, each providing at least 2 vCPUs and 4 GiB of memory. For high availability, deploy them across multiple Availability Zones. |
| Logs Store (optional) | Azure Blob Storage or S3-compatible storage | Each log document is ~10 KB (uncompressed). |
| Cache (Prompts, Configs & Providers) | Built-in Redis or Azure Cache for Redis | Deployed within the same VNet as the Portkey Gateway. |
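To illustrate the sizing guidance above, the following sketch adds a zone-spread pool of B2ms nodes to an existing AKS cluster (the pool name gatewaypool, node count, and zone list are placeholder choices; adjust to your capacity needs):
az aks nodepool add \
  --resource-group <RESOURCE_GROUP> \
  --cluster-name <AKS_CLUSTER_NAME> \
  --name gatewaypool \
  --node-vm-size Standard_B2ms \
  --node-count 3 \
  --zones 1 2 3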
Prerequisites
Ensure the following tools and resources are installed and available (all are used in the steps that follow):
- Azure CLI (az), authenticated against your subscription
- kubectl, configured to access your AKS cluster
- Helm v3
- An AKS cluster in which to deploy the gateway
Create a Portkey Account
- Go to the Portkey website.
- Sign up for a Portkey account.
- Once logged in, locate and save your Organisation ID for future reference. It can be found in the browser URL: https://app.portkey.ai/organisation/<organisation_id>/
- Contact the Portkey AI team and provide your Organisation ID and the email address used during signup.
- The Portkey team will share the following information with you:
- Docker credentials for the Gateway images (username and password).
- License: Client Auth Key.
Setup Project Environment
cluster_name=<AKS_CLUSTER_NAME> # Specify the name of the AKS cluster where the gateway will be deployed.
namespace=<NAMESPACE> # Specify the namespace where the gateway should be deployed (for example, portkeyai).
service_account_name=<SERVICE_ACCOUNT_NAME> # Provide a name for the Service Account to be associated with the Gateway pod (for example, gateway-sa)
mkdir portkey-gateway
cd portkey-gateway
touch values.yaml
Image Credentials Configuration
# Update the values.yaml file
imageCredentials:
  - name: portkey-enterprise-registry-credentials
    create: true
    registry: https://index.docker.io/v1/
    username: <PROVIDED BY PORTKEY>
    password: <PROVIDED BY PORTKEY>
images:
  gatewayImage:
    repository: "docker.io/portkeyai/gateway_enterprise"
    pullPolicy: Always
    tag: "latest"
  dataserviceImage:
    repository: "docker.io/portkeyai/data-service"
    pullPolicy: Always
    tag: "latest"
  redisImage:
    repository: "docker.io/redis"
    pullPolicy: IfNotPresent
    tag: "7.2-alpine"
environment:
  create: true
  secret: true
  data:
    ANALYTICS_STORE: control_plane
    SERVICE_NAME: <SERVICE_NAME> # Specify a name for the service
    PORTKEY_CLIENT_AUTH: <PROVIDED BY PORTKEY>
    ORGANISATIONS_TO_SYNC: <ORGANISATION_ID> # Obtained after signing up for a Portkey account.
Based on your choice of components and their configuration, update values.yaml as described in the sections below.
MCP Gateway (Optional)
By default, only the AI Gateway is enabled in the deployment. To enable the MCP Gateway, add the following configuration to values.yaml:
environment:
  data:
    SERVER_MODE: "all" # Set to "mcp" or "all" (see Server Modes below)
    MCP_PORT: "8788"
    MCP_GATEWAY_BASE_URL: "<Set to the MCP Load Balancer URL or a hostname pointing to the MCP Service>"
Note: MCP_GATEWAY_BASE_URL does not need to be set during the initial deployment. Once the MCP Load Balancer has been created after the first deployment and hostname mapping is configured, you can set this value and redeploy.
Server Modes
"" (empty or not provided): Deploys only the AI Gateway. This is the default configuration.
"mcp": Deploys only the MCP Gateway.
"all": Deploys both the AI Gateway and MCP Gateway.
Cache Store
The Portkey Gateway deployment includes a Redis instance pre-installed by default. You can either use this built-in Redis or connect to an external cache like Azure Cache for Redis or Azure Managed Redis.
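If you opt for an external cache, the same environment variables apply. The sketch below assumes an Azure Cache for Redis instance reachable from the gateway VNet; the rediss:// scheme and port 6380 are Azure Cache for Redis TLS defaults, and <CACHE_NAME> and <ACCESS_KEY> are placeholders:
environment:
  data:
    CACHE_STORE: redis
    REDIS_URL: "rediss://:<ACCESS_KEY>@<CACHE_NAME>.redis.cache.windows.net:6380"
    REDIS_TLS_ENABLED: "true"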
Built-in Redis
No additional permissions or network configurations are required.
## To use the built-in Redis, add the following configuration to the values.yaml file.
environment:
  data:
    CACHE_STORE: redis
    REDIS_URL: "redis://redis:6379"
    REDIS_TLS_ENABLED: "false"
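After deployment, you can sanity-check the in-cluster cache with a one-off pod against the Service name referenced by REDIS_URL (a minimal sketch; it assumes the chart exposes the built-in Redis as a Service named redis in the gateway namespace):
kubectl run redis-ping --rm -it --restart=Never -n $namespace \
  --image=redis:7.2-alpine -- redis-cli -h redis -p 6379 ping
# Expected output: PONG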
Log Store
Azure Blob Storage
- (Optional) If not already done, create an Azure Storage Account and a Blob Container to store LLM logs.
- Set up access to the log store. The Gateway supports the following methods for connecting to Blob Storage:
  - Managed Identity
  - Entra ID
Depending on the chosen Blob Storage access method, update values.yaml with one of the following configurations.
## To enable Managed Identity, update values.yaml with the following details:
environment:
  data:
    LOG_STORE: azure
    AZURE_STORAGE_ACCOUNT: <STORAGE_ACCOUNT_NAME> # Name of the storage account used for storing LLM logs.
    AZURE_STORAGE_CONTAINER: <STORAGE_CONTAINER> # Name of the blob container used for storing LLM logs.
    AZURE_AUTH_MODE: managed
## To enable Entra ID, update values.yaml with the following details:
environment:
  data:
    LOG_STORE: azure
    AZURE_STORAGE_ACCOUNT: <STORAGE_ACCOUNT_NAME> # Name of the storage account used for storing LLM logs.
    AZURE_STORAGE_CONTAINER: <STORAGE_CONTAINER> # Name of the blob container used for storing LLM logs.
    AZURE_AUTH_MODE: entra
    AZURE_ENTRA_CLIENT_ID: <ENTRA_CLIENT_ID> # Client ID of the app created while setting up Entra ID access for the blob store.
    AZURE_ENTRA_CLIENT_SECRET: <ENTRA_CLIENT_SECRET> # Client secret of the app created while setting up Entra ID access for the blob store.
    AZURE_ENTRA_TENANT_ID: <ENTRA_TENANT_ID> # Tenant ID obtained while setting up Entra ID access for the blob store.
- (Optional) Configure the log path format using LOG_STORE_FILE_PATH_FORMAT. See Log Object Path Format for details.
Network Configuration
Set Up External Access
To make the Gateway service accessible externally, you can set up either of the following:
- Azure Load Balancer with NGINX Ingress Controller
- Azure Load Balancer with Kubernetes Service
Azure Load Balancer with NGINX Ingress Controller
Recommended if SERVER_MODE is set to all.
- Enable the application routing add-on.
RESOURCE_GROUP=<RESOURCE_GROUP_NAME> # Provide the name of the resource group in which the AKS cluster exists
CLUSTER_NAME=<CLUSTER_NAME> # Provide the name of the AKS cluster
az aks approuting enable \
  --resource-group ${RESOURCE_GROUP} \
  --name ${CLUSTER_NAME} \
  --nginx None
- Create NGINX Ingress Controller
External LB with Static IP
STATIC_IP_NAME=portkey-lb-static-ip
# Create Static Public IP to associate with LB
az network public-ip create \
--resource-group ${RESOURCE_GROUP} \
--name ${STATIC_IP_NAME} \
--sku Standard \
--allocation-method static
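# (Optional, illustrative) Print the allocated static IP; useful later when
# creating DNS records for the gateway hostname.
az network public-ip show \
  --resource-group ${RESOURCE_GROUP} \
  --name ${STATIC_IP_NAME} \
  --query ipAddress -o tsv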
# Grant the AKS cluster identity permissions on the created Public IP
CLIENT_ID=$(az aks show --name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --query identity.principalId -o tsv)
RG_SCOPE=$(az group show --name ${RESOURCE_GROUP} --query id -o tsv)
az role assignment create \
--assignee ${CLIENT_ID} \
--role "Network Contributor" \
--scope ${RG_SCOPE}
# Create External NGINX Ingress Controller
kubectl apply -f - <<EOF
apiVersion: approuting.kubernetes.azure.com/v1alpha1
kind: NginxIngressController
metadata:
  name: nginx-static
spec:
  ingressClassName: nginx-static
  controllerNamePrefix: nginx-static
  loadBalancerAnnotations:
    service.beta.kubernetes.io/azure-pip-name: "${STATIC_IP_NAME}"
    service.beta.kubernetes.io/azure-load-balancer-resource-group: "${RESOURCE_GROUP}"
EOF
Internal LB with Private IPv4 IP
kubectl apply -f - <<EOF
apiVersion: approuting.kubernetes.azure.com/v1alpha1
kind: NginxIngressController
metadata:
  name: nginx-internal
spec:
  ingressClassName: nginx-internal
  controllerNamePrefix: nginx-internal
  loadBalancerAnnotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
EOF
- Update values.yaml with the following configuration.
ingress:
  enabled: true
  ingressClassName: "<NginxIngressControllerName>" # Set 'nginx-static' for the external LB or 'nginx-internal' for the internal LB
  # hostname: "<AI Gateway Hostname>"
  # hostBased: true
  # mcpHostname: "<MCP Gateway Hostname>"
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/v1/health"
- Retrieve the IP address of the Azure Load Balancer.
IP=$(kubectl get ingress portkey-ai-gateway -n $namespace -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo ${IP}
Notes:
- If SERVER_MODE is set to all (i.e., both the AI Gateway and the MCP Gateway are enabled), you must enable host-based routing by setting hostBased: true and specify the hostnames through which the AI Gateway and MCP Gateway will be accessed.
- After the Load Balancer is provisioned, create a DNS record in your public or private hosted zone that points to the Load Balancer's static public or private IPv4 address. If SERVER_MODE is set to all, you must create two separate DNS records (one for the AI Gateway hostname and one for the MCP Gateway hostname), each pointing to the Load Balancer IP.
- The NGINX Ingress Controller supports additional annotations, such as TLS configuration and custom health checks, for managing the Ingress load balancer. For a complete list of supported annotations, refer to the Azure NGINX ingress annotations documentation.
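For instance, if your zone is hosted in Azure DNS, an A record for the AI Gateway hostname could be created as follows (a sketch only: the zone name example.com, the record name gateway, and <DNS_ZONE_RESOURCE_GROUP> are placeholders, and ${IP} is the Load Balancer IP retrieved above):
az network dns record-set a add-record \
  --resource-group <DNS_ZONE_RESOURCE_GROUP> \
  --zone-name example.com \
  --record-set-name gateway \
  --ipv4-address ${IP}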
Azure Load Balancer with Kubernetes Service
service:
  type: LoadBalancer
  port: 80 # LB listener port
  containerPort: 8787
  annotations:
    # Specify the type of Load Balancer to create - internal or internet-facing
    service.beta.kubernetes.io/azure-load-balancer-internal: "false"
    service.beta.kubernetes.io/azure-load-balancer-health-probe-protocol: "http"
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/v1/health"
    # Replace the CIDR ranges to make this more restrictive
    service.beta.kubernetes.io/azure-allowed-ip-ranges: 0.0.0.0/0
Azure Load Balancer supports various annotations to fine-tune its configuration. For a complete list of supported annotations, see the Azure Load Balancer annotations.
Note: service.containerPort must match environment.data.PORT.
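Once the Service is provisioned (after the Helm install in the next section), you can retrieve the external IP assigned by the Azure Load Balancer. A minimal sketch; the Service name portkey-ai-gateway mirrors the Ingress name used earlier and may differ in your release:
kubectl get service portkey-ai-gateway -n $namespace \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'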
Deploying Portkey Gateway
# Add the Portkey AI Gateway helm repository
helm repo add portkey-ai https://portkey-ai.github.io/helm
helm repo update
# Install the chart
helm upgrade --install portkey-ai portkey-ai/gateway -f ./values.yaml -n $namespace --create-namespace
Verify the deployment
To confirm that the deployment was successful, follow these steps:
- Verify that all pods are running correctly.
kubectl get pods -n $namespace
# You should see all pods with a 'STATUS' of 'Running'.
Note: If pods are in a Pending, CrashLoopBackOff, or other error state, inspect the pod logs and events to diagnose potential issues.
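For example (standard kubectl commands; replace <POD_NAME> with the failing pod's actual name):
kubectl describe pod <POD_NAME> -n $namespace # Shows events such as image pull or scheduling failures
kubectl logs <POD_NAME> -n $namespace --previous # Prints logs from the last crashed container, useful for CrashLoopBackOff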
- Test the Gateway by sending a cURL request.
- Port-forward the Gateway pod
kubectl port-forward <POD_NAME> -n $namespace 9000:8787 # Replace <POD_NAME> with your Gateway pod's actual name.
- Once port forwarding is active, open a new terminal window or tab and send a test request by running:
# Specify LLM provider and Portkey API keys
OPENAI_API_KEY=<OPENAI_API_KEY> # Replace <OPENAI_API_KEY> with an actual API key
PORTKEY_API_KEY=<PORTKEY_API_KEY> # Replace <PORTKEY_API_KEY> with a Portkey API key, which can be created on the Portkey website (https://app.portkey.ai/api-keys).
# Configure and send the curl request
curl 'http://localhost:9000/v1/chat/completions' \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "x-portkey-provider: openai" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
-d '{
"model": "gpt-4o-mini",
"messages": [{"role": "user","content": "What is a fractal?"}]
}'
- Test the Gateway service's integration with the Load Balancer.
# Replace <LB_IP> and <LISTENER_PORT> with the Load Balancer's IP/DNS name and listener port respectively.
curl 'http://<LB_IP>:<LISTENER_PORT>/v1/chat/completions' \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "x-portkey-provider: openai" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
-d '{
"model": "gpt-4o-mini",
"messages": [{"role": "user","content": "What is a fractal?"}]
}'
Integrating Gateway with Control Plane
Outbound Connectivity (Data Plane to Control Plane)
Portkey supports the following methods for integrating the Data Plane with the Control Plane for outbound connectivity:
- Azure Private Link
- Over the Internet
Ensure Outbound Network Access
By default, Kubernetes allows full outbound access, but if your cluster has NetworkPolicies that restrict egress, configure them to allow outbound traffic.
Example NetworkPolicy for Outbound Access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  namespace: portkeyai
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
This allows the gateway to access LLMs hosted both within your VNet and externally. It also enables the sync service to connect to the Portkey Control Plane.
Azure Private Link
Establishes a secure, private connection between the Control Plane and Data Plane within the Azure network.
Steps to establish Azure Private Link connectivity:
- Contact Portkey and provide your Azure Subscription ID so it can be whitelisted in Portkey's Control Plane.
- Once you receive confirmation from Portkey that your Azure subscription is whitelisted, create an Azure Private Endpoint connection to the Portkey Control Plane.
# Replace <SUBNET_ID> with Subnet ID (not Subnet Name) in which Portkey gateway is deployed.
az network private-endpoint create \
--name portkey-cp-pvt-endpoint \
--resource-group ${RESOURCE_GROUP} \
--subnet <SUBNET_ID> \
--private-connection-resource-id "/subscriptions/4bec865f-23ea-4d04-be20-3e883cbb3eb1/resourceGroups/privatelink/providers/Microsoft.Network/privateLinkServices/privatelink-proxy-pls" \
--connection-name portkey-cp-pvt-connection
- Retrieve the private IP of the created endpoint.
PE_IP=$(az network nic show --ids $(az network private-endpoint show --name portkey-cp-pvt-endpoint --resource-group ${RESOURCE_GROUP} --query "networkInterfaces[0].id" -o tsv) --query "ipConfigurations[0].privateIPAddress" -o tsv)
- Contact the Portkey team and share the endpoint name with them.
- Once the connection request is approved, create a Private Hosted Zone and link it to the Gateway VNet.
# Create PHZ
az network private-dns zone create \
--resource-group ${RESOURCE_GROUP} \
--name privatelink-az.portkey.ai
# Create VNet Link
# Replace <VNET_ID> with the VNet ID (not the VNet name) of the AKS node group.
# IMPORTANT!! <VNET_ID> must be the full Resource ID of the AKS node group's VNet.
# You can find it by navigating to AKS Cluster > Settings > Networking > VNet integration and clicking the VNet.
# On the VNet page, open JSON View and copy the Resource ID.
az network private-dns link vnet create \
--resource-group ${RESOURCE_GROUP} \
--zone-name privatelink-az.portkey.ai \
--name portkey-gateway-vnet-link \
--virtual-network "<VNET_ID>" \
--registration-enabled false
- Create a record in the PHZ pointing to Portkey’s Private Endpoint IP.
az network private-dns record-set a add-record \
--resource-group ${RESOURCE_GROUP} \
--zone-name privatelink-az.portkey.ai \
--record-set-name azure-cp \
--ipv4-address ${PE_IP}
- Update the values.yaml file with the following environment variables.
environment:
  create: true
  secret: true
  data:
    ALBUS_BASEPATH: "https://azure-cp.privatelink-az.portkey.ai/albus"
    CONTROL_PLANE_BASEPATH: "https://azure-cp.privatelink-az.portkey.ai/api/v1"
    SOURCE_SYNC_API_BASEPATH: "https://azure-cp.privatelink-az.portkey.ai/api/v1/sync"
    CONFIG_READER_PATH: "https://azure-cp.privatelink-az.portkey.ai/api/model-configs"
- Re-deploy the gateway.
helm upgrade --install portkey-ai portkey-ai/gateway -f ./values.yaml -n $namespace --create-namespace
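After redeploying, you can confirm that pods in the gateway VNet resolve the private hostname to the Private Endpoint IP. A minimal sketch using a throwaway pod (the busybox image and pod name are arbitrary choices):
kubectl run dns-check --rm -it --restart=Never -n $namespace \
  --image=busybox -- nslookup azure-cp.privatelink-az.portkey.ai
# The resolved address should match ${PE_IP}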
Over the Internet
Ensure Gateway has access to the following endpoints over the internet.
https://api.portkey.ai
https://albus.portkey.ai
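To verify egress from inside the cluster, a quick check with a one-off curl pod can help (the image and pod name are illustrative; any HTTP response code indicates the endpoint is reachable at the network level):
kubectl run egress-check --rm -it --restart=Never -n $namespace \
  --image=curlimages/curl --command -- curl -s -o /dev/null -w "%{http_code}\n" https://albus.portkey.ai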
Inbound Connectivity (Control Plane to Data Plane)
Portkey supports the following methods for integrating the Control Plane with the Data Plane for inbound connectivity:
- Azure Private Link
- IP Whitelisting
Azure Private Link
Establishes a secure, private connection between the Control Plane and Data Plane within the Azure network.
Steps to establish Azure Private Link connectivity:
- Create a dedicated subnet for the Azure Private Link Service.
# Replace <SUBNET_CIDR> with the CIDR you want to allocate to this subnet. The CIDR must be at least /27, must fall within the AKS VNet CIDR range, and must not overlap with existing subnet CIDRs.
# Replace <VNET_NAME> with the name of the VNet in which the AKS node group runs.
# IMPORTANT!! <VNET_RESOURCE_GROUP> must be the resource group in which the AKS node group exists. It starts with MC_*.
az network vnet subnet create \
  --resource-group ${VNET_RESOURCE_GROUP} \
  --vnet-name <VNET_NAME> \
  --name private-endpoints-subnet \
  --address-prefixes <SUBNET_CIDR> \
  --private-link-service-network-policies Disabled
- On the Azure Portal, navigate to Private Link services and click + Create to create a new Private Link Service (PLS).
- Select the Subscription and Resource Group, provide a name for the PLS, and click Next.
- Under Outbound settings, select the Load Balancer created for the gateway, then choose the appropriate frontend IP configuration of that Load Balancer.
- In the Source NAT subnet field, select the dedicated PLS subnet created earlier, and click Next.
- Under Access security, select Restricted by subscription and whitelist the Portkey Control Plane subscription ID (4bec865f-23ea-4d04-be20-3e883cbb3eb1).
- Leave all other settings as default and click Create to provision the Private Link Service.
- Contact the Portkey team and share the alias of the created Private Link Service, along with the Gateway URL.
- Once the Portkey team has raised the connection request from their end, approve it by navigating to the PLS in the Azure Portal, then go to Private Endpoint connections and click Approve.
IP Whitelisting
Allows the Control Plane to access the Data Plane over the internet by restricting inbound traffic to the Control Plane's specific IP addresses. This method requires the Data Plane to have a publicly accessible endpoint.
To whitelist, add an inbound rule to the Azure NSG or firewall allowing connections from the Portkey Control Plane's IPs (54.81.226.149, 34.200.113.35, 44.221.117.129) on the required port.
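As an illustration, an NSG inbound rule via the CLI might look like the following (a sketch only: <NSG_NAME>, the rule priority, and port 443 are assumptions; use the port your Gateway endpoint actually listens on):
az network nsg rule create \
  --resource-group ${RESOURCE_GROUP} \
  --nsg-name <NSG_NAME> \
  --name allow-portkey-control-plane \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 54.81.226.149 34.200.113.35 44.221.117.129 \
  --destination-port-ranges 443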
To integrate the Control Plane with the Data Plane, contact the Portkey team and provide the Public Endpoint of the Data Plane.
Verifying Gateway Integration with the Control Plane
- Send a test request to the Gateway using curl.
- On the Portkey website, go to Logs.
- Verify that the test request appears in the logs and that you can view its full details by selecting the log entry.
Uninstalling Portkey Gateway
helm uninstall portkey-ai --namespace $namespace
Setting up Permissions
Azure Blob Storage
To allow the Portkey Gateway to access Azure Blob Storage for log storage, permissions must be granted.
Follow the steps below to set up these permissions according to your selected access method.
Managed Identity
- Specify the details:
CLUSTER_NAME=<CLUSTER_NAME> # Specify name of AKS cluster
RESOURCE_GROUP=<RESOURCE_GROUP> # Specify the name of the resource group that contains the storage account.
STORAGE_ACCOUNT_NAME=<STORAGE_ACCOUNT_NAME> # Specify the name of the storage account which will be used for storing LLM logs.
CONTAINER_NAME=<CONTAINER_NAME> # Specify the name of the blob store container which will be used for storing LLM logs.
- Fetch the kubelet identity associated with the AKS cluster.
KUBELET_OBJECT_ID=$(az aks show \
--resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--query "identityProfile.kubeletidentity.objectId" \
--output tsv)
- Grant the identity a role that allows it to access Blob Storage.
# Fetch storage id of storage account
STORAGE_ID=$(az storage account show \
--name $STORAGE_ACCOUNT_NAME \
--resource-group $RESOURCE_GROUP --query id -o tsv)
# Grant Storage Blob Data Contributor to kubelet identity
az role assignment create \
--assignee-object-id $KUBELET_OBJECT_ID \
--assignee-principal-type ServicePrincipal \
--role "Storage Blob Data Contributor" \
--scope "$STORAGE_ID/blobServices/default/containers/$CONTAINER_NAME"
- Register an Entra ID application.
AZURE_ENTRA_CLIENT_ID=$(az ad app create \
--display-name portkey-gateway-logstore-app \
--sign-in-audience "AzureADMyOrg" \
--query appId -o tsv)
echo $AZURE_ENTRA_CLIENT_ID
Make a note of the value of AZURE_ENTRA_CLIENT_ID, as it will be required when defining values.yaml.
- Create a client secret for the registered app.
AZURE_ENTRA_CLIENT_SECRET=$(az ad app credential reset \
--id $AZURE_ENTRA_CLIENT_ID \
--display-name "entra-secret" \
--query password -o tsv)
echo $AZURE_ENTRA_CLIENT_SECRET
Make a note of the value of AZURE_ENTRA_CLIENT_SECRET, as it will be required when defining values.yaml.
- Retrieve your Tenant ID.
AZURE_ENTRA_TENANT_ID=$(az account show \
--query tenantId -o tsv)
echo $AZURE_ENTRA_TENANT_ID
Make a note of the value of AZURE_ENTRA_TENANT_ID, as it will be required when defining values.yaml.
- Grant the app a role that allows it to access Blob Storage.
RESOURCE_GROUP=<RESOURCE_GROUP> # Specify the name of the resource group that contains the storage account.
STORAGE_ACCOUNT_NAME=<STORAGE_ACCOUNT_NAME> # Specify the name of the storage account which will be used for storing LLM logs.
CONTAINER_NAME=<CONTAINER_NAME> # Specify the name of the blob store container which will be used for storing LLM logs.
# Create a Service Principal for the app.
az ad sp create --id $AZURE_ENTRA_CLIENT_ID
# Fetch storage id of storage account
STORAGE_ID=$(az storage account show \
--name $STORAGE_ACCOUNT_NAME \
--resource-group $RESOURCE_GROUP \
--query id -o tsv)
# Assign the role to the app.
az role assignment create \
--assignee $AZURE_ENTRA_CLIENT_ID \
--role "Storage Blob Data Contributor" \
--scope "$STORAGE_ID/blobServices/default/containers/$CONTAINER_NAME"
Examples
Built-in Redis
The following sample values.yaml shows how to configure the built-in Redis cache and Azure Blob Storage as the log store using Entra ID.
images:
  gatewayImage:
    repository: "docker.io/portkeyai/gateway_enterprise"
    pullPolicy: Always
    tag: "latest"
  dataserviceImage:
    repository: "docker.io/portkeyai/data-service"
    pullPolicy: Always
    tag: "latest"
  redisImage:
    repository: "docker.io/redis"
    pullPolicy: IfNotPresent
    tag: "7.2-alpine"
imageCredentials:
  - name: portkey-enterprise-registry-credentials
    create: true
    registry: https://index.docker.io/v1/
    username: <DOCKER_USERNAME>
    password: <DOCKER_PASSWORD>
environment:
  create: true
  secret: true
  data:
    ANALYTICS_STORE: control_plane
    SERVICE_NAME: gateway
    PORTKEY_CLIENT_AUTH: <CLIENT_AUTH> # Replace <CLIENT_AUTH> with the client auth key shared by the Portkey team.
    ORGANISATIONS_TO_SYNC: <ORGANISATION_ID> # Replace <ORGANISATION_ID> with the organisation_id of your account.
    PORT: "8787"
    # Configuration for using the built-in Redis
    CACHE_STORE: redis
    REDIS_URL: "redis://redis:6379"
    REDIS_TLS_ENABLED: "false"
    # Configuration for enabling Entra ID access to Azure Blob Storage
    LOG_STORE: azure
    AZURE_STORAGE_ACCOUNT: <STORAGE_ACCOUNT_NAME> # Name of the storage account used for storing LLM logs.
    AZURE_STORAGE_CONTAINER: <STORAGE_CONTAINER> # Name of the blob container used for storing LLM logs.
    AZURE_AUTH_MODE: entra
    AZURE_ENTRA_CLIENT_ID: <ENTRA_CLIENT_ID> # Client ID of the app created while setting up Entra ID access for the blob store.
    AZURE_ENTRA_CLIENT_SECRET: <ENTRA_CLIENT_SECRET> # Client secret of the app created while setting up Entra ID access for the blob store.
    AZURE_ENTRA_TENANT_ID: <ENTRA_TENANT_ID> # Tenant ID obtained while setting up Entra ID access for the blob store.
# Enabling a Load Balancer to provide access from outside the cluster
service:
  type: LoadBalancer
  port: 8787
  containerPort: 8787
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "false"
    service.beta.kubernetes.io/azure-load-balancer-health-probe-protocol: "http"
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/v1/health"
    service.beta.kubernetes.io/azure-allowed-ip-ranges: 0.0.0.0/0