
Question: Need an EPP for each InferencePool? #2018


Description

@ed-pai

I've been testing out GAIE + Kgateway (agentgateway) + vLLM for the last couple of days and am trying to understand a couple of things. I'd really appreciate the help:

1. When hosting multiple vLLM engines with different models behind BBR (Body-Based Routing), should each unique model/engine instance get its own EPP (Endpoint Picker)?

For example:

```
# Should this have 3 separate EPPs deployed?
InferencePool 1 = vLLM (2 replicas) w/ Qwen3-32B
InferencePool 2 = vLLM (1 replica)  w/ GPT-OSS-120B
InferencePool 3 = vLLM (4 replicas) w/ Llama-3.1 + 3 LoRAs
```
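
If the answer is yes, I'd expect each pool to reference its own picker, roughly like this (a minimal sketch against the v1alpha2 API; all names and ports are hypothetical, and field names may differ in newer releases):

```yaml
# One InferencePool per engine/model, each pointing at its own EPP
# Service via extensionRef. Names and ports here are hypothetical.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: qwen3-32b-pool
spec:
  targetPortNumber: 8000        # port the vLLM pods listen on
  selector:
    app: vllm-qwen3-32b         # matches the 2-replica vLLM Deployment
  extensionRef:
    name: qwen3-32b-epp         # dedicated EPP Service for this pool
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: gpt-oss-120b-pool
spec:
  targetPortNumber: 8000
  selector:
    app: vllm-gpt-oss-120b
  extensionRef:
    name: gpt-oss-120b-epp      # ...and likewise for the Llama pool
```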

2. Is there a mechanism that aggregates the /v1/models responses from all pools and serves them as a single endpoint?

I wrote a PoC of an aggregator that watches the InferencePools, discovers all downstream vLLM instances, periodically queries each instance's /v1/models, and serves the merged list from a single upstream endpoint (which picks up all the LoRAs as well). However, this feels like something that might already be native to GAIE. How are people solving this problem today?
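
To make the PoC concrete, its core is roughly the following (a stripped-down, stdlib-only sketch; the upstream URLs are hard-coded placeholders where the real version discovers them from the InferencePools):

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical downstream vLLM base URLs (one per InferencePool).
UPSTREAMS = [
    "http://vllm-qwen3:8000",
    "http://vllm-gpt-oss:8000",
    "http://vllm-llama31:8000",
]

def fetch_models(base_url: str) -> list[dict]:
    """Query one vLLM instance's /v1/models and return its model list."""
    with urllib.request.urlopen(f"{base_url}/v1/models", timeout=5) as resp:
        return json.load(resp).get("data", [])

class ModelsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/v1/models":
            self.send_error(404)
            return
        # Merge the model lists from every upstream, de-duplicating by id.
        # vLLM lists served LoRA adapters alongside the base model here, so
        # the merged list includes them automatically.
        merged: dict[str, dict] = {}
        for base in UPSTREAMS:
            try:
                for model in fetch_models(base):
                    merged.setdefault(model["id"], model)
            except OSError:
                continue  # skip unreachable instances
        body = json.dumps({"object": "list", "data": list(merged.values())}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 9000), ModelsHandler).serve_forever()
```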
