Deploying an endpoint

Seven-step wizard from model selection to live URL.

  1. Pick a kind — Predictive or LLM.
  2. Pick a model source — from your registry, from HuggingFace, from S3, from a PVC, or a custom container image.
  3. Pick a runtime — Workbench filters the runtime list by the kind you chose and by whether a GPU is required.
  4. Set sizing — CPU, memory, GPU, replica bounds (min/max), and the autoscaling metric (concurrency / RPS / CPU).
  5. Pick a compute profile — usually a GPU profile for LLMs and a smaller CPU-only profile for predictive models. Steps 1–5 come together in the spec sketch after this list.
  6. (LLM only) Configure parallelism — tensor / pipeline / data sharding to fit the model on your GPUs; see the GPU-count check after this list.
  7. Submit. The endpoint goes Pending while images pull and pods schedule, then Running once at least one replica passes its readiness check; a polling sketch follows below.
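
Taken together, steps 1–5 amount to a single endpoint spec. Below is a minimal Python sketch of what the wizard assembles; every field name and value is illustrative, not Workbench's actual API.

```python
# Hypothetical shape of the spec the wizard builds up; field names and
# values are illustrative only.
endpoint_spec = {
    "kind": "llm",                            # step 1: "predictive" or "llm"
    "source": {                               # step 2: registry / huggingface / s3 / pvc / image
        "type": "huggingface",
        "model_id": "org/some-model",         # placeholder model id
    },
    "runtime": "example-llm-runtime",         # step 3: filtered by kind and GPU need
    "sizing": {                               # step 4
        "cpu": "8",
        "memory": "32Gi",
        "gpu": 1,
        "replicas": {"min": 1, "max": 4},
        "autoscaling_metric": "concurrency",  # or "rps" / "cpu"
    },
    "compute_profile": "example-gpu-profile", # step 5
}
```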
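
For step 6, the three parallelism degrees multiply. One common convention, assumed here, is that each sharded copy of the model spans tensor × pipeline GPUs, and data parallelism runs that many copies. A quick arithmetic check (the helper is our own, not a Workbench function):

```python
def gpus_needed(tensor: int, pipeline: int, data: int) -> int:
    # Each sharded copy of the model spans tensor * pipeline GPUs;
    # data parallelism multiplies the number of copies.
    return tensor * pipeline * data

# e.g. tensor=4, pipeline=2, data=1 -> 8 GPUs, which fits one 8-GPU node;
# tensor=4, pipeline=2, data=2 would need 16.
print(gpus_needed(4, 2, 1))  # 8
```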
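
After submitting (step 7), the endpoint moves from Pending to Running. If a script needs to block until the endpoint is ready, a loop like the following works; `get_endpoint` is a stand-in for however you read endpoint state, since Workbench's client API is not specified here.

```python
import time

def wait_until_running(get_endpoint, name: str, timeout_s: int = 600) -> None:
    """Poll a hypothetical get_endpoint(name) -> {"status": ...} callable
    until the endpoint reports Running or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_endpoint(name)["status"] == "Running":  # >=1 replica passed readiness
            return
        time.sleep(10)  # still Pending: images pulling, pods scheduling
    raise TimeoutError(f"{name} not Running after {timeout_s}s")
```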