Run a stateless workspace-scoped LLM chat request

Uses the workspace's configured LLM integrations. Provide a concrete model to pin the request to it, or omit the model to use auto selection. For streaming, set stream=true, provide a request_id, and subscribe to the llm_chat:{workspace_id}:{request_id} channel before posting.
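The streaming flow above can be sketched as follows. This is a hypothetical illustration: the helper name, the exact body field names, and the client-side id generation are assumptions, not the documented API surface; only the channel pattern `llm_chat:{workspace_id}:{request_id}` comes from the description above.

```python
import uuid

def build_stream_request(workspace_id: str, messages: list) -> tuple[str, dict]:
    """Build the channel to subscribe to and the body to POST.

    Subscribe to the returned channel *before* posting, then send the
    body with stream=true so chunks arrive on that channel.
    """
    # Client-generated id used to correlate the stream with this request
    # (hypothetical; the API may accept any UUID here).
    request_id = str(uuid.uuid4())
    channel = f"llm_chat:{workspace_id}:{request_id}"
    body = {
        "messages": messages,      # required
        "request_id": request_id,  # ties the stream channel to this request
        "stream": True,            # defaults to false when omitted
        # "model": ...             # omit to use auto selection
    }
    return channel, body

channel, body = build_stream_request(
    "11111111-2222-3333-4444-555555555555",
    [{"role": "user", "content": "Hello"}],
)
print(channel)
```

Subscribing first avoids a race in which the first streamed chunks are published before the client is listening.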

Path Params

- uuid (required)

Body Params

LLM.ChatRequest

- string | null (enum)
- string | null (enum)
- json_schema — object | null
- integer | null (≥ 1)
- messages — array of objects (required)
- string | null (enum)
- string | null (enum)
- uuid | null
- string | null (enum)
- boolean (defaults to false)
- number | null
- integer | null (≥ 1)
- number | null
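A minimal non-streaming request body, given that only `messages` is required and `stream` defaults to false. The message field names (`role`, `content`) are assumptions based on common chat-request schemas, not taken from the parameter table above.

```python
import json

# Smallest valid body under the stated schema: messages is the only
# required field; everything else falls back to its default.
body = {
    "messages": [
        {"role": "user", "content": "Summarize this workspace's activity."}
    ]
}
print(json.dumps(body))
```

All other fields are nullable or defaulted, so they can simply be omitted.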
Responses

application/json