# API Documentation
## Base URL

The base URL for all API endpoints is your frontend's origin, so requests can use relative paths such as `/api`.
## GET /api

Returns the version of the backend API.
### Example Request

```js
fetch('/api')
```
### Example Response

```json
{
  "version": {
    "v": "v1",
    "full": "backend-v1"
  }
}
```
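A minimal sketch of reading the version, assuming the JSON shape shown above:

```js
// Fetch the backend version and read the full version string.
const res = await fetch('/api');
const { version } = await res.json();
console.log(version.full); // e.g. "backend-v1"
```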
## GET /api/models

Returns a list of all available models.
### Example Request

```js
fetch('/api/models')
```
### Example Response

```json
[
  {
    "id": "llama3",
    "name": "Llama 3",
    "provider": "ollama"
  },
  {
    "id": "gemma",
    "name": "Gemma",
    "provider": "ollama"
  },
  ...
]
```
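A minimal sketch that collects the available model IDs, assuming the array shape shown above:

```js
// Fetch the model list and pull out the IDs, e.g. to populate a model picker.
const models = await fetch('/api/models').then((res) => res.json());
const ids = models.map((m) => m.id); // e.g. ["llama3", "gemma", ...]
```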
## GET /api/gen

Generates a response from a model.

### Query Parameters

- `model` (string, required): The ID of the model to use.
- `prompt` (string, required): The prompt to send to the model.
- `stream` (boolean, optional): Whether to stream the response. Defaults to `true`.
### Example Request (No Stream)

```js
fetch('/api/gen?model=llama3&prompt=Hello&stream=false')
```
### Example Response (No Stream)

```json
{
  "message": "Hello! How can I help you today?",
  "stream": false,
  "request": {
    "model": "llama3",
    "prompt": "Hello"
  }
}
```
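A minimal sketch of a non-streaming call; `URLSearchParams` is used here only to keep the prompt URL-encoded and is not required by the API:

```js
// Build the query string so the prompt is URL-encoded correctly.
const params = new URLSearchParams({
  model: 'llama3',
  prompt: 'Hello',
  stream: 'false',
});
const res = await fetch(`/api/gen?${params}`);
const data = await res.json();
console.log(data.message); // "Hello! How can I help you today?"
```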
### Example Request (Stream)

```js
fetch('/api/gen?model=llama3&prompt=Hello')
```
### Example Response (Stream)

The response is streamed as JSON chunks; concatenating all chunks yields a single JSON object like this:

```json
{"stream":true,"request":{"model":"llama3","prompt":"Hello"},"message":"Hello! How can I help you today?"}
```
## GET /api/gen/raw

Generates a raw text response from a model.

### Query Parameters

- `model` (string, required): The ID of the model to use.
- `prompt` (string, required): The prompt to send to the model.
- `stream` (boolean, optional): Whether to stream the response. Defaults to `true`.
### Example Request (No Stream)

```js
fetch('/api/gen/raw?model=llama3&prompt=Hello&stream=false')
```
### Example Response (No Stream)

```text
Hello! How can I help you today?
```
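A minimal sketch, reading the raw body as plain text:

```js
// Non-streaming raw call: the whole reply arrives as one text body.
const res = await fetch('/api/gen/raw?model=llama3&prompt=Hello&stream=false');
const reply = await res.text(); // "Hello! How can I help you today?"
```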
### Example Request (Stream)

```js
fetch('/api/gen/raw?model=llama3&prompt=Hello')
```
### Example Response (Stream)

The response is a stream of plain-text chunks; concatenated, they form the complete reply:

```text
Hello! How can I help you today?
```
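A minimal sketch of consuming the raw text stream as chunks arrive; the `streamRaw` helper and `onChunk` callback are illustrative, not part of the API:

```js
// Stream raw text from /api/gen/raw and hand each chunk to a callback.
async function streamRaw(model, prompt, onChunk) {
  const params = new URLSearchParams({ model, prompt });
  const res = await fetch(`/api/gen/raw?${params}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}

// Usage: log chunks as they stream in (append to the page in a real UI).
streamRaw('llama3', 'Hello', (chunk) => console.log(chunk));
```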