Biomni – Open Responses SDK
For the open-source Biomni release (Apache 2.0).
This document shows how to call Biomni through the OpenAI Python SDK's Responses API.
Prerequisites
- Sign up for a 20minds account
- Create an API key in settings
- Install Python 3.10+ and the OpenAI Python SDK (tested with 2.15.0)
Install the SDK:
```shell
pip install openai==2.15.0
```
Configure the client
Biomni uses an API key passed via the OpenAI SDK `api_key` field.

```python
from openai import OpenAI

TWENTYMINDS_BASE_URL = "https://api.20minds.ai/v1/biomni"
TWENTYMINDS_API_KEY = "your-api-key"

client = OpenAI(
    base_url=TWENTYMINDS_BASE_URL,
    api_key=TWENTYMINDS_API_KEY,
)
```
Create a response
`model` is optional and passed through to the Biomni A1 session (`gpt-5-mini` by default).

```python
response = client.responses.create(
    input=[
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "Suggest a biomarker panel for early-stage pancreatic cancer and explain why.",
                }
            ],
        }
    ],
    model="gpt-5-mini",
)

print(response.id)
print(response.status)
print(response.metadata.get("biomni_session_id"))
```
Poll for completion
```python
import time

resp = client.responses.retrieve(response_id=response.id)
while resp.status == "running":
    time.sleep(5)
    resp = client.responses.retrieve(response_id=response.id)

print(resp.status)
print(resp.output)
print(resp.usage)
```
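The loop above polls forever if a response never leaves the `running` state. A minimal sketch of a polling helper with a timeout — the helper name and parameters are our own, not part of the SDK; `fetch` stands in for a call like `client.responses.retrieve(response_id=...)`:

```python
import time

def poll_until_done(fetch, interval=5.0, timeout=600.0, sleep=time.sleep):
    """Call fetch() until its result leaves the "running" status or the
    timeout elapses. `sleep` is injectable so the helper is testable."""
    deadline = time.monotonic() + timeout
    resp = fetch()
    while resp.status == "running":
        if time.monotonic() > deadline:
            raise TimeoutError("Biomni response did not finish within timeout")
        sleep(interval)
        resp = fetch()
    return resp
```

Usage would be `resp = poll_until_done(lambda: client.responses.retrieve(response_id=response.id))`.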
Stream events
Biomni supports `stream=True` and emits OpenAI-style SSE events (for example `response.output_text.delta`, `response.code_interpreter.*`, and `response.completed`).

```python
with client.responses.stream(
    input=[{
        "role": "user",
        "content": [{"type": "input_text", "text": "Run a quick analysis and show intermediate reasoning outputs."}],
    }],
) as stream:
    for event in stream:
        print(event.type)
```
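A common pattern is to accumulate the `response.output_text.delta` events into the final answer text. A minimal sketch, assuming each delta event carries the text fragment in a `delta` attribute (other event types are ignored):

```python
from types import SimpleNamespace

def collect_output_text(events):
    """Join the text fragments from output_text delta events."""
    parts = []
    for event in events:
        if event.type == "response.output_text.delta":
            parts.append(event.delta)
    return "".join(parts)
```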
Continue a conversation
Use `previous_response_id` to continue the same Biomni session.

```python
followup = client.responses.create(
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "Now summarize that in 5 bullet points."}
            ],
        }
    ],
    previous_response_id=response.id,
)
```
Delete a response
```python
client.responses.delete(response_id=response.id)
```

Deleting a response also deletes its underlying Biomni session.
Response shape (summary)
A completed response contains conversation items in `output`:

```python
for item in resp.output:
    print(item["type"])
```
Notes:
- `output` includes assistant `message` items with `output_text`.
- If Biomni returns `<solution>...</solution>`, the server adds a `function_call` item named `final_answer`.
- `usage` is normalized to OpenAI-style token fields and includes `usage_usd`.
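Based on the notes above, a client can pull the `final_answer` out of `resp.output` like this — a sketch only, assuming the items are dicts with OpenAI-style `"type"`, `"name"`, and `"arguments"` keys:

```python
def find_final_answer(output):
    """Return the arguments of the final_answer function_call item, or None."""
    for item in output:
        if item.get("type") == "function_call" and item.get("name") == "final_answer":
            return item.get("arguments")
    return None
```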
File uploads
Biomni exposes OpenAI-style file endpoints under the same base URL.
```python
from typing import Literal

import requests
from openai import BaseModel


class FileCreate(BaseModel):
    id: str
    response_id: str
    object: Literal["file"]
    filename: str
    status: str
    upload_url: str
    upload_expires_at: str
    created_at: str


# Register the file and get a presigned upload URL.
file_record = client.post(
    "/files",
    body={"filename": "paper.pdf"},
    cast_to=FileCreate,
)

# Upload the file contents to the presigned URL.
upload_response = requests.put(
    file_record.upload_url,
    data=requests.get("https://example.com/paper.pdf").content,
)
upload_response.raise_for_status()

# Reference the uploaded file in a new request.
response = client.responses.create(
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "Summarize the uploaded report."},
            {"type": "input_file", "file_id": file_record.id},
        ],
    }],
    previous_response_id=file_record.response_id,
)
```
`upload_url` expires after 3 minutes.

Reference
- Biomni preprint: https://www.biorxiv.org/content/10.1101/2025.05.30.656746v1