These are the changes since v1.61.20-stable.
This release is primarily focused on:
- LLM Translation improvements (more `thinking` content improvements)
- UI improvements (error logs are now shown on the UI)
Info: This release will be live on 03/09/2025.
Demo Instance
Here's a demo instance to test the changes:
- Instance: https://demo.litellm.ai/
- Login credentials:
  - Username: admin
  - Password: sk-1234
 
New Models / Updated Models
- Add `supports_pdf_input` for specific Bedrock Claude models PR
- Add pricing for Amazon `eu` models PR
- Fix Azure o1-mini pricing PR
LLM Translation
- Support /openai/passthrough for Assistant endpoints. Get Started
- Bedrock Claude - fix tool calling transformation on invoke route. Get Started
- Bedrock Claude - response_format support for claude on invoke route. Get Started
- Bedrock - pass descriptionif set in response_format. Get Started
- Bedrock - Fix passing response_format: {"type": "text"}. PR
- OpenAI - Handle sending image_url as str to openai. Get Started
- Deepseek - return 'reasoning_content' missing on streaming. Get Started
- Caching - Support caching on reasoning content. Get Started
- Bedrock - handle thinking blocks in assistant message. Get Started
- Anthropic - Return signatureon streaming. Get Started
- Note: We've also migrated from signature_deltatosignature. Read more
- Support format param for specifying image type. Get Started
- Anthropic - /v1/messagesendpoint -thinkingparam support. Get Started
- Note: this refactors the [BETA] unified /v1/messagesendpoint, to just work for the Anthropic API.
- Vertex AI - handle $id in response schema when calling vertex ai. Get Started
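As an illustration of the new `thinking` param on the `/v1/messages` endpoint, a request body could look like the sketch below. The model name, token budget, and prompt are placeholder assumptions; the field shape follows the Anthropic Messages API.

```python
import json

# Sketch of a /v1/messages request body using the `thinking` param.
# Model name and budget_tokens are illustrative placeholders.
payload = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 2048,
    # Enables extended thinking; budget_tokens caps the tokens spent on reasoning.
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    "messages": [{"role": "user", "content": "Summarize the release notes."}],
}

body = json.dumps(payload)
```

On streaming responses, the thinking blocks now carry `signature` (rather than the old `signature_delta`), per the note above.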
Spend Tracking Improvements
- Batches API - fix cost calculation to run on `retrieve_batch`. Get Started
- Batches API - log batch models in spend logs / standard logging payload. Get Started
Management Endpoints / UI
- Virtual Keys Page
  - Allow team/org filters to be searchable on the Create Key page
  - Add `created_by` and `updated_by` fields to the Keys table
  - Show `user_email` on the keys table
  - Show 100 keys per page, use full height, and increase the width of the key alias column
 
- Logs Page
  - Show error logs on the LiteLLM UI
  - Allow internal users to view their own logs
 
- Internal Users Page - Allow admins to control default model access for internal users
 
- Fix session handling with cookies
Logging / Guardrail Integrations
- Fix Prometheus metrics with custom metrics when keys containing a `team_id` make requests. PR
Performance / Load Balancing / Reliability Improvements
- Cooldowns - support cooldowns on models called with client-side credentials. Get Started
- Tag-based Routing - ensure tag-based routing works across all endpoints (`/embeddings`, `/image_generation`, etc.). Get Started
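A minimal proxy config sketch for tag-based routing. The key names here (`tags` under `litellm_params`, `enable_tag_filtering` under `router_settings`) are taken from the tag-routing docs but may differ across versions, so verify against your LiteLLM release:

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
      tags: ["teamA"]  # requests tagged "teamA" route to this deployment

router_settings:
  enable_tag_filtering: True  # enforce tag matching when routing
```

With this release, the tag filter applies to `/embeddings`, `/image_generation`, and the other endpoints as well, not just chat completions.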
General Proxy Improvements
- Raise `BadRequestError` when an unknown model is passed in a request
- Enforce model access restrictions on the Azure OpenAI proxy route
- Reliability fix - handle emojis in text - fix `orjson` error
- Model access patch - don't overwrite `litellm.anthropic_models` when running auth checks
- Enable setting timezone information in the Docker image
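For the timezone setting, one approach (an assumption on our part: the standard `TZ` environment variable, which takes effect when tzdata is available in the image) is a Docker Compose fragment like:

```yaml
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest  # illustrative tag
    environment:
      - TZ=America/New_York  # sets the container's local timezone, e.g. for log timestamps
```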