I have written and maintained AI proxies. They are not terribly complex, except for the inconsistent structure of inputs and outputs, which changes with each model and provider release. I figure that if there is not a <24-hour turnaround for new model integration, the project is not properly maintained.
Governance is the biggest concern at this point: proper logging, plus integration with third-party services that provide inspection and DLP-style threat mitigation.
Another reason Go is interesting for the gateway is having clear control of the supply chain at compile time. With tools like LiteLLM, supply-chain attacks can have more impact at runtime, whereas a compiled binary helps.
How do you plan on keeping up with upstream changes from the API providers? I have implemented something similar, and the biggest issue I have faced with Go is that providers don't usually have SDKs (compared to JavaScript and Python), so there is work involved in staying up to date with each release.
Does this have a unified API? Playing around with some of these, including unified libraries that work with various providers, I've found you are, at some point, still forced to do provider-specific work for things such as setting temperature, reasoning effort, and tool choice modes.
What I'd like is for a proxy or library to provide a truly unified API where it will really let me integrate once and then never have to bother with provider quirks myself.
Also, are you planning on doing an open-source rug pull like so many projects out there, including litellm?
Are these kinds of libraries a temporary phenomenon? It strikes me as weird that providers haven't settled on a single API by now. Of course they aren't interested in making it easier for customers to switch away from them, but if a proprietary API was a critical part of your business plan, you probably weren't going to make it anyway.
(I'm asking only about the compatibility layer; the other tracking features would be useful even if there were only one cloud LLM API.)
I've been maintaining an abstraction layer over multiple providers for a couple of years now - https://llm.datasette.io/
The best effort we have at defining a standard is OpenAI Harmony/Responses - https://developers.openai.com/cookbook/articles/openai-harmo... - but it hasn't seen much pickup. The older OpenAI Chat Completions API is much more of an ad-hoc standard: almost every provider ends up serving a clone of it, albeit with frustrating differences, because there's no formal spec to work against.
The key problem is that providers are still inventing new stuff, so committing to a standard doesn't work for them because it may not cover the next set of features.
2025 was particularly turbulent because everyone was adding reasoning mechanisms to their APIs in subtly different shapes. Tool calls and response schemas (which are confusingly not always the same thing) have also had a lot of variance - some providers allow for multiple tool calls in the same response, for example.
My hunch is we'll need abstraction layers for quite a while longer, because the shape of these APIs is still too frothy to support a standard that everyone can get behind without restricting their options for future products too much.
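For reference, the ad-hoc Chat Completions shape mentioned above is tiny at its core, which is probably why everyone clones it. A minimal sketch of building that request body in Go (the model name is a placeholder; real providers add many optional fields on top of this):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// The minimal Chat-Completions-shaped body that most "compatible"
// providers accept: a model name plus a list of role/content messages.
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatBody struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

func buildBody(model, prompt string) ([]byte, error) {
	return json.Marshal(chatBody{
		Model:    model,
		Messages: []message{{Role: "user", Content: prompt}},
	})
}

func main() {
	b, _ := buildBody("some-model", "hi")
	fmt.Println(string(b))
}
```

The frustrating differences live almost entirely in the optional fields (reasoning, tools, response schemas), not in this core.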
The providers themselves can't keep this straight even within their own ecosystem. Plus, everyone is running at a million miles an hour.
For example, Claude Code used to set two specific beta headers with version numbers for its Max subscription to be supported.
OAuth tokens for the Max plan look different from API keys: they look superficially similar, but carry a specific prefix that these tools pre-validate.
It is barely working at this point, even within a single provider.
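The prefix pre-validation described above can be sketched like this in Go. The prefixes below are illustrative only; the real key formats are undocumented, provider-specific, and can change at any time, which is exactly why this kind of check is brittle:

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative prefixes only; actual credential formats may differ and change.
var knownPrefixes = map[string]string{
	"sk-ant-api": "api-key", // direct API key
	"sk-ant-oat": "oauth",   // OAuth token from a subscription login
}

// classifyCredential mimics the kind of pre-validation tools do before
// deciding which auth flow (API billing vs. subscription) a token uses.
func classifyCredential(tok string) string {
	for prefix, kind := range knownPrefixes {
		if strings.HasPrefix(tok, prefix) {
			return kind
		}
	}
	return "unknown"
}

func main() {
	fmt.Println(classifyCredential("sk-ant-oat01-example"))
}
```

Any gateway sitting in front of these tools has to reproduce (or pass through) such checks, and they break silently when the provider changes the format.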
This is way more interesting to me as well. I have projects that use small limited-purpose language models that run on local network servers and something like this project would be a lot simpler than manually configuring API clients for each model in each project.
Curious how the semantic caching layer works. Are you embedding requests on the gateway side and doing a vector similarity lookup before proxying? And if so, how do you handle cache invalidation when the underlying model changes or gets updated?
Hey, contributor here. That's right, GoModel embeds requests and does vector similarity lookup before proxying. Regarding the cache invalidation, there is no "purging" involved – the model is part of the namespace (params_hash includes the LLM model, path, guardrails hash, etc). TTL takes care of the cleanup later.
You are not the first person who has asked about it.
It looks like a useful feature to have. Therefore, I'll dig into this topic more broadly over the next few days and let you know here whether, and possibly when, we plan to add it.
This is really useful. I've been building an AI platform (HOCKS AI) where I route different tasks to different providers — free OpenRouter models for chat/code gen, Gemini for vision tasks. The biggest pain point has been exactly what you describe: switching models without changing app code.
One thing I'd love to see is built-in cost tracking per model/route. When you're mixing free and paid models, knowing exactly where your spend goes is critical. Do you have plans for that in the dashboard?
It's a heavily vibe-coded project: just a proxy, with terribly designed benchmarks. The vibe-coded benchmarks lie through ignorance, hitting a mocked, super-fast endpoint without using the full power of LiteLLM across multiple processes.
Too bad so many people fall for it.
Other than that, the "it's faster" claim is almost useless, since this workload will be IO-bound, not CPU-bound.
I'm all in on Go and integrating AI up and down our systems for https://housecat.com/ and am currently familiar and happy with:
https://github.com/boldsoftware/shelley -- full Go-based coding agent with LLM gateway.
https://github.com/maragudk/gai -- provides Go interfaces around Anthropic / OpenAI / Google.
Adding this to the list as well as bifrost to look into.
Any other Go-based AI / LLM tools folks are happy with?
I'll second the request to add support for harnesses with subscriptions, specifically Claude Code, into the mix.
> Governance is the biggest concern at this point: proper logging, plus integration with third-party services that provide inspection and DLP-style threat mitigation.
https://sbproxy.dev - engine is fully open source.
> Also, are you planning on doing an open-source rug pull like so many projects out there, including litellm?
2. Regarding being open-source and the license, I've described our approach here transparently: https://gomodel.enterpilot.io/docs/about/license
> For example, Claude Code used to set two specific beta headers with version numbers for its Max subscription to be supported. OAuth tokens for the Max plan look different from API keys: they look superficially similar, but carry a specific prefix that these tools pre-validate. It is barely working at this point, even within a single provider.
However kudos for the project, we need more alternatives in compiled languages.
> One thing I'd love to see is built-in cost tracking per model/route. When you're mixing free and paid models, knowing exactly where your spend goes is critical. Do you have plans for that in the dashboard?
However IIUC what you're asking for - it's already in the dashboard! Check the Usage page.
Are there even any benchmarks?
It's more lightweight and simpler. The Bifrost docker image looks 4x larger, at least for now.
IMO GoModel is more convenient for debugging and for seeing how your request flows through different layers of AI Gateways in the Audit Logs.