- **Unified API Signature**: If you've used OpenAI, you already know how to use Portkey with any other provider.
- **Interoperability**: Write once, run with any provider. Switch between models from any provider seamlessly.
- **Automated Fallbacks & Retries**: Ensure your application remains functional even if a primary service fails.
- **Load Balancing**: Efficiently distribute incoming requests among multiple models.
- **Semantic Caching**: Reduce costs and latency by intelligently caching results.
- **Virtual Keys**: Secure your LLM API keys by storing them in Portkey's vault and using disposable virtual keys.
- **Request Timeouts**: Manage unpredictable LLM latencies by setting custom timeouts on requests.
- **Request Tracing**: Understand the journey of each request for optimization.
- **Custom Metadata**: Segment and categorize requests for better insights.
- **Feedback**: Collect and analyze weighted feedback on requests from users.
- **Analytics**: Track your app & LLM's performance with 40+ production-critical metrics in a single place.
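Features like fallbacks and load balancing are driven by a gateway config rather than code changes. As a rough sketch of the idea, a fallback setup lists an ordered set of targets, each pointing at a virtual key (the exact schema and key names below are illustrative, not a definitive reference; check Portkey's config documentation for the current spec):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "openai-virtual-key" },
    { "virtual_key": "anthropic-virtual-key" }
  ]
}
```

If the first target fails, the gateway retries the request against the next one, so your application code stays unchanged.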
- Sign up on Portkey and grab your Portkey API Key
- Add your OpenAI key to Portkey's Virtual Keys page and keep it handy
# Installing the SDK
```sh
$ npm install portkey-ai
$ export PORTKEY_API_KEY="PORTKEY_API_KEY"
```
- Portkey fully adheres to the OpenAI SDK signature. You can instantly switch to Portkey and start using our production features right out of the box.
- Just replace `import OpenAI from 'openai'` with `import Portkey from 'portkey-ai'`:
```ts
import Portkey from 'portkey-ai';

// Your Portkey API key is read from the PORTKEY_API_KEY environment variable
const portkey = new Portkey({
  virtualKey: "VIRTUAL_KEY" // the virtual key for your provider from Portkey's vault
});

async function main() {
  const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-4',
  });
  console.log(chatCompletion.choices);
}

main();
```
Get started by checking out the GitHub issues. Email us at [email protected], or ping us on Discord to chat.