Prompt Management Systems
I found the following 10 prompt managers. I’m sure there are more, but this is what I’ve turned up so far.
https://www.prompthub.us 🟠
https://agenta.ai 🟠
https://promptlayer.com 🟠
https://portkey.ai ✅
https://prst.ai ❌
https://www.promptground.io ❌
https://manageprompt.com ✅
https://www.prompteams.com 🟠
https://langfuse.com ✅
https://www.wordware.ai 🟠
Of the 10 that I found, I was not able to get a single one set up correctly on the first try. Some I was unable to figure out altogether (❌), some I’m still figuring out (🟠), and the rest I got running after working with their support teams (✅). I thought I’d speed-run you through the setup for each, since every one of these platforms uses different conventions.
My app is written in TypeScript. I’m using Next.js, so it’s JavaScript on the front and back end. Everything connected to my OpenAI call is contained in a single file, which exports a function that I then call in the app.
This is as close to the textbook chat-completion example from the OpenAI documentation as I could get. The only difference is that I’m setting the response format to JSON.
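For reference, a minimal sketch of that kind of file. I’ve written it against the REST endpoint directly to keep the sketch dependency-free (the SDK call has the same shape); the file name, function names, model, and system prompt here are placeholders, not my actual code.

```typescript
// lib/completion.ts — a sketch; names, model, and prompt text are placeholders.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build a textbook chat-completion payload, with response_format set to JSON.
export function buildRequestBody(userInput: string) {
  const messages: ChatMessage[] = [
    { role: "system", content: "You are a helpful assistant. Reply in JSON." },
    { role: "user", content: userInput },
  ];
  return {
    model: "gpt-4o",
    response_format: { type: "json_object" },
    messages,
  };
}

// The exported function the rest of the app calls.
export async function getCompletion(userInput: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildRequestBody(userInput)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```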
Setting Up Portkey
I’m going to run through Portkey first because their support team got back to me the fastest and this was the first tool I managed to get working.
Swapping the OpenAI implementation for Portkey’s was fairly straightforward. There’s an npm package you install, you get an API key, and then you use their JavaScript client, which works a lot like OpenAI’s (except that you reference a prompt ID for a prompt you created on their dashboard).
There’s a ‘Prompts’ tab on the left of their app with a playground for creating prompts. Using the playground requires adding your own OpenAI keys to the playground as ‘virtual keys’. When you publish your prompts, an API tab appears at the top of the screen containing the API and implementation details that you can copy and paste.
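The published prompt can then be called by ID. Their SDK mirrors OpenAI’s client, but the same call can be sketched against their REST API like this; the endpoint path, header name, and payload shape below are my reading of their docs, so treat them as assumptions rather than a definitive implementation.

```typescript
// Hypothetical sketch of calling a published Portkey prompt by its ID.
// Endpoint path and header name are assumptions, not verified details.
export function buildPortkeyRequest(promptId: string, variables: Record<string, string>) {
  return {
    url: `https://api.portkey.ai/v1/prompts/${promptId}/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-portkey-api-key": process.env.PORTKEY_API_KEY ?? "",
      },
      // Template variables defined in the dashboard prompt get filled in here.
      body: JSON.stringify({ variables }),
    },
  };
}

export async function callPortkeyPrompt(promptId: string, variables: Record<string, string>) {
  const { url, init } = buildPortkeyRequest(promptId, variables);
  const res = await fetch(url, init);
  return res.json(); // an OpenAI-shaped completion response
}
```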
Portkey gives you 10K requests per month on their free plan, and then it costs $99 a month for up to 1 million requests.
On the plus side, Portkey has Node, Python, and cURL implementations. This was also the only prompt manager I tried that allowed me to set the response format to JSON.
On the other hand, the reason I couldn’t get this working on the first try is that all of the API implementation information only gets revealed after you publish a prompt. It would have been clearer if the same information were available in their documentation, so that I could see how everything works before committing to it in my app.
Getting ManagePrompt to Work
The ManagePrompt documentation has a Node client, but I could not get it to work despite speaking to their support team. However, they also let you access prompts with a POST request, so I used that instead.
In ManagePrompt, each prompt is called a ‘workflow’. Unfortunately I was unable to set the system prompt through any of the options, nor was I able to change the response format to JSON, so I couldn’t use this tool in the end.
When published, each workflow gives you a separate URL you can call with a POST request, which returns the text completion as a string. You also get some basic analytics and a mini playground where you can test the prompt with different inputs.
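That POST flow is simple enough to sketch. The workflow URL is the one you get when you publish, so I’ve left it as a parameter, and the payload shape is an assumption on my part:

```typescript
// Calling a published ManagePrompt workflow. The per-workflow URL comes from
// their dashboard after publishing; the JSON payload shape is an assumption.
export async function runWorkflow(
  workflowUrl: string, // the URL shown after publishing the workflow
  input: Record<string, string>,
): Promise<string> {
  const res = await fetch(workflowUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(input),
  });
  // The completion comes back as a plain string rather than a JSON envelope.
  return res.text();
}
```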
ManagePrompt’s pricing comes in at $0.10 per 1K tokens. You never share your OpenAI keys with ManagePrompt; they cover your production calls and the playground calls. However, they offer several different models from several different LLM providers, so I don’t think charging a single price per token regardless of the model is fair.
Configuring Langfuse
Langfuse takes quite a different approach to prompt management from the other examples. Langfuse does not call OpenAI directly; it only stores your prompt text. Calling Langfuse returns your prompt text, which you then pass into OpenAI or any other LLM you have implemented.
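The two-step flow looks roughly like this. The compilePrompt helper is my stand-in for what their SDK’s compile step does with {{variable}} placeholders, and the function names are mine, not theirs:

```typescript
// Sketch of Langfuse's two-step flow: fetch the stored prompt text, fill in
// variables, then hand the result to whatever LLM call you already have.

// Stand-in for the SDK's compile step: substitute {{variable}} placeholders.
export function compilePrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => vars[key] ?? "");
}

// Glue the steps together. getPromptText would wrap the Langfuse SDK call;
// callLLM is your existing OpenAI function.
export async function completeWithStoredPrompt(
  getPromptText: () => Promise<string>,
  callLLM: (prompt: string) => Promise<string>,
  vars: Record<string, string>,
): Promise<string> {
  const template = await getPromptText();
  return callLLM(compilePrompt(template, vars));
}
```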
Personally, I found the Langfuse app experience a bit technical and overwhelming. The app is focused on providing analytics for your LLM calls, so prompt management feels more like an additional feature. There is a ‘Prompts’ tab on the left side panel of their app that lets you store as many prompts as you want.
They don’t have a playground in the prompt manager but they do have a separate tab with a playground that you can add your own OpenAI keys to.
On the upside, since Langfuse is focused on analytics, their pricing model doesn’t cover calling prompts, and as I understand it the product is free if you’re just using it as a prompt manager.
Trying Promptground
Promptground is another prompt manager that takes the same approach as Langfuse: they just return prompt text and never call OpenAI directly. Their interface was much simpler to understand.
Despite it being relatively easy to set up, I decided not to use Promptground because I couldn’t contact anyone to ask questions about the app. There were no support contact details in the app, the sign-up email came from a no-reply address, and the Terms of Service and Privacy Notice on the sign-up page were empty anchor links. Understandably, I did not feel comfortable storing my prompts on their system.
I’m still working on figuring out how to set up the other tools I found and I will update this post once I’ve figured them out.