
Build Your Own ChatGPT-Like App with Streamlit | by Heiko Hotz | Apr, 2023

Image by author — created with Stable Diffusion

When GPT-4 was announced on 14 March 2023, I immediately signed up for ChatGPT Plus — a paid tier within the ChatGPT application that offered access to the new model right away. It cost $20 per month and was well worth it in the beginning. However, after a few days, my usage decreased — don’t get me wrong: I still use it regularly; I just wasn’t sure whether I would use it enough to justify the cost. Then, a few days ago, I gained access to GPT-4 via OpenAI’s API, and while the new model is much more expensive than its predecessor GPT-3.5, I still think it’s probably more economical for me to interact with the API rather than using the ChatGPT app.

But I definitely wanted to keep the chat-like experience when interacting with the model. While there are already quite a few open-source apps out there that provide a slick user experience, I didn’t want anything to do with React or similar front-end frameworks — they are great for building amazing web apps, but it’s just not what I enjoy doing. Instead, I decided to build my own chat interface with Streamlit, which offers a much more basic UX and has far fewer features — but it was much more fun for me to develop my own UI from scratch (and in Python). 😃

Image by author

In this tutorial I will walk you through the application — all the code is also available in this GitHub repo.

Learning by doing

Apart from the cost aspect that I already mentioned, there are a few more advantages to building my own chat interface. First off, it forced me to study the Chat API more closely, as up until now, I had only worked with the text completion API. Using the Chat API is similar, but there are a few key differences to be aware of.
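The most important difference is that instead of a single prompt string, the Chat API takes a list of messages, each tagged with a role (system, user, or assistant). A minimal call looks roughly like this — the model name and the example messages are just placeholders, and the openai library is assumed to pick up the API key from the OPENAI_API_KEY environment variable:

import openai

# A chat completion takes a list of role-tagged messages rather than one prompt string
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Streamlit in one sentence."},
    ],
)
print(completion.choices[0].message.content)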

Independence

Second, it makes me completely independent of the ChatGPT app. Whether the app suffers a major outage or throttles how many inference requests I can send to the model (currently capped at 25 messages every 3 hours), none of that applies when I run my own app.

Data privacy

Third, data privacy. By default, ChatGPT collects data and uses it to improve the service (although it’s possible to opt out). When using the API, however, data is by default not used to train or improve the models unless we explicitly opt in. See OpenAI’s API data usage policies for more information.

So much more fun!

Finally, as previously mentioned, it is much more enjoyable to construct something like this (at least for geeks like me 🤓). I have already incorporated a few features into my app, such as displaying the number of tokens and the price per conversation. Perhaps at some point, I can expand the app to utilise other models (e.g., from Hugging Face) as well 🤗.

Let’s get this show on the road! 💪

Prerequisites

To develop this app we need to make sure that we have the packages openai, streamlit, and streamlit-chat installed:

pip install openai streamlit streamlit-chat

Keeping track of conversation history

The guide for chat completion mentions that we need to pass the conversation history to the API, so the model understands the context; in other words, we must manage the memory of the chat model, as it is not handled for us within the API. To accomplish this, we create a session state list where we store a system message at the beginning of a session and then append interactions with the model.

if 'messages' not in st.session_state:
    # Start every session with a system message that sets the assistant's behaviour
    st.session_state['messages'] = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]

def generate_response(prompt):
    # Add the user's prompt to the conversation history
    st.session_state['messages'].append({"role": "user", "content": prompt})

    # Send the full history to the Chat API and store the assistant's reply
    completion = openai.ChatCompletion.create(
        model=model,
        messages=st.session_state['messages']
    )
    response = completion.choices[0].message.content
    st.session_state['messages'].append({"role": "assistant", "content": response})
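In the full app, this function is hooked up to a simple input form, and the user prompts and model replies are also stored in two separate lists ('past' and 'generated') that the display code in the next section relies on. The following is a minimal sketch of that wiring; the container layout, widget labels, and keys are assumptions for illustration and may differ from the repo:

# Containers for the chat history and the input box (assumed layout)
response_container = st.container()
input_container = st.container()

# Parallel lists holding the user inputs and the model replies for display
if 'past' not in st.session_state:
    st.session_state['past'] = []
if 'generated' not in st.session_state:
    st.session_state['generated'] = []

with input_container:
    with st.form(key='input_form', clear_on_submit=True):
        user_input = st.text_area("You:", key='input', height=100)
        submitted = st.form_submit_button(label='Send')

    if submitted and user_input:
        generate_response(user_input)
        # The assistant's reply is the last message appended by generate_response
        reply = st.session_state['messages'][-1]['content']
        st.session_state['past'].append(user_input)
        st.session_state['generated'].append(reply)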

Displaying the conversation

To display the conversation, we leverage the message function from the streamlit-chat package. We iterate over the stored interactions and show the conversation chronologically, with the oldest exchange at the top (just like in ChatGPT).

from streamlit_chat import message

if st.session_state['generated']:
    with response_container:
        for i in range(len(st.session_state['generated'])):
            message(st.session_state["past"][i], is_user=True, key=str(i) + '_user')
            message(st.session_state["generated"][i], key=str(i))

Printing additional information

An additional feature that I thought could be useful is to print some metadata for each interaction. To that end, we can, for example, print the model that was used (which can change from one interaction to the next), how many tokens were used for this particular interaction, and its cost (according to OpenAI’s pricing page).

# Token usage reported by the API for this request
total_tokens = completion.usage.total_tokens
prompt_tokens = completion.usage.prompt_tokens
completion_tokens = completion.usage.completion_tokens

# Prices per 1,000 tokens, as listed on OpenAI's pricing page at the time of writing
if model_name == "GPT-3.5":
    cost = total_tokens * 0.002 / 1000
else:
    cost = (prompt_tokens * 0.03 + completion_tokens * 0.06) / 1000

st.write(
    f"Model used: {st.session_state['model_name'][i]}; "
    f"Number of tokens: {st.session_state['total_tokens'][i]}; "
    f"Cost: ${st.session_state['cost'][i]:.5f}"
)

Image by author

Please note that the number of tokens (and consequently the price) will increase as the conversation gets longer. This is because we need to submit all previous questions and answers so that the model understands the context of the interaction.

To save money, it is therefore advisable to clear the conversation when starting a new topic with the chatbot.

Sidebar

In the sidebar, we offer the option to switch models and clear the conversation history. Additionally, we can display the accumulated costs of the current conversation:

Image by author
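A minimal sketch of such a sidebar might look like the following; the widget labels, keys, and the running cost counter are illustrative assumptions rather than the exact code from the repo:

st.sidebar.title("Settings")

# Model selector -- the friendly name maps to the actual API model id
model_name = st.sidebar.radio("Choose a model:", ("GPT-3.5", "GPT-4"))
model = "gpt-3.5-turbo" if model_name == "GPT-3.5" else "gpt-4"

# Running total of what the current conversation has cost so far
if 'total_cost' not in st.session_state:
    st.session_state['total_cost'] = 0.0
counter_placeholder = st.sidebar.empty()
counter_placeholder.write(f"Total cost of this conversation: ${st.session_state['total_cost']:.5f}")

# Reset everything when starting a new topic
if st.sidebar.button("Clear Conversation", key="clear"):
    st.session_state['messages'] = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]
    st.session_state['past'] = []
    st.session_state['generated'] = []
    st.session_state['total_cost'] = 0.0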

By following these steps, we have successfully developed an easy-to-use and customisable chat interface that allows us to interact with GPT-based models without relying on apps like ChatGPT. We can now run the application with the following command:

streamlit run app.py

How this changed my workflow

I have now actually canceled my subscription to ChatGPT Plus, and I’m exclusively using my app to interact with the GPT models. By default, I use the GPT-3.5 model, which makes it really affordable to use these models. Only for more sophisticated tasks or when I’m not completely satisfied with the results of GPT-3.5 will I switch to GPT-4. Most likely, I will continue adding features to the app over time, as this is what I enjoy doing most — so stay tuned for future updates 😊

Further improvement ideas

I hope this was helpful — please go ahead and build your own chat UI using this tutorial as a starting point. I’m curious to learn what you are building, so please reach out in the comments. Here are some ideas on how this app could be improved, to get you started:

Happy coding!

👋 Follow me on Medium and LinkedIn to read more about Generative AI, Machine Learning, and Natural Language Processing.

👥 If you’re based in London join one of our NLP London Meetups.

