This project focuses on developing an order bot for a pizza restaurant using the Gemini 1.5 Flash LLM. Our goal was to create a chatbot that efficiently streamlines customer orders through a user-friendly GUI, which we built using the open-source Python library Panel. To implement this solution, we utilized the free tier of the Gemini API, ensuring cost-effectiveness while maintaining functionality.
A chatbot project helps solidify our understanding of a few key Gen-AI concepts, and it is fun to build!
Major Implementation Aspects
LLM API
- The Gemini API key can be obtained here.
- To get started with the Gemini API, refer to the Gemini API Quickstart Guide.
- Gemini’s ChatSession class supports multi-turn, freeform conversations, enabling dynamic interactions with the chatbot.
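As a minimal sketch of how such a session might be created (assuming the google-generativeai package is installed; the system instruction text here is purely illustrative):

```python
def start_order_chat(api_key: str):
    """Create a multi-turn chat with gemini-1.5-flash.

    Assumes the google-generativeai package is installed; the system
    instruction text is only an illustration.
    """
    import google.generativeai as genai  # deferred import: sketch only

    genai.configure(api_key=api_key)
    model = genai.GenerativeModel(
        "gemini-1.5-flash",
        system_instruction="You are OrderBot, taking orders for a pizzeria.",
    )
    # The returned ChatSession keeps the running history, so every
    # subsequent send_message call sees the whole conversation so far.
    return model.start_chat(history=[])

# Example usage (requires a valid key):
# chat = start_order_chat("YOUR_API_KEY")
# print(chat.send_message("Hi, I'd like a pizza.").text)
```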
Chat GUI
- The ‘ChatInterface’ component of the Panel library was used to build the chatbot’s graphical user interface (GUI).
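A rough sketch of the wiring, assuming the panel package is installed (the callback signature follows Panel's ChatInterface API; the function names are made up for this example):

```python
def build_chat_ui(respond):
    """Wrap a response function in Panel's ChatInterface.

    Assumes the panel package is installed; `respond` is any function
    taking the user's message text and returning the bot's reply.
    """
    import panel as pn  # deferred import: sketch only

    pn.extension()

    def callback(contents: str, user: str, instance):
        # Panel invokes this for each new user message; the return
        # value is rendered as the bot's reply in the chat window.
        return respond(contents)

    return pn.chat.ChatInterface(callback=callback, callback_user="OrderBot")
```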
Problem Statement
Build a pizzeria chatbot to facilitate order taking using the Gemini 1.5 Flash model and design its GUI with the Panel library.
Tasks
- Configuration: Import the necessary LLM libraries and configure the API key.
- Prompt Engineering: Prepare detailed system instructions to steer the model’s behavior.
- Developing a Function for Interactions: Create a function that receives prompts, passes them to the LLM, and returns the response while ensuring that the chat context is preserved. This can be achieved using the ChatSession class of the Gemini API.
- Building the GUI: Use the ChatInterface component from Panel to build the chatbot’s graphical interface.
- App Deployment: Deploy the application both locally and on a web server.
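The interaction function from the task list can be sketched as follows. Here a stub stands in for Gemini's ChatSession so the flow runs end to end; the stub class names are invented for this example, and in the real app `chat` would be an actual ChatSession:

```python
class _StubResponse:
    def __init__(self, text):
        self.text = text


class _StubChat:
    """Stands in for Gemini's ChatSession; records history like the real one."""

    def __init__(self):
        self.history = []

    def send_message(self, prompt):
        self.history.append(prompt)
        return _StubResponse(f"(bot reply to: {prompt})")


def get_response(chat, prompt: str) -> str:
    # The chat object carries the conversation state, so context is
    # preserved across calls without any extra bookkeeping here.
    return chat.send_message(prompt).text


chat = _StubChat()
first = get_response(chat, "One large margherita, please.")
second = get_response(chat, "Add extra cheese.")
```

Because the session object itself accumulates the history, the GUI callback only needs to forward each new message and display the returned text.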
More information on each project task can be found in the Jupyter notebook preview.
Code
Github Repository: link
Chatbot in Action
Here is a video demonstrating our orderbot in action:
I deployed the chatbot using Ploomber; however, the link may currently be unavailable due to limitations of the free tier.
Cloud Deployment Options - Good Resources
Key Takeaways
Importance of Prompt Engineering: Providing detailed and clear instructions significantly improves LLM performance. For example, explicit instructions helped the Gemini model perform comparably to the OpenAI model by reducing errors in mathematical calculations.
Performance Comparison: A comparison between the paid OpenAI model, “gpt-3.5-turbo” (code available here), and the free-tier Gemini model, “gemini-1.5-flash,” showed that the OpenAI model generally delivered clearer and more accurate responses. However, the Gemini model’s performance improved with more detailed instructions.
Tailor-Made Chat Interfaces: We used Panel in our project to build a customizable chat interface. Additionally, other packages and libraries, such as Streamlit and Flask, are also available for creating chat interfaces.
Deployment Options and Costs: Different deployment services, such as Amazon AWS, Google App Engine, Ploomber, and Hugging Face, come with varying costs, which should be considered when choosing a platform.
Challenges Encountered
Cost of OpenAI API: As the OpenAI API is no longer available for free, we opted for the Gemini LLM API. However, the transition presented its own set of challenges, such as requiring more detailed instructions and inconsistent performance in some tasks.
Inconsistent Mathematical Calculations: When asked to re-check a calculation, the model may produce a different total than previously stated.
- Solution: To ensure accuracy, explicitly instruct the model to calculate the sum of all ordered items’ prices when computing the customer’s total bill.
Need for Extensive Instructions: The Gemini model required more detailed instructions compared to the OpenAI API. For example:
- Instruct the chatbot to calculate the total bill by summing up the prices of all ordered items.
- Instruct the chatbot not to suggest dishes not listed in the provided menu.
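For illustration, these extra rules might be folded into a single system instruction like the following (the menu items, prices, and exact wording are made up for this sketch):

```python
# Hypothetical menu for the sketch; the real project uses its own menu.
MENU = {"margherita": 9.50, "pepperoni": 11.00, "garlic bread": 4.25}

menu_lines = "\n".join(f"- {item}: ${price:.2f}" for item, price in MENU.items())

SYSTEM_INSTRUCTION = f"""You are OrderBot for a pizzeria.
Menu (the ONLY items you may offer):
{menu_lines}

Rules:
- Never suggest or accept dishes that are not on the menu above.
- After every pizza order, ask whether the customer wants toppings.
- Compute the total bill by summing the price of each ordered item;
  show the itemized sum before stating the total.
"""
```

Spelling out the arithmetic and the menu restriction in this way is what reduced the calculation and off-menu errors described above.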
Missed Prompts for Toppings: Despite instructions to always inquire about toppings after a pizza order is placed, the chatbot occasionally fails to do so.
Warning Message from Gemini Model: A warning message appeared when adding a system instruction during model initialization.
- Solution: The warning was resolved by installing the package grpcio==1.60.1.
Additional Information
Extensions - Ideas
- Add memory to the bot
- Use case: Save customers’ orders so that they can be retrieved when a customer asks.
- Edited: You can see my implementation of a chatbot with memory capability here.
- Create a book-suggestion bot based on the reader’s reading history, genres of interest, and current best-sellers.
If you have suggestions or would like to collaborate, please drop me an email.