Introduction:
Overview
Welcome to this comprehensive tutorial on building an interactive Streamlit application that leverages the power of MindsDB and Ollama. In this guide, I will walk you through the entire process, from setting up your development environment to deploying a fully functional application. By the end of this tutorial, you will have a solid understanding of how to integrate advanced machine learning models into a user-friendly web interface.
You can download the complete code from GitHub.
What You Will Learn
Throughout this tutorial, you will learn how to:
- Set Up Your Development Environment: Install necessary packages and configure environment variables to connect with MindsDB and Ollama.
- Build a Streamlit Interface: Create a sidebar for settings and user input handling.
- Implement Chat Functionality: Enable real-time communication with MindsDB and Ollama to generate responses based on user input.
- Finalize and Deploy Your Application: Test the application thoroughly and deploy it to a cloud service for easy access and use.
Prerequisites
Before you begin, make sure you have the following:
- Basic Knowledge of Python: Familiarity with Python programming will help you follow along with the code.
- Streamlit: Understanding the basics of Streamlit will be beneficial but not necessary.
- MindsDB and Ollama: Basic understanding of MindsDB and Ollama, including how they operate, will help you grasp the concepts more quickly.
Tools and Technologies
This tutorial will make use of the following tools and technologies:
- Python: The primary programming language used to build the application.
- Streamlit: A powerful framework for creating web applications quickly.
- MindsDB: An open-source AI layer for databases that allows for easy integration of machine learning models.
- Ollama: A tool for running large language models (LLMs) locally; the locally hosted model will generate responses based on user input.
- Dotenv: A tool to manage environment variables in Python applications.
Structure of the Tutorial
The tutorial is divided into four stages, each focusing on a critical aspect of the development process:
- Initial Setup and Environment Configuration: Set up your development environment and configure the necessary environment variables.
- Building the Streamlit Interface: Create the main Streamlit interface for your application, including settings and user input handling.
- Implementing the Chat Functionality: Implement the chat functionality to interact with MindsDB and Ollama.
- Finalizing and Deploying the Application: Perform additional testing and prepare for deployment.
Each stage will include detailed, step-by-step instructions, allowing you to run and test the app at the end of each stage. By following along, you will build a robust application capable of real-time interaction with advanced AI models.
Let’s get started on this exciting journey to create a dynamic and interactive web application with Streamlit, MindsDB, and Ollama!
Stage 1: Initial Setup and Environment Configuration
Objective:
In this stage, you’ll set up your development environment and configure the necessary environment variables to interact with MindsDB and Ollama.
Step-by-Step Guide:
- Install Required Packages: Make sure you have Python installed. Then create a virtual environment and install the required packages:

```bash
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
pip install streamlit requests python-dotenv
```
- Create a `.env` File: Create a `.env` file in the root of your project directory and add the following environment variables:

```
OLLAMA_HOST=localhost
OLLAMA_PORT=8000
MINDSDB_HOST=localhost
MINDSDB_PORT=47334
MINDSDB_API_KEY=your_mindsdb_api_key
```
- Create `setup_mindsdb.py`: Create a file named `setup_mindsdb.py` with the following content to configure MindsDB:

```python
import requests
from dotenv import load_dotenv
import os

# Load environment variables from .env file
load_dotenv()

# Fetch environment variables
MINDSDB_API_URL = f"http://{os.getenv('MINDSDB_HOST')}:{os.getenv('MINDSDB_PORT')}/api/sql/query"
MINDSDB_API_KEY = os.getenv("MINDSDB_API_KEY")

def setup_mindsdb():
    # Define the queries to set up the MindsDB engine and model
    engine_creation_query = """
    DROP MODEL IF EXISTS ollama_model;
    DROP ML_ENGINE IF EXISTS ollama_engine;
    CREATE ML_ENGINE ollama_engine FROM ollama;
    CREATE MODEL ollama_model
    PREDICT response
    USING
        engine = 'ollama_engine',
        model_name = 'llama3',
        prompt_template = 'respond to {{text}} by {{username}}',
        ollama_serve_url = 'http://host.docker.internal:11434';
    """
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {MINDSDB_API_KEY}"
    }
    data = {"query": engine_creation_query}
    try:
        response = requests.post(MINDSDB_API_URL, json=data, headers=headers)
        response.raise_for_status()  # Raise an exception for non-200 status codes
        print("MindsDB setup completed successfully.")
    except requests.RequestException as e:
        print(f"An error occurred while setting up MindsDB: {e}")

if __name__ == "__main__":
    setup_mindsdb()
```
- Run MindsDB Setup: Run the setup script to initialize the MindsDB engine and model:

```bash
python setup_mindsdb.py
```
Testing Stage 1:
Ensure that the script runs without errors. If successful, you’ll see “MindsDB setup completed successfully.” in the console.
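As an optional extra check, you can ask MindsDB to list its models and confirm that `ollama_model` was created. The sketch below (helper name is my own, not part of the tutorial code) builds the same kind of request `setup_mindsdb.py` sends, using MindsDB's `SHOW MODELS;` statement against the HTTP SQL endpoint:

```python
import os

def build_show_models_request(host, port, api_key):
    """Assemble the URL, headers, and JSON body for MindsDB's SQL endpoint."""
    url = f"http://{host}:{port}/api/sql/query"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return url, headers, {"query": "SHOW MODELS;"}

url, headers, payload = build_show_models_request(
    os.getenv("MINDSDB_HOST", "localhost"),
    os.getenv("MINDSDB_PORT", "47334"),
    os.getenv("MINDSDB_API_KEY", ""),
)
# To run the check (requires the MindsDB server to be up):
#   import requests
#   resp = requests.post(url, json=payload, headers=headers, timeout=10)
#   resp.raise_for_status()
#   print(resp.json())  # ollama_model should appear in the returned rows
print(url)
```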
Stage 2: Building the Streamlit Interface
Objective:
Create the main Streamlit interface for your application, including settings and user input handling.
Step-by-Step Guide:
- Create `app.py`: Create a file named `app.py` with the following content:

```python
import streamlit as st
import requests
from requests.exceptions import RequestException
from dotenv import load_dotenv
import os
import subprocess

# Load environment variables from .env file
load_dotenv()

# Run MindsDB setup
def run_mindsdb_setup():
    try:
        subprocess.run(["python", "setup_mindsdb.py"], check=True)
    except subprocess.CalledProcessError as e:
        st.error(f"Failed to set up MindsDB: {e}")

# Fetch environment variables
OLLAMA_HOST = os.getenv("OLLAMA_HOST")
OLLAMA_PORT = os.getenv("OLLAMA_PORT")
MINDSDB_HOST = os.getenv("MINDSDB_HOST")
MINDSDB_PORT = os.getenv("MINDSDB_PORT")
MINDSDB_API_KEY = os.getenv("MINDSDB_API_KEY")

# Sidebar settings
st.sidebar.header("Settings")

# API settings
st.sidebar.subheader("API Settings")
use_mindsdb = st.sidebar.checkbox("Use MindsDB", value=True)
ollama_host = st.sidebar.text_input("Ollama Host", value=OLLAMA_HOST)
ollama_port = st.sidebar.text_input("Ollama Port", value=OLLAMA_PORT)

def fetch_available_ollama_models(host, port):
    api_url = f"http://{host}:{port}/api/tags"
    try:
        response = requests.get(api_url)
        if response.status_code == 200:
            models = response.json().get('models', [])
            return [model['name'] for model in models]
        else:
            st.error(f"Failed to fetch Ollama models: {response.text}")
            return []
    except RequestException as e:
        st.error(f"An error occurred while fetching Ollama models: {e}")
        return []

ollama_models = fetch_available_ollama_models(ollama_host, ollama_port)
ollama_model = st.sidebar.selectbox("Ollama Model", ollama_models)

mindsdb_host = st.sidebar.text_input("MindsDB Host", value=MINDSDB_HOST)
mindsdb_port = st.sidebar.text_input("MindsDB Port", value=MINDSDB_PORT)
mindsdb_api_key = st.sidebar.text_input("MindsDB API Key", value=MINDSDB_API_KEY)

# Add username input
username = st.sidebar.text_input("Username", value="User")

# Initialize session state variables if they are not already initialized
if "use_mindsdb" not in st.session_state:
    st.session_state.use_mindsdb = use_mindsdb
if "ollama_host" not in st.session_state:
    st.session_state.ollama_host = ollama_host
if "ollama_port" not in st.session_state:
    st.session_state.ollama_port = ollama_port
if "ollama_model" not in st.session_state:
    st.session_state.ollama_model = ollama_model
if "mindsdb_host" not in st.session_state:
    st.session_state.mindsdb_host = mindsdb_host
if "mindsdb_port" not in st.session_state:
    st.session_state.mindsdb_port = mindsdb_port
if "mindsdb_api_key" not in st.session_state:
    st.session_state.mindsdb_api_key = mindsdb_api_key
if "username" not in st.session_state:
    st.session_state.username = username
if "messages" not in st.session_state:
    st.session_state.messages = []

# Add update button
if st.sidebar.button("Update Settings"):
    st.session_state.use_mindsdb = use_mindsdb
    st.session_state.ollama_host = ollama_host
    st.session_state.ollama_port = ollama_port
    st.session_state.ollama_model = ollama_model
    st.session_state.mindsdb_host = mindsdb_host
    st.session_state.mindsdb_port = mindsdb_port
    st.session_state.mindsdb_api_key = mindsdb_api_key
    st.session_state.username = username
    st.sidebar.success("Settings updated successfully.")
    run_mindsdb_setup()
```
- Run Streamlit App: Start the Streamlit app to test the interface:

```bash
streamlit run app.py
```
Testing Stage 2:
Verify that the Streamlit app runs and the sidebar settings are displayed correctly. Update the settings and ensure they are reflected in the application state.
Stage 3: Implementing the Chat Functionality
Objective:
Implement the chat functionality to interact with MindsDB and Ollama.
Step-by-Step Guide:
- Extend `app.py`: Add the following code to `app.py` to implement the chat functionality:

```python
OLLAMA_API_URL = f"http://{st.session_state.ollama_host}:{st.session_state.ollama_port}/api/v1/models/{st.session_state.ollama_model}/complete"
MINDSDB_API_URL = f"http://{st.session_state.mindsdb_host}:{st.session_state.mindsdb_port}/api/sql/query"
MINDSDB_HEADERS = {"Authorization": f"Bearer {st.session_state.mindsdb_api_key}"}

def send_message_to_mindsdb(message):
    query = f"SELECT response FROM ollama_model WHERE text = '{message}' AND username = '{st.session_state.username}'"
    data = {"query": query}
    try:
        response = requests.post(MINDSDB_API_URL, headers=MINDSDB_HEADERS, json=data)
        response.raise_for_status()  # Raise an exception for non-200 status codes
        result = response.json()
        # Extract the clean response
        if ("data" in result and isinstance(result["data"], list) and len(result["data"]) > 0
                and isinstance(result["data"][0], list) and len(result["data"][0]) > 0):
            llm_response = result["data"][0][0]
        else:
            st.error("Unexpected response format from MindsDB.")
            llm_response = "Failed to get a valid response from MindsDB."
    except RequestException as e:
        st.error(f"Error communicating with MindsDB: {e}")
        llm_response = "Failed to communicate with MindsDB."
    return llm_response, "🤖"

def send_message_to_ollama(message):
    data = {
        "prompt": f"{message}\n\nUsername: {st.session_state.username}"
    }
    try:
        response = requests.post(OLLAMA_API_URL, json=data)
        response.raise_for_status()
        llm_response = response.json().get("choices")[0].get("text").strip()
    except RequestException as e:
        st.error(f"Error communicating with Ollama: {e}")
        llm_response = "Failed to communicate with Ollama."
    return llm_response, "🤖"

def send_message(message):
    if st.session_state.use_mindsdb:
        llm_response, bot_icon = send_message_to_mindsdb(message)
    else:
        llm_response, bot_icon = send_message_to_ollama(message)
    return llm_response, bot_icon

# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Accept user input
if user_input := st.chat_input(f"{st.session_state.username}:"):
    st.session_state.messages.append({"role": st.session_state.username, "content": user_input})
    with st.chat_message(st.session_state.username):
        st.markdown(user_input)
    llm_response, bot_icon = send_message(user_input)
    st.session_state.messages.append({"role": bot_icon, "content": llm_response})
    with st.chat_message(bot_icon):
        st.markdown(llm_response)

st.sidebar.header("Application Info")
st.sidebar.info("This application interacts directly with a locally hosted LLM using either MindsDB or Ollama.")
```
Testing Stage 3:
Run the Streamlit app and test the chat functionality by entering messages. Ensure that responses from either MindsDB or Ollama are displayed correctly.
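One edge case worth including in your tests: `send_message_to_mindsdb()` splices the raw message straight into a SQL string, so any message containing a single quote (e.g. "it's fine") will break the query. A small hardening sketch using standard SQL quote doubling; the helper names here are my own, not part of the tutorial code:

```python
def escape_sql_literal(text: str) -> str:
    """Double single quotes so text is safe inside a single-quoted SQL literal."""
    return text.replace("'", "''")

def build_mindsdb_query(message: str, username: str) -> str:
    # Same SELECT the app issues, but with both user-supplied values escaped
    return (
        "SELECT response FROM ollama_model "
        f"WHERE text = '{escape_sql_literal(message)}' "
        f"AND username = '{escape_sql_literal(username)}'"
    )

print(build_mindsdb_query("it's fine", "User"))
```

In `send_message_to_mindsdb()`, you would build `query` via `build_mindsdb_query(message, st.session_state.username)` instead of the inline f-string.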
Stage 4: Finalizing and Deploying the Application
Objective:
Finalize the application, perform additional testing, and prepare for deployment.
Step-by-Step Guide:
- Add Error Handling and Final Touches: Make sure your application handles errors gracefully and provides useful feedback to the user.
- Test the Entire Application: Run thorough tests to ensure all parts of the application work together seamlessly. Test various scenarios and edge cases.
- Deploy the Application: Deploy your Streamlit application to a hosting service like Streamlit Cloud, Heroku, or any other cloud provider.
For example, to deploy on Streamlit Cloud:
- Create a GitHub repository and push your code.
- Go to Streamlit Cloud and link your GitHub repository.
- Follow the deployment instructions provided by Streamlit Cloud.
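Streamlit Cloud installs Python dependencies from a `requirements.txt` at the repository root, so add one before pushing. A minimal file for this tutorial would be:

```
streamlit
requests
python-dotenv
```

Pin versions (e.g. `streamlit==1.35.0`) if you want reproducible deploys; the unpinned form simply pulls the latest releases.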
Testing Stage 4:
Ensure that your deployed application runs smoothly and can handle user interactions without errors.
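As part of the error-handling pass before deployment, transient network hiccups between the UI and MindsDB or Ollama are often worth retrying before surfacing an error to the user. A minimal, framework-agnostic sketch (the helper is my own, not part of the tutorial code):

```python
import time

def with_retries(fn, attempts=3, delay=0.5):
    """Call fn(); on failure, retry up to `attempts` times with linear backoff."""
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:  # in app.py you would catch RequestException instead
            last_exc = exc
            time.sleep(delay * (i + 1))
    raise last_exc
```

In the app, a call such as `requests.post(MINDSDB_API_URL, json=data, ...)` could be wrapped as `with_retries(lambda: requests.post(MINDSDB_API_URL, json=data, headers=MINDSDB_HEADERS))`.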
Conclusion
Summary
Congratulations on completing this tutorial! By following along, you have successfully built a robust and interactive Streamlit application that leverages the powerful capabilities of MindsDB and Ollama. Let’s recap what you have achieved:
- Initial Setup and Environment Configuration:
  - Installed and configured the necessary packages.
  - Set up environment variables to connect with MindsDB and Ollama.
  - Initialized the MindsDB engine and model.
- Building the Streamlit Interface:
  - Created a user-friendly interface using Streamlit.
  - Developed a sidebar for setting up API configurations and user input handling.
- Implementing the Chat Functionality:
  - Integrated chat functionality to allow real-time communication with MindsDB and Ollama.
  - Implemented error handling to ensure smooth operation and user feedback.
- Finalizing and Deploying the Application:
  - Conducted thorough testing to ensure the application runs seamlessly.
  - Deployed the application to a cloud service for easy access and use.
Key Takeaways
- Streamlit Integration: You learned how to use Streamlit to build an interactive web interface that can dynamically handle user inputs and settings.
- MindsDB and Ollama: You gained insights into integrating machine learning models through MindsDB and serving locally hosted LLMs with Ollama.
- Environment Configuration: You configured environment variables using Dotenv to manage sensitive information securely.
- Error Handling: You implemented error-handling mechanisms to keep the application user-friendly and resilient against common issues.
Next Steps
Now that you have a solid foundation, here are a few ideas to extend your project:
- Enhance the User Interface: Add more features to the UI, such as user authentication, detailed logs, or advanced settings.
- Expand Functionality: Integrate additional AI models or data sources to provide more diverse and powerful interactions.
- Optimize Performance: Investigate ways to optimize the application’s performance, including caching responses and load balancing.
- Contribute to the Community: Share your project with the community, contribute to open-source projects, or collaborate with others to improve and expand its capabilities.
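On the performance point above: repeated identical prompts can be served from a cache instead of re-querying the model. In the Streamlit app itself you would typically decorate the request function with `st.cache_data`; the same idea is sketched below with only the standard library (the stand-in function body is hypothetical):

```python
from functools import lru_cache

call_count = {"n": 0}  # visible counter so the caching effect can be observed

@lru_cache(maxsize=256)
def cached_response(message: str, username: str) -> str:
    # In the app this body would call send_message_to_mindsdb(message);
    # a stand-in echo keeps the sketch self-contained.
    call_count["n"] += 1
    return f"{username}: {message}"

cached_response("hello", "User")
cached_response("hello", "User")  # second call is served from the cache
print(call_count["n"])  # → 1
```

With `st.cache_data` the behavior is analogous, and a `ttl` argument can expire stale answers after a chosen interval.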
Final Thoughts
Building interactive applications that leverage advanced AI models can open up a wide range of possibilities. Whether for personal projects, research, or professional applications, the skills and knowledge you have gained through this tutorial will serve as a valuable asset in your development toolkit.
Thank you for following along, and happy coding!