How to enable web search with GPT API

In today's digital age, the ability to extract and process information from the web is invaluable. This Answer will guide you through the process of enabling web search using the GPT API, leveraging the capabilities of both OpenAI and the llmsearch tool.

Introduction to OpenAI GPT and llmsearch

OpenAI GPT is a state-of-the-art language model that can generate human-like text based on prompts. It's versatile and can be used for a variety of tasks, from chatbots to content generation.

On the other hand, llmsearch is a tool designed to enhance the web search experience. It uses an LLM (large language model) to rewrite queries for better search results, filters URLs against a local whitelist/blacklist, and preprocesses the pages it retrieves to extract relevant text.

Setting up your environment

Before diving into the code, ensure you have the necessary API keys:

  • An OpenAI API key, used to call the GPT models.

  • Google Custom Search credentials (typically an API key and a search engine ID), used by llmsearch to query the web.

These keys are essential for authentication and accessing the respective services.

To set up the llmsearch environment:

  1. Clone the GitHub Repository:

git clone https://github.com/bdambrosio/llmsearch.git

  2. Install the Required Dependencies:
    Navigate to the cloned directory and run:

pip3 install -r requirements.txt

  3. Add Your API Keys (a minimal sketch of both options follows this list):

  • For OpenAI, you can set the API key either as an environment variable or directly in the code using openai.api_key = 'YOUR_OPENAI_API_KEY'.

  • For Google Customized Search, add your credentials either as environment variables or directly in the google_search_concurrent.py file.
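
As a quick illustration, here is a minimal Python sketch of both options for the OpenAI key. The environment variable name OPENAI_API_KEY is an assumption (the complete example later in this Answer reads the key from SECRET_KEY), and the exact names llmsearch expects for the Google credentials are defined in the repository, so consult its README for those.

import os
import openai

# Option 1: read the key from an environment variable.
# OPENAI_API_KEY is an assumed variable name; use whichever name you export.
openai.api_key = os.environ.get("OPENAI_API_KEY")

# Option 2: hard-code the key (handy for quick tests, but avoid committing it).
# openai.api_key = 'YOUR_OPENAI_API_KEY'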

Generating web search queries using OpenAI GPT

To extract specific information from the web, you can use OpenAI GPT to generate a query based on the user's input.

extraction_prompt = f"I have a plugin that can extract information from the web. Based on the user's query '{user_prompt}', what should I search for on the web?"
extraction_query = openai.Completion.create(engine="davinci", prompt=extraction_prompt, max_tokens=100).choices[0].text.strip()
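
Note that openai.Completion.create belongs to the legacy (pre-1.0) openai Python package, which is what this Answer's examples use. If you are on openai 1.0 or later, a roughly equivalent call looks like the sketch below; the model name gpt-3.5-turbo is an assumption, not something the original code specifies.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment by default

# Roughly equivalent to the legacy Completion call above.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[{"role": "user", "content": extraction_prompt}],
    max_tokens=100,
)
extraction_query = response.choices[0].message.content.strip()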

Searching the web using llmsearch

Once you have the query, you can use the from_gpt function of llmsearch to search the web.

search_results = search_service.from_gpt(extraction_query, search_level="full")

The search_level parameter can be set to "quick" for faster results or "full" for a more comprehensive search.
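
For example, you could let the caller pick the mode with a small helper. This is only a sketch; the one llmsearch call it assumes is the from_gpt function with the search_level argument shown above.

def search_web(query, quick=False):
    # "quick" favors speed, "full" favors coverage.
    level = "quick" if quick else "full"
    return search_service.from_gpt(query, search_level=level)

# Usage:
# results = search_web(extraction_query)              # comprehensive search
# results = search_web(extraction_query, quick=True)  # faster, lighter search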

Extracting relevant content

From the search results, you can extract the most relevant content. For simplicity, we're taking the first result.

relevant_content = search_results[0]['response']
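
If a single hit does not give enough context, you can also combine the top few results. The sketch below assumes only what the line above shows: each result is a dictionary whose 'response' key holds the extracted text.

# Combine the first few results instead of relying on a single one.
top_snippets = [result['response'] for result in search_results[:3]]
relevant_content = "\n\n".join(top_snippets)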

Generating a response using OpenAI GPT

With the extracted content, you can now generate a coherent response using OpenAI GPT.

answer_prompt = f"The user asked: '{user_prompt}'. The extracted content from the web is: '{relevant_content}'. How should I respond?"
answer = openai.Completion.create(engine="davinci", prompt=answer_prompt, max_tokens=200).choices[0].text.strip()
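
Keep in mind that relevant_content can be long, and the combined prompt has to fit within the model's context window. A simple safeguard is to truncate the content before building the prompt; the 4,000-character cap below is an arbitrary assumption, not a limit imposed by the API.

# Trim the extracted content so the prompt stays within the context window.
MAX_CONTENT_CHARS = 4000  # arbitrary cap; tune it for the model you use
trimmed_content = relevant_content[:MAX_CONTENT_CHARS]
answer_prompt = f"The user asked: '{user_prompt}'. The extracted content from the web is: '{trimmed_content}'. How should I respond?"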

Bringing it all together

Finally, you can combine all the steps in a main function to create a seamless interaction where the user provides a query, and the system returns a relevant answer.

import openai
import search_service
import os
# Initialize OpenAI API
openai.api_key = os.environ['SECRET_KEY']

def main():
    # Get user input
    user_prompt = input("Enter your query about a topic or website: ")

    # Use OpenAI GPT to generate a query for llmsearch
    extraction_prompt = f"I have a plugin that can extract information from the web. Based on the user's query '{user_prompt}', what should I search for on the web?"
    extraction_query = openai.Completion.create(engine="davinci", prompt=extraction_prompt, max_tokens=100).choices[0].text.strip()

    # Search the web using llmsearch's from_gpt function
    search_results = search_service.from_gpt(extraction_query, search_level="full")  # You can choose between "quick" and "full" for search_level

    # Extract the most relevant result (for simplicity, we're taking the first result)
    relevant_content = search_results[0]['response']

    # Use GPT to generate an answer based on the extracted content and user's initial prompt
    answer_prompt = f"The user asked: '{user_prompt}'. The extracted content from the web is: '{relevant_content}'. How should I respond?"
    answer = openai.Completion.create(engine="davinci", prompt=answer_prompt, max_tokens=200).choices[0].text.strip()

    # Print the answer
    print(f"Answer: {answer}")

if __name__ == "__main__":
    main()

Code explanation

  • Lines 1–3: Import the openai library, the search_service module, and os.

  • Line 5: Set the OpenAI API key.

  • Line 7: Define the main function.

  • Line 9: Get a query from the user about a topic or website.

  • Lines 12–13: Construct a prompt for GPT to generate a query for llmsearch.

  • Line 16: Use llmsearch to search the web based on the GPT-generated query.

  • Line 19: Extract the most relevant content from the search results.

  • Line 22: Construct another prompt for GPT to generate an answer based on the extracted content.

  • Line 23: Retrieve the GPT-generated answer.

  • Line 26: Print the answer to the console.

  • Lines 28–29: Check if the script is the main module and, if so, execute the main function.

Conclusion

By integrating OpenAI GPT with llmsearch, you can create powerful applications that extract and process information from the web based on user prompts. This approach not only enhances the user experience but also opens up new avenues for web search and interaction.
