Building a GPT-3 Bot for Our Browser Extension Community

We’ve built an AI chatbot that can help answer questions about Chrome Extensions and about Plasmo

Stefan Aleksic
December 22, 2022

At Plasmo, we have a thriving community of developers who often ask questions about our platform and general browser extension knowledge. To better serve our community, we built a bot that could provide accurate and helpful responses to their inquiries.

However, we encountered a challenge when generating meaningful answers using static prompts. These prompts were limited in the number of tokens they could include, making it difficult for GPT-3 to fully understand the context and nuances of our framework and Chrome extensions in general.

To address this issue, we implemented a solution that leverages embeddings to automatically include relevant context in the prompts before GPT-3 processes the request. This allowed us to significantly improve the accuracy and usefulness of the responses generated by our bot, ultimately providing a better experience for our community. In this post, we'll share the details of how we implemented this solution and the positive impact it has had on our community.

What is an embedding?

Imagine you have a bunch of shapes that you want to group together based on how similar they are. You might group all your circles together, all your squares together, and all your triangles together.

Now, imagine that you want to teach a computer to do this same thing. One way to do this is to use something called an "embedding." An embedding is a way to represent each shape as a set of numbers that describe different aspects of the shape. For example, one number might describe the color, another the size, and another the number of sides.

To group the shapes, a computer can use a machine learning algorithm to look at the features of each shape and figure out which ones are most similar.
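In practice, "most similar" usually means the highest cosine similarity between embedding vectors. Here's a minimal sketch (the three-dimensional "embeddings" below are made up for illustration; real ada-002 embeddings have 1,536 dimensions):

```typescript
// Cosine similarity: 1 means pointing the same direction, 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Toy vectors standing in for embeddings of three shapes.
const circle = [0.9, 0.1, 0.2]
const ellipse = [0.85, 0.15, 0.25]
const square = [0.1, 0.9, 0.3]

// The circle's vector points in nearly the same direction as the ellipse's,
// so they group together, while the square lands elsewhere.
console.log(cosineSimilarity(circle, ellipse) > cosineSimilarity(circle, square)) // true
```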

Enhancing Prompt Context with Embeddings

Recently, OpenAI released a new model called text-embedding-ada-002, which is 99.8% cheaper and more effective for most embedding tasks than previous models. This development allowed us to use embeddings without breaking the budget.

To add more context and detail to our prompts, we created document embeddings for various resources, including Github Discussions, our Documentation, and the Google Chrome Extension API reference documentation. This gave us a corpus of 2,000 documents to incorporate into user prompts, providing GPT-3 with more comprehensive and relevant information.

Gathering Data

Gathering data for this project presented several challenges, which we'll discuss in more detail below. However, we were ultimately able to use GraphQL queries to retrieve data from Github Discussions and wrote a markdown parser to extract data from open-source documentation written in markdown.

Github Data

We used the following GraphQL query to retrieve data from Github Discussions:

query($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    discussions(first: 100) {
      nodes {
        title
        bodyText
        answer {
          bodyText
        }
        comments(first: 10) {
          nodes {
            bodyText
          }
        }
      }
    }
  }
}

Running this on our Github Repo produced about 50 different discussions that we could include as context in our prompts!
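Each discussion node then has to be flattened into a single text document before it can be embedded. A hypothetical helper for that step (the field names match the query above; the function name is ours):

```typescript
// Shape returned by the discussions query above (only the fields we use).
interface Discussion {
  title: string
  bodyText: string
  answer: { bodyText: string } | null
  comments: { nodes: { bodyText: string }[] }
}

// Flatten one discussion into a single text document for embedding.
function discussionToDocument(d: Discussion): string {
  const parts = [d.title, d.bodyText]
  // Unanswered discussions have a null answer, so guard before reading it.
  if (d.answer) parts.push(`Answer: ${d.answer.bodyText}`)
  for (const c of d.comments.nodes) parts.push(c.bodyText)
  return parts.join("\n")
}
```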


Documentation Data

Since both the Plasmo docs and the Google Chrome Extension API reference docs are open-source and written in markdown, we wrote a simple markdown parser that creates a document for each heading (## or ###) and the text underneath it.

For example, given the following markdown text:

# Some Title

## Some heading
Amazing text.

### Some more context
Additional stuff here

## Cool stuff
This is pretty interesting.

Our parser would turn this into a list of 3 tuples:

[("Some Title", "Some heading", "Amazing text."), ("Some Title", "Some more context", "Additional stuff here"), ("Some Title", "Cool stuff", "This is pretty interesting.")]
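A sketch of such a parser (our production version may differ in the details):

```typescript
// Split markdown into (title, heading, body) tuples, one per ## or ### heading.
function parseMarkdown(md: string): [string, string, string][] {
  const docs: [string, string, string][] = []
  let title = ""
  let heading = ""
  let body: string[] = []

  // Emit the document accumulated so far, if we're inside a heading.
  const flush = () => {
    if (heading) docs.push([title, heading, body.join("\n").trim()])
    body = []
  }

  for (const line of md.split("\n")) {
    if (line.startsWith("# ") && !line.startsWith("## ")) {
      flush()
      title = line.slice(2).trim()
      heading = ""
    } else if (line.startsWith("## ") || line.startsWith("### ")) {
      flush()
      heading = line.replace(/^#+\s*/, "").trim()
    } else {
      body.push(line)
    }
  }
  flush()
  return docs
}
```

Running `parseMarkdown` on the example markdown above yields the three tuples shown.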


Gathering data for this project was one of the biggest challenges we faced. Initially, we tried using a web crawler with Beautiful Soup, but the HTML was noisy and difficult to work with: it was hard to strip out irrelevant markup and to ensure that each document wasn't too large for a prompt. After spending far too long on HTML parsing, we settled on the markdown approach.

Creating Embeddings

After gathering our data, we sent each document to OpenAI's Create Embedding API endpoint to obtain a vector representation. We then sent this data to Pinecone, a vector search database, for later retrieval.
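In outline, that step looks like this (request shapes follow the public OpenAI and Pinecone REST APIs; `PINECONE_URL` is a placeholder for your index's endpoint):

```typescript
// Build the JSON body for OpenAI's Create Embedding endpoint.
function embeddingRequestBody(input: string): string {
  return JSON.stringify({ model: "text-embedding-ada-002", input })
}

// Embed one document, then upsert the vector into Pinecone for later retrieval.
async function storeDocument(id: string, text: string): Promise<void> {
  const embRes = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json"
    },
    body: embeddingRequestBody(text)
  })
  const { data } = await embRes.json()

  // PINECONE_URL stands in for your index's endpoint URL.
  await fetch(`${process.env.PINECONE_URL}/vectors/upsert`, {
    method: "POST",
    headers: {
      "Api-Key": process.env.PINECONE_API_KEY ?? "",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      // Storing the raw text as metadata lets us recover it at query time.
      vectors: [{ id, values: data[0].embedding, metadata: { text } }]
    })
  })
}
```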

Crafting Effective Prompts

We created two distinct prompts to enable the bot to answer questions and perform actions. Following OpenAI's recommendations, we included instructions at the beginning of the prompts and used separators such as === to help GPT-3 comprehend the context and intended action.

For example, asking the bot to “write a minimal React component that asks a user what their favorite color is and stores it in chrome storage using the useStorage hook” will construct the following prompt:

write a minimal React component that asks a user what their favorite color is and stores it in chrome storage using the useStorage hook

===

* Our library provides React hooks for reading and writing to storage. Here's some example usage:

import { useStorage } from "@plasmohq/storage/hook"

function IndexPopup() {
  const [openCount] = useStorage<number>("open-count", (storedCount) =>
    typeof storedCount === "undefined" ? 0 : storedCount + 1
  )

  const [checked, setChecked] = useStorage("checked", true)

  return (
    <div
      style={{
        display: "flex",
        flexDirection: "column",
        padding: 16
      }}>
      <p>Times opened: {openCount}</p>
      <input
        type={"checkbox"}
        checked={checked}
        onChange={(e) => setChecked(e.target.checked)}
      />
    </div>
  )
}

export default IndexPopup

import { useStorage } from "@plasmohq/storage/hook"

function IndexOptions() {
  const [openCount] = useStorage<number>("open-count")
  const [checked] = useStorage("checked")

  return (
    <div
      style={{
        display: "flex",
        flexDirection: "column",
        padding: 16
      }}>
      <p>Times opened: {openCount}</p>
      <input type={"checkbox"} readOnly checked={checked} />
    </div>
  )
}

export default IndexOptions

And here’s the result that the bot responded with:

import { useStorage } from "@plasmohq/storage/hook"

function FavoriteColor() {
  const [favoriteColor, setFavoriteColor] = useStorage("favorite-color")

  const handleChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    setFavoriteColor(e.target.value)
  }

  return (
    <div>
      <label>
        What is your favorite color?
        <input type="text" value={favoriteColor} onChange={handleChange} />
      </label>
    </div>
  )
}

export default FavoriteColor

When asking the bot "what is a manifest.json file?", this is the prompt that gets generated:

Answer the question as truthfully as possible using the provided context and your knowledge of Javascript, and if the answer is not contained within the text below, say "I don't know."

===

* An extension manifest gives the browser information about the extension, such as the most important files and the capabilities the extension might use. The extension platform features change when there's a new manifest version.
* Plasmo abstracts away the manifest file. The framework generates the manifest under the hood based on your source files and configurations you export from your code, similar to how Next.js abstracts page routing and SSG with the file system and page components. We will further abstract with auto-permissions and a needs-based permission scheme, removing the need to specify permissions manually! (Coming soon)
* Manifest V3 represents one of the most significant shifts in the extensions platform since it launched a decade ago. Manifest V3 extensions enjoy enhancements in security, privacy, and performance; they can also use more contemporary open web technologies such as service workers and promises.

===

Q: what is a manifest file?
A:

Moving to Production

Getting a model working in a Colab notebook is the easy part, but a notebook isn't something our community can use.

Since our Colab prototype worked well, we wanted to make it accessible and easy to use for everyone. We wired together a Discord bot, Pinecone, and OpenAI's API:

  1. The Discord bot takes the user input and obtains an embedding from OpenAI.
  2. It then requests Pinecone to provide the seven closest documents related to the input.
  3. The bot constructs a prompt that includes these documents as context.
  4. It then calls OpenAI's completion API using the text-davinci-003 model and the newly constructed prompt.
  5. Finally, the bot replies to the user with the result of the completion.
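The steps above can be sketched end to end like this (request shapes follow the public OpenAI and Pinecone REST APIs; function names, the prompt wording, and `PINECONE_URL` are illustrative):

```typescript
const jsonHeaders = (extra: Record<string, string>) => ({
  "Content-Type": "application/json",
  ...extra
})

// Step 3: construct a prompt with the retrieved documents as context,
// using === separators as recommended by OpenAI.
function buildPrompt(question: string, docs: string[]): string {
  return [
    'Answer the question as truthfully as possible using the provided context, and if the answer is not contained within the text below, say "I don\'t know."',
    "===",
    ...docs.map((d) => `* ${d}`),
    "===",
    `Q: ${question}`,
    "A:"
  ].join("\n")
}

async function answer(question: string): Promise<string> {
  // Step 1: embed the user input with OpenAI.
  const emb = await (
    await fetch("https://api.openai.com/v1/embeddings", {
      method: "POST",
      headers: jsonHeaders({ Authorization: `Bearer ${process.env.OPENAI_API_KEY}` }),
      body: JSON.stringify({ model: "text-embedding-ada-002", input: question })
    })
  ).json()

  // Step 2: ask Pinecone for the seven closest documents.
  const result = await (
    await fetch(`${process.env.PINECONE_URL}/query`, {
      method: "POST",
      headers: jsonHeaders({ "Api-Key": process.env.PINECONE_API_KEY ?? "" }),
      body: JSON.stringify({
        vector: emb.data[0].embedding,
        topK: 7,
        includeMetadata: true
      })
    })
  ).json()
  const docs = result.matches.map((m: any) => m.metadata.text)

  // Step 4: call the completions API with the constructed prompt.
  const completion = await (
    await fetch("https://api.openai.com/v1/completions", {
      method: "POST",
      headers: jsonHeaders({ Authorization: `Bearer ${process.env.OPENAI_API_KEY}` }),
      body: JSON.stringify({
        model: "text-davinci-003",
        prompt: buildPrompt(question, docs),
        max_tokens: 512,
        temperature: 0
      })
    })
  ).json()

  // Step 5: the Discord bot replies with this text.
  return completion.choices[0].text.trim()
}
```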


Since our Discord bot was written in TypeScript, we had to port our Python prototype. Most things worked, but there were a few hiccups.

Pinecone doesn't have very good JavaScript bindings, so we called their API with fetch, which worked great.

To prevent exceeding the token limit imposed by OpenAI, we used the gpt-3-encoder package to count tokens. Unfortunately, it is somewhat buggy and not the exact tokenizer OpenAI uses, but it works most of the time.
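The context-trimming step itself can be written against any token counter. A sketch (in production we'd pass a counter built on `encode` from gpt-3-encoder; the whitespace counter below is a stand-in so the example runs anywhere):

```typescript
// Keep adding context documents until the token budget is exhausted.
// countTokens is injectable: in production, pass (s) => encode(s).length
// from gpt-3-encoder; the default whitespace counter is only a stand-in.
function fitDocsToBudget(
  docs: string[],
  maxTokens: number,
  countTokens: (s: string) => number = (s) => s.split(/\s+/).filter(Boolean).length
): string[] {
  const kept: string[] = []
  let used = 0
  for (const doc of docs) {
    const cost = countTokens(doc)
    // The closest documents come first, so stopping here drops only
    // the least relevant context.
    if (used + cost > maxTokens) break
    kept.push(doc)
    used += cost
  }
  return kept
}
```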


There are a lot of poor responses from the bot that aren't included here, and there's a lot of work to do to make it consistently good, but we wanted to share the good responses to show the potential of a bot like this.

The Future

We built this in 3 days, so there’s a lot of room for improvement, but we’re excited with the preliminary results. Watch this space!

Try it for free by joining our Discord server.


Thanks for reading! We're Plasmo, a company on a mission to improve browser extension development for everyone. If you're a company looking to level up your browser extension, reach out, or sign up for Itero to get started.