
How to Build A Crypto Chat Bot | CoinGecko API

Have you ever thought of building your own chatbot, but felt overwhelmed by the technical details? Fortunately, with drag-and-drop editors, advanced coding skills are no longer required to create an artificial intelligence (AI) chatbot based on a large language model (LLM).

In this tutorial, we’ll cover how to easily develop and deploy your own crypto AI chatbot with Flowise, an open-source UI tool to build custom LLM flows without needing to code, and CoinGecko API for crypto price data.

Let’s dive in!

Disclaimer: This guide is written for users operating on Windows.

How to Create a Crypto AI Chatbot

You can easily build a chatbot using an API (without knowing how to code) by following these steps:

  1. Set up Flowise on your computer.

  2. Create and connect nodes in your Flowise project.

  3. Add your OpenAI API key.

  4. Add your CoinGecko API key.

  5. Test your chatbot by asking questions.

Step #1: Set up Flowise

First, ensure you have Node.js installed on your computer. Visit nodejs.org and download the LTS version.

Once installed, open the command prompt and input the following command:

npm install -g flowise

Execute this command to install Flowise on your local machine.

Once the installation completes successfully, run the following command:

npx flowise start

If everything is set up right, you’ll see a message that says ‘Flowise server is listening at port 3000’.

To access Flowise, navigate to http://localhost:3000 and you’ll arrive at the Flowise dashboard.

Step #2: Create and Connect Nodes in Your Flowise Project

To start a new chatflow, click on ‘Add New’. This action will present you with a blank canvas where you can design your AI application.

On the left side of the screen, there’s a button for adding nodes. Once clicked, it reveals a list of available components, such as agents, chains, chat models, document loaders, and more, which we can utilize.

Let’s start by dragging and dropping the nodes. We’ll be using chains, so navigate to nodes and select ‘chains’.

There, you’ll find a chain named ‘conversational retrieval QA chain’, which is a document QA chain. This fits our needs perfectly, so let’s place it onto the canvas.

Drag the OpenAI chat model (ChatOpenAI) node onto the canvas. Once in place, connect this LLM to the chain. Next, input your OpenAI API key. We can keep the default model setting of GPT-3.5 Turbo. Now, let’s proceed to set up the next steps.
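For context, the ChatOpenAI node is essentially a wrapper around OpenAI’s chat completions API. A minimal Python sketch of the kind of request it makes on our behalf (assuming the openai package is installed and your key is in the OPENAI_API_KEY environment variable) looks roughly like this:

# Rough illustration of the request the ChatOpenAI node sends for us.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the price of Bitcoin today?"}],
)
print(response.choices[0].message.content)

Flowise handles all of this for you; the sketch is only to show what the node does behind the scenes.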

Click on ‘nodes’ and navigate to ‘Vector stores’. Within this section, there are several options available. For this demonstration, we’ll choose the ‘in-memory Vector store’.

However, for real-world applications, consider alternatives like Pinecone or Supabase. Drag the In-Memory Vector Store onto the canvas, and connect it to our chain. Now, our chain is equipped with both an LLM and a vector store.
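If you’re wondering what a vector store actually does: conceptually, it keeps (vector, text) pairs and returns the texts whose vectors sit closest to the query vector. Here’s a toy Python sketch of that idea (not Flowise’s real implementation, just an illustration):

# Toy in-memory vector store: keeps (vector, text) pairs and returns
# the texts most similar to a query vector by cosine similarity.
import math

store = []  # list of (vector, text) tuples

def add(vector, text):
    store.append((vector, text))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query_vector, k=3):
    ranked = sorted(store, key=lambda pair: cosine(pair[0], query_vector), reverse=True)
    return [text for _, text in ranked[:k]]

Hosted options like Pinecone or Supabase do the same job, but persist the vectors and scale beyond what fits in memory.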

Next, we’ll introduce a document loader to our project. Under ‘document loaders’, there’s a range of options, including the Cheerio web scraper, upload CSV and DOCX files, and even a loader for a folder containing various files.

For simplicity, we’ll add the ‘API loader’ to our project. This node enables us to fetch data from the CoinGecko API, converting the data into documents. Now, let’s connect this node to our vector store.

What we aim to do is take the loaded data, divide its contents into segments, and then generate documents from these segments.

To do this, we can connect a text splitter to this node. In the nodes section, navigate to “Text Splitters”.

Within that category, choose the “Recursive Character Text Splitter” and incorporate it into our canvas.

You can link the text splitter to the “Text Splitter” parameter on our loader node.

While the default chunk size is set at 1000 characters, we can reduce this to 200 characters. However, the ideal chunk size is ultimately your decision.

Bear in mind that the goal is to extract these chunks to enrich our conversation context. 

Smaller chunks are advantageous because they utilize fewer tokens, which can result in cost savings.

We can also set a chunk overlap, say 20 characters, ensuring each chunk includes portions from its preceding and succeeding chunks.

 Now, with this setup, you can upload files directly from your device.

The system will then segment the file based on the specified parameters, and each segment will be transformed into a LangChain document stored in our vector database.
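Flowise is built on LangChain, so the Recursive Character Text Splitter node corresponds to LangChain’s splitter of the same name. As a rough Python sketch of the chunking step with our settings (chunk size 200, overlap 20), assuming the langchain package is installed and using a placeholder file name:

# Split raw text into overlapping chunks, the way the splitter node does.
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=20)
raw_text = open("coingecko_markets.json").read()   # placeholder: any saved text or API response
documents = splitter.create_documents([raw_text])
print(len(documents), "chunks ready for the vector store")

Again, Flowise runs this for you; the sketch just shows how the chunk size and overlap values are applied.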

This leads us to the final element of our chain: embeddings. For the AI to comprehend the content stored in the database, it must translate the text into vector arrays. To achieve this translation, we invoke the embeddings function.

Setting this up is straightforward. In the nodes section, navigate to ‘embeddings’. From the available options, choose the embedding function corresponding to our LLM. Since we’re utilizing OpenAI as the LLM, we’ll opt for the ‘OpenAI Embeddings’ function.

Drag and drop this onto your workspace. Link it to the Vector store. Remember, to activate the OpenAI embeddings API, you’ll need to provide the OpenAI API key. Once done, you can proceed to save your setup.
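To make the embeddings step concrete: the OpenAI Embeddings node calls OpenAI’s embeddings endpoint to turn each chunk into a vector array. A minimal Python sketch of that call (again assuming the openai package and an OPENAI_API_KEY environment variable):

# Turn a text chunk into a vector array via OpenAI's embeddings endpoint.
from openai import OpenAI

client = OpenAI()
result = client.embeddings.create(
    model="text-embedding-ada-002",
    input="Bitcoin market data chunk goes here",
)
vector = result.data[0].embedding
print(len(vector), "dimensions")

Each chunk’s vector is what gets stored in the vector store and compared against your question at chat time.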

Step #3: Add the OpenAI API Keys

Then add your OpenAI API key to these two nodes: the ChatOpenAI node and the OpenAI Embeddings node.

If you don’t have one yet, follow these instructions to get your API key from OpenAI.

Step #4: Add the CoinGecko API Keys

If you haven’t signed up for your CoinGecko API key yet, you can do so here. Once you have an API key, head over to CoinGecko API to identify the desired endpoint. For this tutorial, we’ll focus on two APIs: /coins/markets, which provides a comprehensive list of coin prices, market caps, volumes, etc., and /search/trending, which showcases the top trending searches.

Navigate to /coins/markets on the CoinGecko API. Adjacent to ‘parameters,’ click on ‘try it out.’ The mandatory parameter here is vs_currency. Input your desired currency — in this instance, I’ll use USD.

While this is the only required parameter, you have the flexibility to adjust others. For instance, by selecting ‘category,’ you can filter coins by their categories. After configuring your desired parameters, hit ‘Execute’. This will generate the request URL, which you can then integrate into the chatbot’s API loader.

 

Once you have the URL ready, you will need to add your CoinGecko API key at the end. Just append `&x_cg_demo_api_key=YOUR_API_KEY` and replace YOUR_API_KEY with your actual CoinGecko API key. For example, to fetch Bitcoin’s market data using your Demo API key, use:

https://api.coingecko.com/api/v3/coins/markets?vs_currency=usd&ids=bitcoin&x_cg_demo_api_key=YOUR_API_KEY
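Before pasting the URL into Flowise, it’s worth a quick sanity check that it returns data. A short Python sketch using the requests library (YOUR_API_KEY is a placeholder):

# Quick check that the CoinGecko endpoint responds before adding it to Flowise.
import requests

url = "https://api.coingecko.com/api/v3/coins/markets"
params = {
    "vs_currency": "usd",
    "ids": "bitcoin",
    "x_cg_demo_api_key": "YOUR_API_KEY",   # replace with your Demo API key
}
data = requests.get(url, params=params, timeout=10).json()
print(data[0]["current_price"])            # current Bitcoin price in USD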

Then open the API Loader node and insert the URL you want the chatbot to talk to.

Now that your URL has been processed, you can start chatting with the results returned from it.

Congratulations! You have designed your first chatbot.

Step #5: Test Your AI Crypto Chatbot

Here’s an example: let’s ask a few questions to see whether the bot is indeed pulling live and accurate information from the CoinGecko API.

First query: What is the price of Bitcoin today?

Second query: What is the 7-day price change for Ethereum?

Third query: What coin is trending today?

 

Due to limitations in this tutorial’s architecture, we treat each endpoint as a single “document” for the chatbot to reference. To give the bot access to today’s trending information, you will need to update the URL in the API loader to the /search/trending endpoint.

https://api.coingecko.com/api/v3/search/trending?x_cg_demo_api_key=YOUR_API_KEY
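You can sanity-check this endpoint the same way; a short Python sketch (the field names follow CoinGecko’s documented response shape, so double-check them against the docs):

# Fetch today's trending searches from CoinGecko.
import requests

url = "https://api.coingecko.com/api/v3/search/trending"
params = {"x_cg_demo_api_key": "YOUR_API_KEY"}   # replace with your Demo API key
trending = requests.get(url, params=params, timeout=10).json()
for coin in trending["coins"]:
    print(coin["item"]["name"])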

 

Conclusion

Building a chatbot is no longer a complicated and challenging process. By understanding the basics of how chatbots work and using the right tools and services, you can create an AI crypto chatbot with the CoinGecko API and the Flowise chatbot builder.

Author: Gao Dalie (高達烈)