Especially in a fast-moving space like crypto, it can be overwhelming to stay on top of your investments 24/7. In this article, we’ll share how to build your own real-time portfolio tracker using Google Sheets, so you can manage and track your crypto investments easily. Creating a custom portfolio tracker allows you to record and calculate your crypto holdings, analyze price and volume changes, and tailor everything to your trading preferences. Investors who trade stocks and other assets can even combine this with an existing stock portfolio tracker.
Regardless of whether you’re a beginner or advanced trader, this detailed guide will walk through:
- How to set your Google Sheet up for auto-refreshes
- How to import live crypto data via CoinGecko API (for both Public & Paid API users)
- How to customize your spreadsheet to calculate crypto holdings, holdings value, and more.
- The benefits of creating a portfolio tracker on Google Sheets
Let’s get started!
Create a Live Crypto Portfolio Tracker on Google Sheets in 4 Steps
First, create a new spreadsheet on Google Sheets and name it accordingly. This will be your workspace where you’ll input and analyze cryptocurrency data.
Step 1: Import Live Data with App Scripts
Navigate to ‘Extensions’ and select ‘Apps Script’, which will open in a new tab.
On the left panel, select ‘< > Editor’ and add a new script file using the ‘+’ button. Copy and paste the ImportJSON script below and save it as ‘ImportJSON’. This versatile script lets you import JSON data into your sheet in many different ways.
/*====================================================================================================================================*
  ImportJSON by Brad Jasper and Trevor Lohrbeer
  Project Page: https://github.com/bradjasper/ImportJSON
  For bug reports see https://github.com/bradjasper/ImportJSON/issues
  ------------------------------------------------------------------------------------------------------------------------------------
  Changelog:
  1.6.0 (June 2, 2019) Fixed null values (thanks @gdesmedt1)
  1.5.0 (January 11, 2019) Adds ability to include all headers in a fixed order even when no data is present for a given header in some or all rows.
  1.4.0 (July 23, 2017) Transfer project to Brad Jasper. Fixed off-by-one array bug. Fixed previous value bug. Added custom annotations. Added ImportJSONFromSheet and ImportJSONBasicAuth.
  1.3.0 Adds ability to import the text from a set of rows containing the text to parse. All cells are concatenated
  1.2.1 Fixed a bug with how nested arrays are handled. The rowIndex counter wasn't incrementing properly when parsing.
  1.2.0 Added ImportJSONViaPost and support for fetchOptions to ImportJSONAdvanced
  1.1.1 Added a version number using Google Scripts Versioning so other developers can use the library
  1.1.0 Added support for the noHeaders option
  1.0.0 Initial release
 *====================================================================================================================================*/
/**
* Imports a JSON feed and returns the results to be inserted into a Google Spreadsheet. The JSON feed is flattened to create
* a two-dimensional array. The first row contains the headers, with each column header indicating the path to that data in
* the JSON feed. The remaining rows contain the data.
*
* By default, data gets transformed so it looks more like a normal data import. Specifically:
*
* – Data from parent JSON elements gets inherited to their child elements, so rows representing child elements contain the values
* of the rows representing their parent elements.
* – Values longer than 256 characters get truncated.
* – Headers have slashes converted to spaces, common prefixes removed and the resulting text converted to title case.
*
* To change this behavior, pass in one of these values in the options parameter:
*
* noInherit: Don’t inherit values from parent elements
* noTruncate: Don’t truncate values
* rawHeaders: Don’t prettify headers
* noHeaders: Don’t include headers, only the data
* allHeaders: Include all headers from the query parameter in the order they are listed
* debugLocation: Prepend each value with the row & column it belongs in
*
* For example:
*
* =ImportJSON(“http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?v=2&alt=json”, “/feed/entry/title,/feed/entry/content”,
* “noInherit,noTruncate,rawHeaders”)
*
* @param {url} the URL to a public JSON feed
* @param {query} a comma-separated list of paths to import. Any path starting with one of these paths gets imported.
* @param {parseOptions} a comma-separated list of options that alter processing of the data
* @customfunction
*
* @return a two-dimensional array containing the data, with the first row containing headers
**/
function ImportJSON(url, query, parseOptions) {
return ImportJSONAdvanced(url, null, query, parseOptions, includeXPath_, defaultTransform_);
}
/**
* Imports a JSON feed via a POST request and returns the results to be inserted into a Google Spreadsheet. The JSON feed is
* flattened to create a two-dimensional array. The first row contains the headers, with each column header indicating the path to
* that data in the JSON feed. The remaining rows contain the data.
*
* To retrieve the JSON, a POST request is sent to the URL and the payload is passed as the content of the request using the content
* type “application/x-www-form-urlencoded”. If the fetchOptions define a value for “method”, “payload” or “contentType”, these
* values will take precedent. For example, advanced users can use this to make this function pass XML as the payload using a GET
* request and a content type of “application/xml; charset=utf-8”. For more information on the available fetch options, see
* https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app . At this time the “headers” option is not supported.
*
* By default, the returned data gets transformed so it looks more like a normal data import. Specifically:
*
* – Data from parent JSON elements gets inherited to their child elements, so rows representing child elements contain the values
* of the rows representing their parent elements.
* – Values longer than 256 characters get truncated.
* – Headers have slashes converted to spaces, common prefixes removed and the resulting text converted to title case.
*
* To change this behavior, pass in one of these values in the options parameter:
*
* noInherit: Don’t inherit values from parent elements
* noTruncate: Don’t truncate values
* rawHeaders: Don’t prettify headers
* noHeaders: Don’t include headers, only the data
* allHeaders: Include all headers from the query parameter in the order they are listed
* debugLocation: Prepend each value with the row & column it belongs in
*
* For example:
*
* =ImportJSON(“http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?v=2&alt=json”, “user=bob&apikey=xxxx”,
* “validateHttpsCertificates=false”, “/feed/entry/title,/feed/entry/content”, “noInherit,noTruncate,rawHeaders”)
*
* @param {url} the URL to a public JSON feed
* @param {payload} the content to pass with the POST request; usually a URL encoded list of parameters separated by ampersands
* @param {fetchOptions} a comma-separated list of options used to retrieve the JSON feed from the URL
* @param {query} a comma-separated list of paths to import. Any path starting with one of these paths gets imported.
* @param {parseOptions} a comma-separated list of options that alter processing of the data
* @customfunction
*
* @return a two-dimensional array containing the data, with the first row containing headers
**/
function ImportJSONViaPost(url, payload, fetchOptions, query, parseOptions) {
var postOptions = parseToObject_(fetchOptions);
if (postOptions["method"] == null) {
postOptions["method"] = "POST";
}
if (postOptions["payload"] == null) {
postOptions["payload"] = payload;
}
if (postOptions["contentType"] == null) {
postOptions["contentType"] = "application/x-www-form-urlencoded";
}
convertToBool_(postOptions, "validateHttpsCertificates");
convertToBool_(postOptions, "useIntranet");
convertToBool_(postOptions, "followRedirects");
convertToBool_(postOptions, "muteHttpExceptions");
return ImportJSONAdvanced(url, postOptions, query, parseOptions, includeXPath_, defaultTransform_);
}
/**
* Imports a JSON text from a named Sheet and returns the results to be inserted into a Google Spreadsheet. The JSON feed is flattened to create
* a two-dimensional array. The first row contains the headers, with each column header indicating the path to that data in
* the JSON feed. The remaining rows contain the data.
*
* By default, data gets transformed so it looks more like a normal data import. Specifically:
*
* – Data from parent JSON elements gets inherited to their child elements, so rows representing child elements contain the values
* of the rows representing their parent elements.
* – Values longer than 256 characters get truncated.
* – Headers have slashes converted to spaces, common prefixes removed and the resulting text converted to title case.
*
* To change this behavior, pass in one of these values in the options parameter:
*
* noInherit: Don’t inherit values from parent elements
* noTruncate: Don’t truncate values
* rawHeaders: Don’t prettify headers
* noHeaders: Don’t include headers, only the data
* allHeaders: Include all headers from the query parameter in the order they are listed
* debugLocation: Prepend each value with the row & column it belongs in
*
* For example:
*
* =ImportJSONFromSheet(“Source”, “/feed/entry/title,/feed/entry/content”,
* “noInherit,noTruncate,rawHeaders”)
*
* @param {sheetName} the name of the sheet containg the text for the JSON
* @param {query} a comma-separated lists of paths to import. Any path starting with one of these paths gets imported.
* @param {options} a comma-separated list of options that alter processing of the data
*
* @return a two-dimensional array containing the data, with the first row containing headers
* @customfunction
**/
function ImportJSONFromSheet(sheetName, query, options) {
var object = getDataFromNamedSheet_(sheetName);
return parseJSONObject_(object, query, options, includeXPath_, defaultTransform_);
}
/**
* An advanced version of ImportJSON designed to be easily extended by a script. This version cannot be called from within a
* spreadsheet.
*
* Imports a JSON feed and returns the results to be inserted into a Google Spreadsheet. The JSON feed is flattened to create
* a two-dimensional array. The first row contains the headers, with each column header indicating the path to that data in
* the JSON feed. The remaining rows contain the data.
*
* The fetchOptions can be used to change how the JSON feed is retrieved. For instance, the “method” and “payload” options can be
* set to pass a POST request with post parameters. For more information on the available parameters, see
* https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app .
*
* Use the include and transformation functions to determine what to include in the import and how to transform the data after it is
* imported.
*
* For example:
*
* ImportJSON(“http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?v=2&alt=json”,
* new Object() { “method” : “post”, “payload” : “user=bob&apikey=xxxx” },
* “/feed/entry”,
* “”,
* function (query, path) { return path.indexOf(query) == 0; },
* function (data, row, column) { data[row][column] = data[row][column].toString().substr(0, 100); } )
*
* In this example, the import function checks to see if the path to the data being imported starts with the query. The transform
* function takes the data and truncates it. For more robust versions of these functions, see the internal code of this library.
*
* @param {url} the URL to a public JSON feed
* @param {fetchOptions} an object whose properties are options used to retrieve the JSON feed from the URL
* @param {query} the query passed to the include function
* @param {parseOptions} a comma-separated list of options that may alter processing of the data
* @param {includeFunc} a function with the signature func(query, path, options) that returns true if the data element at the given path
* should be included or false otherwise.
* @param {transformFunc} a function with the signature func(data, row, column, options) where data is a 2-dimensional array of the data
* and row & column are the current row and column being processed. Any return value is ignored. Note that row 0
* contains the headers for the data, so test for row==0 to process headers only.
*
* @return a two-dimensional array containing the data, with the first row containing headers
* @customfunction
**/
function ImportJSONAdvanced(url, fetchOptions, query, parseOptions, includeFunc, transformFunc) {
var jsondata = UrlFetchApp.fetch(url, fetchOptions);
var object = JSON.parse(jsondata.getContentText());
return parseJSONObject_(object, query, parseOptions, includeFunc, transformFunc);
}
/**
* Helper function to authenticate with basic auth informations using ImportJSONAdvanced
*
* Imports a JSON feed and returns the results to be inserted into a Google Spreadsheet. The JSON feed is flattened to create
* a two-dimensional array. The first row contains the headers, with each column header indicating the path to that data in
* the JSON feed. The remaining rows contain the data.
*
* The fetchOptions can be used to change how the JSON feed is retrieved. For instance, the “method” and “payload” options can be
* set to pass a POST request with post parameters. For more information on the available parameters, see
* https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app .
*
* Use the include and transformation functions to determine what to include in the import and how to transform the data after it is
* imported.
*
* @param {url} the URL to a http basic auth protected JSON feed
* @param {username} the Username for authentication
* @param {password} the Password for authentication
* @param {query} the query passed to the include function (optional)
* @param {parseOptions} a comma-separated list of options that may alter processing of the data (optional)
*
* @return a two-dimensional array containing the data, with the first row containing headers
* @customfunction
**/
function ImportJSONBasicAuth(url, username, password, query, parseOptions) {
var encodedAuthInformation = Utilities.base64Encode(username + ":" + password);
var header = {headers: {Authorization: "Basic " + encodedAuthInformation}};
return ImportJSONAdvanced(url, header, query, parseOptions, includeXPath_, defaultTransform_);
}
/**
* Encodes the given value to use within a URL.
*
* @param {value} the value to be encoded
*
* @return the value encoded using URL percent-encoding
*/
function URLEncode(value) {
return encodeURIComponent(value.toString());
}
/**
* Adds an oAuth service using the given name and the list of properties.
*
* @note This method is an experiment in trying to figure out how to add an oAuth service without having to specify it on each
* ImportJSON call. The idea was to call this method in the first cell of a spreadsheet, and then use ImportJSON in other
* cells. This didn’t work, but leaving this in here for further experimentation later.
*
* The test I did was to add the following into the A1:
*
* =AddOAuthService(“twitter”, “https://api.twitter.com/oauth/access_token”,
* “https://api.twitter.com/oauth/request_token”, “https://api.twitter.com/oauth/authorize”,
* “<my consumer key>”, “<my consumer secret>”, “”, “”)
*
* Information on obtaining a consumer key & secret for Twitter can be found at https://dev.twitter.com/docs/auth/using-oauth
*
* Then I added the following into A2:
*
* =ImportJSONViaPost(“https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=fastfedora&count=2”, “”,
* “oAuthServiceName=twitter,oAuthUseToken=always”, “/”, “”)
*
* I received an error that the “oAuthServiceName” was not a valid value. [twl 18.Apr.13]
*/
function AddOAuthService__(name, accessTokenUrl, requestTokenUrl, authorizationUrl, consumerKey, consumerSecret, method, paramLocation) {
var oAuthConfig = UrlFetchApp.addOAuthService(name);
if (accessTokenUrl != null && accessTokenUrl.length > 0) {
oAuthConfig.setAccessTokenUrl(accessTokenUrl);
}
if (requestTokenUrl != null && requestTokenUrl.length > 0) {
oAuthConfig.setRequestTokenUrl(requestTokenUrl);
}
if (authorizationUrl != null && authorizationUrl.length > 0) {
oAuthConfig.setAuthorizationUrl(authorizationUrl);
}
if (consumerKey != null && consumerKey.length > 0) {
oAuthConfig.setConsumerKey(consumerKey);
}
if (consumerSecret != null && consumerSecret.length > 0) {
oAuthConfig.setConsumerSecret(consumerSecret);
}
if (method != null && method.length > 0) {
oAuthConfig.setMethod(method);
}
if (paramLocation != null && paramLocation.length > 0) {
oAuthConfig.setParamLocation(paramLocation);
}
}
/**
* Parses a JSON object and returns a two-dimensional array containing the data of that object.
*/
function parseJSONObject_(object, query, options, includeFunc, transformFunc) {
var headers = new Array();
var data = new Array();
if (query && !Array.isArray(query) && query.toString().indexOf(",") != -1) {
query = query.toString().split(",");
}
// Prepopulate the headers to lock in their order
if (hasOption_(options, "allHeaders") && Array.isArray(query))
{
for (var i = 0; i < query.length; i++)
{
headers[query[i]] = Object.keys(headers).length;
}
}
if (options) {
options = options.toString().split(",");
}
parseData_(headers, data, "", {rowIndex: 1}, object, query, options, includeFunc);
parseHeaders_(headers, data);
transformData_(data, options, transformFunc);
return hasOption_(options, "noHeaders") ? (data.length > 1 ? data.slice(1) : new Array()) : data;
}
/**
* Parses the data contained within the given value and inserts it into the data two-dimensional array starting at the rowIndex.
* If the data is to be inserted into a new column, a new header is added to the headers array. The value can be an object,
* array or scalar value.
*
* If the value is an object, it’s properties are iterated through and passed back into this function with the name of each
* property extending the path. For instance, if the object contains the property “entry” and the path passed in was “/feed”,
* this function is called with the value of the entry property and the path “/feed/entry”.
*
* If the value is an array containing other arrays or objects, each element in the array is passed into this function with
* the rowIndex incremeneted for each element.
*
* If the value is an array containing only scalar values, those values are joined together and inserted into the data array as
* a single value.
*
* If the value is a scalar, the value is inserted directly into the data array.
*/
function parseData_(headers, data, path, state, value, query, options, includeFunc) {
var dataInserted = false;
if (Array.isArray(value) && isObjectArray_(value)) {
for (var i = 0; i < value.length; i++) {
if (parseData_(headers, data, path, state, value[i], query, options, includeFunc)) {
dataInserted = true;
if (data[state.rowIndex]) {
state.rowIndex++;
}
}
}
} else if (isObject_(value)) {
for (key in value) {
if (parseData_(headers, data, path + "/" + key, state, value[key], query, options, includeFunc)) {
dataInserted = true;
}
}
} else if (!includeFunc || includeFunc(query, path, options)) {
// Handle arrays containing only scalar values
if (Array.isArray(value)) {
value = value.join();
}
// Insert new row if one doesn’t already exist
if (!data[state.rowIndex]) {
data[state.rowIndex] = new Array();
}
// Add a new header if one doesn’t exist
if (!headers[path] && headers[path] != 0) {
headers[path] = Object.keys(headers).length;
}
// Insert the data
data[state.rowIndex][headers[path]] = value;
dataInserted = true;
}
return dataInserted;
}
/**
* Parses the headers array and inserts it into the first row of the data array.
*/
function parseHeaders_(headers, data) {
data[0] = new Array();
for (key in headers) {
data[0][headers[key]] = key;
}
}
/**
* Applies the transform function for each element in the data array, going through each column of each row.
*/
function transformData_(data, options, transformFunc) {
for (var i = 0; i < data.length; i++) {
for (var j = 0; j < data[0].length; j++) {
transformFunc(data, i, j, options);
}
}
}
/**
* Returns true if the given test value is an object; false otherwise.
*/
function isObject_(test) {
return Object.prototype.toString.call(test) === '[object Object]';
}
/**
* Returns true if the given test value is an array containing at least one object; false otherwise.
*/
function isObjectArray_(test) {
for (var i = 0; i < test.length; i++) {
if (isObject_(test[i])) {
return true;
}
}
return false;
}
/**
* Returns true if the given query applies to the given path.
*/
function includeXPath_(query, path, options) {
if (!query) {
return true;
} else if (Array.isArray(query)) {
for (var i = 0; i < query.length; i++) {
if (applyXPathRule_(query[i], path, options)) {
return true;
}
}
} else {
return applyXPathRule_(query, path, options);
}
return false;
};
/**
* Returns true if the rule applies to the given path.
*/
function applyXPathRule_(rule, path, options) {
return path.indexOf(rule) == 0;
}
/**
* By default, this function transforms the value at the given row & column so it looks more like a normal data import. Specifically:
*
* – Data from parent JSON elements gets inherited to their child elements, so rows representing child elements contain the values
* of the rows representing their parent elements.
* – Values longer than 256 characters get truncated.
* – Values in row 0 (headers) have slashes converted to spaces, common prefixes removed and the resulting text converted to title
* case.
*
* To change this behavior, pass in one of these values in the options parameter:
*
* noInherit: Don’t inherit values from parent elements
* noTruncate: Don’t truncate values
* rawHeaders: Don’t prettify headers
* debugLocation: Prepend each value with the row & column it belongs in
*/
function defaultTransform_(data, row, column, options) {
if (data[row][column] == null) {
if (row < 2 || hasOption_(options, "noInherit")) {
data[row][column] = "";
} else {
data[row][column] = data[row-1][column];
}
}
if (!hasOption_(options, "rawHeaders") && row == 0) {
if (column == 0 && data[row].length > 1) {
removeCommonPrefixes_(data, row);
}
data[row][column] = toTitleCase_(data[row][column].toString().replace(/[\/_]/g, " "));
}
if (!hasOption_(options, "noTruncate") && data[row][column]) {
data[row][column] = data[row][column].toString().substr(0, 256);
}
if (hasOption_(options, "debugLocation")) {
data[row][column] = "[" + row + "," + column + "]" + data[row][column];
}
}
/**
* If all the values in the given row share the same prefix, remove that prefix.
*/
function removeCommonPrefixes_(data, row) {
var matchIndex = data[row][0].length;
for (var i = 1; i < data[row].length; i++) {
matchIndex = findEqualityEndpoint_(data[row][i-1], data[row][i], matchIndex);
if (matchIndex == 0) {
return;
}
}
for (var i = 0; i < data[row].length; i++) {
data[row][i] = data[row][i].substring(matchIndex, data[row][i].length);
}
}
/**
* Locates the index where the two strings values stop being equal, stopping automatically at the stopAt index.
*/
function findEqualityEndpoint_(string1, string2, stopAt) {
if (!string1 || !string2) {
return -1;
}
var maxEndpoint = Math.min(stopAt, string1.length, string2.length);
for (var i = 0; i < maxEndpoint; i++) {
if (string1.charAt(i) != string2.charAt(i)) {
return i;
}
}
return maxEndpoint;
}
/**
* Converts the text to title case.
*/
function toTitleCase_(text) {
if (text == null) {
return null;
}
return text.replace(/\w\S*/g, function(word) { return word.charAt(0).toUpperCase() + word.substr(1).toLowerCase(); });
}
/**
* Returns true if the given set of options contains the given option.
*/
function hasOption_(options, option) {
return options && options.indexOf(option) >= 0;
}
/**
* Parses the given string into an object, trimming any leading or trailing spaces from the keys.
*/
function parseToObject_(text) {
var map = new Object();
var entries = (text != null && text.trim().length > 0) ? text.toString().split(",") : new Array();
for (var i = 0; i < entries.length; i++) {
addToMap_(map, entries[i]);
}
return map;
}
/**
* Parses the given entry and adds it to the given map, trimming any leading or trailing spaces from the key.
*/
function addToMap_(map, entry) {
var equalsIndex = entry.indexOf("=");
var key = (equalsIndex != -1) ? entry.substring(0, equalsIndex) : entry;
var value = (key.length + 1 < entry.length) ? entry.substring(key.length + 1) : "";
map[key.trim()] = value;
}
/**
* Returns the given value as a boolean.
*/
function toBool_(value) {
return value == null ? false : (value.toString().toLowerCase() == "true" ? true : false);
}
/**
* Converts the value for the given key in the given map to a bool.
*/
function convertToBool_(map, key) {
if (map[key] != null) {
map[key] = toBool_(map[key]);
}
}
function getDataFromNamedSheet_(sheetName) {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var source = ss.getSheetByName(sheetName);
var jsonRange = source.getRange(1,1,source.getLastRow());
var jsonValues = jsonRange.getValues();
var jsonText = "";
for (var row in jsonValues) {
for (var col in jsonValues[row]) {
jsonText +=jsonValues[row][col];
}
}
Logger.log(jsonText);
return JSON.parse(jsonText);
}
Create a second Apps Script file by clicking the ‘+’ button. Copy the code below, paste it into the script editor, and save it as ‘autoRefresh’; this will allow your sheet to refresh automatically at fixed intervals. Note that the script writes to cell A1 of a sheet named ‘doNotDelete’, so add a tab with that exact name to your spreadsheet – this is also the cell your IMPORTJSON formulas will reference later to trigger recalculation.
// Updates cell A1 in the 'doNotDelete' sheet with a random number to force dependent formulas to recalculate
function triggerAutoRefresh() {
SpreadsheetApp.getActive().getSheetByName('doNotDelete').getRange(1, 1).setValue(getRandomInt(1, 200));
}
// Basic Math.random() function
function getRandomInt(min, max) {
min = Math.ceil(min);
max = Math.floor(max);
return Math.floor(Math.random() * (max - min + 1)) + min;
}
Your Apps Script editor will now look like this:
Step 2: Automate Data Refreshes
Now that the scripts have been created, select the clock icon on the left to navigate to ‘Triggers’.
Clicking on ‘+ Add Trigger’ will cause this pop-up to appear. Select the respective dropdowns accordingly:
- Choose which function to run: triggerAutoRefresh
- Choose which deployment should run: Head
- Select event source: Time-driven
- Select type of time based trigger: Minutes timer
- Select minute interval: Every 5 or 10 minutes (note: anything less than this may not be useful, as results are cached)
Depending on your preferred frequency, you can also choose the Hour, Day, or Week timer, or a 15- or 30-minute interval.
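If you prefer to create this trigger from code rather than through the UI, here is a minimal sketch you could paste into the autoRefresh script and run once from the editor (the function name installTrigger and the 10-minute interval are illustrative choices, not part of the original setup):
// Run once from the Apps Script editor to install a time-driven trigger
// that calls triggerAutoRefresh every 10 minutes.
function installTrigger() {
ScriptApp.newTrigger('triggerAutoRefresh').timeBased().everyMinutes(10).create();
}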
Step 3: Import Top 500 Crypto Data with CoinGecko API
CoinGecko tracks over 10,000 cryptocurrencies across 700 exchanges, and is the go-to source for millions of investors globally. Since coin rankings are based on market capitalization, pulling in data for the top 500 cryptocurrencies will typically be more than sufficient for most investors. Adjust the parameters accordingly if you trade smaller market cap coins!
Head over to our Crypto API documentation, and find the endpoint /coins/markets.
Select ‘Try it out’ and fill in the parameters according to respective prompts in the API playground.
Since each ‘page’ returns up to 250 coins, we’ll import two pages of data via two API calls to cover the top 500 cryptocurrencies. Leave the ‘ids’ parameter blank, and add the following inputs:
- per_page: 250
- page: 1
To pull data for smaller-cap coins, change the page number accordingly. With per_page set to 250, page p covers the coins ranked from (p - 1) × 250 + 1 to p × 250, so setting ‘page’ to 11 and then 12 imports the coins ranked #2501 to #3000. This also consumes two API calls.
Obtaining Data for Specific Cryptocurrencies
If you only want to retrieve data for a specific list of coins, fill in the ‘ids’ parameter with the respective coins’ API IDs – this Token API list, created by the CoinGecko team, is particularly helpful. Alternatively, you can search for the specific coin on CoinGecko and copy the API ID from its coin page. For example, XRP’s API ID is ‘ripple’.
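For example, once the ImportJSON script from Step 1 is in place, a formula pulling only Bitcoin, Ethereum and XRP could look like the sketch below (the coin IDs bitcoin, ethereum and ripple are filled in purely as an illustration, and the query paths mirror the ones used later in this guide):
=IMPORTJSON("https://api.coingecko.com/api/v3/coins/markets?vs_currency=usd&ids=bitcoin%2Cethereum%2Cripple&order=market_cap_desc&sparkline=false&precision=3","/name,/current_price,/market_cap,/total_volume,/high_24h,/low_24h","noTruncate",doNotDelete!$A$1)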
Fill in the parameters and finally hit the ‘Execute’ button. The server response and Request URL will be generated accordingly.
In our example, the Request URL is:
https://api.coingecko.com/api/v3/coins/markets?vs_currency=usd&order=market_cap_desc&per_page=250&page=1&sparkline=true&price_change_percentage=1h%2C24h%2C7d&locale=en&precision=3
Copy this and head back to your worksheet.
Label a new worksheet ‘Top 500 Coins’, as this will serve as your raw database and not your actual crypto portfolio dashboard.
In cell A1, enter the following formula, replacing ‘Request URL’ with the URL you copied. The final argument (doNotDelete!$A$1) references the cell that the autoRefresh trigger updates, which forces the formula to recalculate each time the trigger runs.
=IMPORTJSON("Request URL","/name,/current_price,/market_cap,/price_change,/total_volume,/high_24h,/low_24h","noTruncate",doNotDelete!$A$1)
The formula will look like this:
=IMPORTJSON("https://api.coingecko.com/api/v3/coins/markets?vs_currency=usd&order=market_cap_desc&per_page=250&page=1&sparkline=true&price_change_percentage=1h%2C24h%2C7d&locale=en&precision=3","/name,/current_price,/market_cap,/price_change,/total_volume,/high_24h,/low_24h","noTruncate",doNotDelete!$A$1)
Why Am I Getting Rate Limited?
Due to Google Sheets rate limits, you may run into an ‘#ERROR!’ or only be able to import a small range of data. Google Sheets relies on shared hosting, which means a single Google server hosts many spreadsheets, and every user making requests from that server shares the same per-minute API call limit. This explains why you may get rate limited even when you make only a few API calls.
You can avoid getting rate limited by subscribing to the CoinGecko API Analyst plan. If you’re an existing subscriber and have an API key, use the Pro API root URL (https://pro-api.coingecko.com/api/v3/) and append your API key as the x_cg_pro_api_key query parameter at the end. This is how the URL structure will appear:
=IMPORTJSON("https://pro-api.coingecko.com/api/v3/coins/markets?vs_currency=usd&order=market_cap_desc&per_page=250&page=1&sparkline=true&price_change_percentage=1h%2C24h%2C7d&locale=en&precision=3&x_cg_pro_api_key=YOUR_API_KEY","/name,/current_price,/market_cap,/price_change,/total_volume,/high_24h,/low_24h","noTruncate",doNotDelete!$A$1)
Once the formula loads, the top 250 cryptocurrencies and their respective price and market cap data will stream into your spreadsheet, filling it down to row 251 (row 1 holds the headers). The query paths map to the following fields:
- /name – coin name
- /current_price – coin price
- /market_cap – all market cap details
- /price_change – price change data (this prefix also pulls in the 1h/24h/7d percentage changes requested in the URL)
- /total_volume – 24hr trading volume
- /high_24h and /low_24h – 24hr high and low prices
To pull the next 250 cryptocurrencies, apply the same formula in cell A252 with a few tweaks, or simply copy and paste the formulas below!
- Change the page number to ‘2’, since we’re now moving on to Page 2 for cryptocurrencies ranked 251-500
- Add ‘,noHeaders’ after ‘noTruncate’ – this prevents the headers (seen in row 1) from being duplicated on row 252.
Public API users:
=IMPORTJSON("https://api.coingecko.com/api/v3/coins/markets?vs_currency=usd&order=market_cap_desc&per_page=250&page=2&sparkline=true&price_change_percentage=1h%2C24h%2C7d&locale=en&precision=3","/name,/current_price,/market_cap,/price_change,/total_volume,/high_24h,/low_24h","noTruncate,noHeaders",doNotDelete!$A$1)
Similarly, due to Google Sheets and Public API rate limits, you may only be able to import a limited range of data.
Paid API users:
=IMPORTJSON("https://pro-api.coingecko.com/api/v3/coins/markets?vs_currency=usd&order=market_cap_desc&per_page=250&page=2&sparkline=true&price_change_percentage=1h%2C24h%2C7d&locale=en&precision=3&x_cg_pro_api_key=YOUR_API_KEY","/name,/current_price,/market_cap,/price_change,/total_volume,/high_24h,/low_24h","noTruncate,noHeaders",doNotDelete!$A$1)
This robust crypto API integration on Google Sheets lets you easily fetch real-time prices for the top 500 cryptocurrencies on CoinGecko, with the data refreshing automatically at the interval you set in Step 2.
We’ll move on to the final step where you can customize your crypto portfolio tracker and dashboard.
Step 4: Configure Your Portfolio Tracker
Now that you have an auto-updating database of the top 500 cryptocurrencies, you can customize your portfolio tracker based on your trading preferences.
Using VLOOKUP, search for the price, market cap, trading volume and % change, based on the Coin Name. In this example, we’ve done a VLOOKUP search of ‘Bitcoin’ in cell B19, cross referencing its price in the Top 500 Coins worksheet.
Since coin price data is indexed on column 2 of our Top 500 Coins database, we enter ‘2’ in the VLOOKUP formula.
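As a concrete sketch, assuming the coin name sits in cell A19 of your dashboard and the imported database occupies the leftmost columns of the ‘Top 500 Coins’ sheet (the $A:$Z range below is deliberately wide so it covers however many columns your import produced), the formula in B19 could be:
=VLOOKUP($A19,'Top 500 Coins'!$A:$Z,2,FALSE)
Swap the column index 2 for the position of the market cap, volume or % change column in your own import to fill out the other rows.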
This method is applied to the rest of the table, returning the corresponding values for each coin.
Finally, create a Portfolio section at the end to track your holdings, calculate holding value and profit and loss (P&L) based on real-time cryptocurrency prices.
Create the following row headers (a set of sample formulas follows the list):
- Current Holdings – How much of each coin you currently hold.
- Current Holding Value (USD) – How much your crypto holdings are worth in fiat currency, derived by multiplying Current Holdings by Current Price.
- Total Invested (USD) – Cost of purchase in fiat currency, for each entry.
- Unrealized P&L (USD) – The profit or loss that could be realized, if the position were closed at that time.
- Realized P&L (USD) – The actual profit or loss that has been realized, based on closing positions.
- ROI % – Return on investment, which evaluates how efficient or profitable your investment is. The higher your ROI, the more profitable your investment is.
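As a rough sketch of how these rows can be wired up for the Bitcoin column, assume Current Price sits in B19 (from the VLOOKUP above) and the portfolio rows occupy B25 to B30 in this order: Current Holdings, Current Holding Value, Total Invested, Unrealized P&L, Realized P&L, ROI %. These cell positions and the simple ROI convention are illustrative only, so adjust them to your own layout:
=B25*B19 in B26, for Current Holding Value (Current Holdings multiplied by Current Price)
=B26-B27 in B28, for Unrealized P&L (Current Holding Value minus Total Invested)
=(B28+B29)/B27 in B30, for ROI % (total P&L divided by Total Invested, with the cell formatted as a percentage)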
Finally, you may want to add data visualizations to your crypto portfolio tracker. Adding a chart and a summary can help to organize and present your crypto investments in an easily digestible way, especially if you have a wide range of crypto assets in your basket.
As you continue to invest in crypto, you’ll need to update your portfolio tracker with new purchase records and remove outdated ones. While calculating trading profits and losses hasn’t been fully covered in this article, we’ll be creating a step-by-step guide soon on automating P&L calculations, covering unrealized and realized P&L, ROI, and more.
Here’s the final Crypto Portfolio Tracker on Google Sheets:
What are the Benefits of Tracking Your Crypto Portfolio on Google Sheets?
Tracking your crypto portfolio on Google Sheets allows you to easily analyze data with charts, pivot tables and formulas across any device, at any time. Auto-refreshing crypto price data also ensures you have an accurate view of all your investments at all times. Investors who diversify across crypto, stocks and other assets, and who already work in Google Sheets, will find it extremely convenient to consolidate and customize all their holdings in a single master dashboard.
For Advanced Traders: Useful CoinGecko API Endpoints
Here are some API endpoints that advanced traders may find particularly useful (an example call follows the list):
- /coins/top_gainers_losers – get the top 30 coins with the largest price gains and losses based on specific time frames
- /global/market_cap_chart – get historical global market cap and volume data, by no. of days away from now
- /nfts/markets – track NFT floor prices, market cap and volume
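To give a feel for how these slot into the same setup, here is a hedged example for the gainers/losers endpoint. The vs_currency and duration parameter values shown are assumptions to be checked against the API reference, a paid API key is required, and the ‘/’ query simply imports every field in the response so you can inspect it before narrowing the paths down:
=IMPORTJSON("https://pro-api.coingecko.com/api/v3/coins/top_gainers_losers?vs_currency=usd&duration=24h&x_cg_pro_api_key=YOUR_API_KEY","/","noTruncate",doNotDelete!$A$1)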
If you’re an advanced trader who wants access to more comprehensive data and historical prices, and to bypass rate limits, consider subscribing to an Analyst API plan. Alternatively, if you require a custom solution, fill in the form below to get in touch with our API sales team: