This class is the workhorse of CARTOframes: it provides all functionality for reading and writing data to CARTO, creating maps, and interacting with CARTO’s Data Observatory.


class cartoframes.context.CartoContext(base_url=None, api_key=None, creds=None, session=None, verbose=0)

CartoContext class for authentication with CARTO and high-level operations such as reading tables from CARTO into dataframes, writing dataframes to CARTO tables, creating custom maps from dataframes and CARTO tables, and augmenting data using CARTO’s Data Observatory. Future methods will interact with CARTO’s services like routing, geocoding, and isolines, PostGIS backend for spatial processing, and much more.

Manages connections with CARTO for data and map operations. Modeled after SparkContext.

There are two ways of authenticating against a CARTO account:

  1. Setting the base_url and api_key directly in CartoContext. This method is easier:

    cc = CartoContext(
        base_url='https://eschbacher.carto.com',
        api_key='abcdefg')

  2. By passing a Credentials instance in CartoContext’s creds keyword argument. This method is more flexible:

    from cartoframes import Credentials
    creds = Credentials(username='eschbacher', key='abcdefg')
    cc = CartoContext(creds=creds)

Parameters:

  • base_url (str) – Base URL of CARTO user account. Cloud-based accounts should use the form https://{username}.carto.com (e.g., https://eschbacher.carto.com for user eschbacher) whether on a personal or multi-user account. On-premises installation users should ask their admin.
  • api_key (str) – CARTO API key.
  • creds (Credentials) – A Credentials instance can be used in place of a base_url/api_key combination.
  • session (requests.Session, optional) – requests session. See requests documentation for more information.
  • verbose (bool, optional) – Output underlying process states (True), or suppress (False, default)

Returns:

A CartoContext object that is authenticated against the user’s CARTO account.

Return type:

CartoContext

Create a CartoContext object for a cloud-based CARTO account.

import cartoframes
# if on prem, format is '{host}/user/{username}'
BASEURL = 'https://{}.carto.com'.format('your carto username')
APIKEY = 'your carto api key'
cc = cartoframes.CartoContext(BASEURL, APIKEY)


If using cartoframes with an on-premises CARTO installation, it is sometimes necessary to disable SSL verification, depending on your system configuration. You can do this using a requests Session object as follows:

import cartoframes
from requests import Session
session = Session()
session.verify = False

# on prem host (e.g., an IP address)
onprem_host = 'your on prem carto host'

cc = cartoframes.CartoContext(
    base_url='{}/user/{}'.format(onprem_host,
                                 'your carto username'),
    api_key='your carto api key',
    session=session)
write(df, table_name, temp_dir=SYSTEM_TMP_PATH, overwrite=False, lnglat=None, encode_geom=False, geom_col=None, **kwargs)

Write a DataFrame to a CARTO table.


Write a pandas DataFrame to CARTO.

cc.write(df, 'brooklyn_poverty', overwrite=True)

Scrape an HTML table from Wikipedia and send to CARTO with content guessing to create a geometry from the country column. This uses the content_guessing parameter of CARTO’s Import API.

url = ''
# retrieve first HTML table from that page
df = pd.read_html(url, header=0)[0]
# send to carto, let it guess polygons based on the 'country'
#   column. Also set privacy to 'public'
cc.write(df, 'life_expectancy',
         content_guessing=True,
         privacy='public')


A datetime64[ns] column will lose precision when a DataFrame is sent to CARTO because PostgreSQL has millisecond resolution while pandas uses nanoseconds.
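The precision note above can be seen directly in pandas (a quick standalone illustration, independent of CARTO):

```python
import pandas as pd

# A nanosecond-resolution timestamp, as stored in a datetime64[ns] column.
ts = pd.Timestamp('2017-01-01 00:00:00.123456789')

# What survives a round trip at millisecond resolution: everything below
# the millisecond is dropped.
truncated = ts.floor('ms')
print(truncated)  # 2017-01-01 00:00:00.123000
```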

  • df (pandas.DataFrame) – DataFrame to write to table_name in user CARTO account
  • table_name (str) – Table to write df to in CARTO.
  • temp_dir (str, optional) – Directory for temporary storage of data that is sent to CARTO. Defaults are defined by appdirs.
  • overwrite (bool, optional) – Behavior for overwriting table_name if it exists on CARTO. Defaults to False.
  • lnglat (tuple, optional) – lng/lat pair that can be used for creating a geometry on CARTO. Defaults to None. In some cases, geometry will be created without specifying this. See CARTO’s Import API for more information.
  • encode_geom (bool, optional) – Whether to write geom_col to CARTO as the_geom.
  • geom_col (str, optional) – The name of the column where geometry information is stored. Used in conjunction with encode_geom.
  • **kwargs

    Keyword arguments to control write operations. Options are:

    • compression to set compression for files sent to CARTO. This can speed up writes, depending on the dataset. Options are None (no compression, default) or gzip.
    • Some arguments from CARTO’s Import API. See the params listed in the documentation for more information. For example, when using content_guessing='true', a column named 'countries' with country names will be used to generate polygons for each country. Another use is setting the privacy of a dataset. To avoid unintended consequences, avoid file, url, and other similar arguments.
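To illustrate why compression='gzip' can speed up writes, here is a standalone sketch (assuming only pandas and the standard library, not the CARTOframes internals) of compressing a CSV payload before upload:

```python
import gzip

import pandas as pd

# Build a CSV payload the way a DataFrame is serialized before upload,
# then gzip it: the upload body shrinks, trading CPU time for bandwidth.
df = pd.DataFrame({'a': range(1000)})
raw = df.to_csv(index=False).encode('utf-8')
packed = gzip.compress(raw)
print(len(packed) < len(raw))  # True: this payload compresses well
```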



DataFrame indexes are changed to ordinary columns. CARTO creates an index called cartodb_id for every table that runs from 1 to the length of the DataFrame.
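The index handling described above can be sketched in plain pandas (an illustration of the behavior, not the CARTOframes implementation):

```python
import pandas as pd

# The DataFrame index becomes an ordinary column on write...
df = pd.DataFrame({'value': [10, 20, 30]}, index=['a', 'b', 'c'])
uploaded = df.reset_index()
# ...and CARTO adds a 1-based cartodb_id column alongside it.
uploaded['cartodb_id'] = range(1, len(uploaded) + 1)
print(uploaded.columns.tolist())  # ['index', 'value', 'cartodb_id']
```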


tables()

List all tables in user’s CARTO account.

Returns:list of Table
read(table_name, limit=None, decode_geom=False, shared_user=None, retry_times=3)

Read a table from CARTO into a pandas DataFrame. Column types are inferred from database types; to avoid problems with integer columns containing NA or null values, those columns are automatically retrieved as float64.
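The float64 behavior mentioned above mirrors plain pandas: the classic int64 dtype cannot hold missing values, so a nullable integer column is widened on read.

```python
import pandas as pd

# An integer column containing a null value cannot stay int64, so pandas
# (and therefore CartoContext.read) widens it to float64.
s = pd.Series([1, 2, None])
print(s.dtype)  # float64
```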
  • table_name (str) – Name of table in user’s CARTO account.
  • limit (int, optional) – Read only limit lines from table_name. Defaults to None, which reads the full table.
  • decode_geom (bool, optional) – Decodes CARTO’s geometries into a Shapely object that can be used, for example, in GeoPandas.
  • shared_user (str, optional) – If a table has been shared with you, specify the user name (schema) who shared it.
  • retry_times (int, optional) – If the read call is rate limited, number of retries to be made

Returns:

DataFrame representation of table_name from CARTO.

Return type:

pandas.DataFrame


import cartoframes
cc = cartoframes.CartoContext(BASEURL, APIKEY)
df = cc.read('acadia_biodiversity')

delete(table_name)

Delete a table in user’s CARTO account.

Parameters:table_name (str) – Name of table to delete
Returns:True if table is removed
Return type:bool
query(query, table_name=None, decode_geom=False, is_select=None)

Pull the result of an arbitrary SQL SELECT query from a CARTO account into a pandas DataFrame. This is the default behavior when is_select=True.

Can also be used to perform database operations (creating/dropping tables, adding columns, updates, etc.). In this case, you have to explicitly specify is_select=False.

This method is a helper for the CartoContext.fetch and CartoContext.execute methods. We strongly encourage you to use one of those methods depending on the type of query you want to run. If you want to get the results of a SELECT query into a pandas DataFrame, use CartoContext.fetch. For any other query that performs an operation on the CARTO database, use CartoContext.execute.

  • query (str) – Query to run against CARTO user database. This data will then be converted into a pandas DataFrame.
  • table_name (str, optional) – If set (and is_select=True), this will create a new table in the user’s CARTO account that is the result of the SELECT query provided. Defaults to None (no table created).
  • decode_geom (bool, optional) – Decodes CARTO’s geometries into a Shapely object that can be used, for example, in GeoPandas. It only works for SELECT queries when is_select=True
  • is_select (bool, optional) – Set according to the type of query performed: True for SELECT queries, False for any other query. For a SELECT query (is_select=True), the result is stored in a pandas DataFrame. For any other SQL query (is_select=False), a database operation (UPDATE, DROP, INSERT, etc.) is performed and nothing is returned. The default, is_select=None, means the method returns a DataFrame if the query starts with a SELECT clause and otherwise just executes the query and returns None.
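The is_select=None auto-detection described above can be sketched as follows (a hypothetical helper for illustration, not the CARTOframes source):

```python
# Return a DataFrame only when the query starts with a SELECT clause;
# otherwise just execute it and return None.
def looks_like_select(sql):
    return sql.lstrip().lower().startswith('select')

print(looks_like_select('SELECT * FROM my_table'))     # True
print(looks_like_select('UPDATE my_table SET c = 1'))  # False
```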

Returns:

When is_select=True and the query is a SELECT query, a pandas DataFrame representation of the supplied query; otherwise None. Pandas data types are inferred from PostgreSQL data types. For PostgreSQL date types, conversion is attempted; on failure, the data type ‘object’ is used.

Return type:

pandas.DataFrame or None

Raises:CartoException – If there is an error when executing the query


Query a table in CARTO and write a new table that is the result of the query. This query gets the 10 highest values from a table and returns a dataframe, as well as creating a new table called ‘top_ten’ in the CARTO account.

topten_df = cc.query(
    '''
      SELECT * FROM my_table
      ORDER BY value_column DESC
      LIMIT 10
    ''',
    table_name='top_ten')

This query joins points to polygons based on intersection, and aggregates by summing the values of the points in each polygon. The query returns a dataframe, with a geometry column that contains polygons and also creates a new table called ‘points_aggregated_to_polygons’ in the CARTO account.

points_aggregated_to_polygons = cc.query(
    '''
      SELECT polygons.*, sum(points.values)
      FROM polygons JOIN points
      ON ST_Intersects(points.the_geom, polygons.the_geom)
      GROUP BY polygons.the_geom, polygons.cartodb_id
    ''',
    table_name='points_aggregated_to_polygons',
    decode_geom=True)

Drops my_table

cc.query('DROP TABLE my_table', is_select=False)

Updates the column my_column in the table my_table

cc.query('UPDATE my_table SET my_column = 1', is_select=False)

map(layers=None, interactive=True, zoom=None, lat=None, lng=None, size=(800, 400), ax=None)

Produce a CARTO map visualizing data layers.


Create a map with two data Layers and one BaseMap layer:

import cartoframes
from cartoframes import Layer, BaseMap, styling
cc = cartoframes.CartoContext(BASEURL, APIKEY)
cc.map(layers=[BaseMap(),
               Layer('acadia_biodiversity',
                     color={'column': 'simpson_index',
                            'scheme': styling.tealRose(7)}),
               Layer('peregrine_falcon_nest_sites',
                     size='num_eggs',
                     color={'column': 'bird_id',
                            'scheme': styling.vivid(10)})],
       interactive=True)

Create a snapshot of a map at a specific zoom and center:

cc.map(layers=Layer('acadia_biodiversity'),
       interactive=False,
       zoom=14,
       lng=-68.3823549,
       lat=44.3036906)
  • layers (list, optional) –

    List of zero or more of the following:

    • Layer: cartoframes Layer object for visualizing data from a CARTO table. See Layer for all styling options.
    • BaseMap: Basemap for contextualizing data layers. See BaseMap for all styling options.
    • QueryLayer: Layer from an arbitrary query. See QueryLayer for all styling options.
  • interactive (bool, optional) – Defaults to True to show an interactive slippy map. Setting to False creates a static map.
  • zoom (int, optional) – Zoom level of map. Acceptable values are usually in the range 0 to 19. 0 has the entire earth on a single tile (256px square). Zoom 19 is the size of a city block. Must be used in conjunction with lng and lat. Defaults to a view to have all data layers in view.
  • lat (float, optional) – Latitude value for the center of the map. Must be used in conjunction with zoom and lng. Defaults to a view to have all data layers in view.
  • lng (float, optional) – Longitude value for the center of the map. Must be used in conjunction with zoom and lat. Defaults to a view to have all data layers in view.
  • size (tuple, optional) – List of pixel dimensions for the map. Format is (width, height). Defaults to (800, 400).
  • ax – matplotlib axis on which to draw the image. Only used when interactive is False.

Interactive maps are rendered as HTML in an iframe, while static maps are returned as matplotlib Axes objects or IPython Image.

Return type:

IPython.display.HTML or matplotlib Axes

data_boundaries(boundary=None, region=None, decode_geom=False, timespan=None, include_nonclipped=False)

Find all boundaries available for the world or a region. If boundary is specified, get all available boundary polygons for the region specified (if any). This method is especially useful for getting boundaries for a region and, together with CartoContext.data_discovery, getting tables of geometries and the corresponding raw measures: for example, to analyze how median income has changed in a region (see the examples section for more).


Find all boundaries available for Australia. The columns geom_name gives us the name of the boundary and geom_id is what we need for the boundary argument.

import cartoframes
cc = cartoframes.CartoContext('base url', 'api key')
au_boundaries = cc.data_boundaries(region='Australia')
au_boundaries[['geom_name', 'geom_id']]

Get the boundaries for Australian Postal Areas and map them.

from cartoframes import Layer
au_postal_areas = cc.data_boundaries(boundary='au.geo.POA')
cc.write(au_postal_areas, 'au_postal_areas')
cc.map(Layer('au_postal_areas'))

Get census tracts around Idaho Falls, Idaho, USA, and add median income from the US census. Without limiting the metadata, we get median income measures for each census in the Data Observatory.

cc = cartoframes.CartoContext('base url', 'api key')
# will return DataFrame with columns `the_geom` and `geom_ref`
tracts = cc.data_boundaries(
    boundary='us.census.tiger.census_tract',
    region=[-112.096642, 43.429932, -111.974213, 43.553539])
# write geometries to a CARTO table
cc.write(tracts, 'idaho_falls_tracts')
# gather metadata needed to look up median income
median_income_meta = cc.data_discovery(
    'idaho_falls_tracts',
    keywords='median income',
    boundaries='us.census.tiger.census_tract')
# get median income data and original table as new dataframe
idaho_falls_income = cc.data('idaho_falls_tracts',
                             median_income_meta)
# overwrite existing table with newly-enriched dataframe
cc.write(idaho_falls_income, 'idaho_falls_tracts', overwrite=True)
  • boundary (str, optional) – Boundary identifier for the boundaries that are of interest. For example, US census tracts have a boundary ID of us.census.tiger.census_tract, and Brazilian Municipios have an ID of br.geo.municipios. Find IDs by running CartoContext.data_boundaries without any arguments, or by looking in the Data Observatory catalog.
  • region (str, optional) –

    Region where boundary information or, if boundary is specified, boundary polygons are of interest. region can be one of the following:

    • table name (str): Name of a table in user’s CARTO account
    • bounding box (list of float): List of four values (two lng/lat pairs) in the following order: western longitude, southern latitude, eastern longitude, and northern latitude. For example, Switzerland fits in [5.9559111595,45.8179931641,10.4920501709,47.808380127]
  • timespan (str, optional) – Specific timespan to get geometries from. Defaults to use the most recent. See the Data Observatory catalog for more information.
  • decode_geom (bool, optional) – Whether to return the geometries as Shapely objects or keep them encoded as EWKB strings. Defaults to False.
  • include_nonclipped (bool, optional) – Optionally include non-shoreline-clipped boundaries. These boundaries are the raw boundaries provided by, for example, US Census Tiger.
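As a quick sanity check on the bounding-box order given above (western longitude, southern latitude, eastern longitude, northern latitude), using the Switzerland example values:

```python
# Unpack in the documented order and confirm the box is well-formed.
bbox = [5.9559111595, 45.8179931641, 10.4920501709, 47.808380127]
west, south, east, north = bbox
print(west < east and south < north)  # True
```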

Returns: If boundary is specified, all available boundary polygons and accompanying geom_refs in region (or the world if region is None or not specified). If boundary is not specified, a DataFrame of all available boundaries in region (or the world if region is None).

Return type:

pandas.DataFrame
data_discovery(region, keywords=None, regex=None, time=None, boundaries=None, include_quantiles=False)

Discover Data Observatory measures. This method returns the full Data Observatory metadata model for each measure or measures that match the conditions from the inputs. The full metadata in each row uniquely defines a measure based on the timespan, geographic resolution, and normalization (if any). Read more about the metadata response in Data Observatory documentation.

Internally, this method finds all measures in region that match the conditions set in keywords, regex, time, and boundaries (if any of them are specified). Then, if boundaries is not specified, a geographical resolution for that measure will be chosen subject to the type of region specified:

  1. If region is a table name, then a geographical resolution that is roughly equal to region size / number of subunits.
  2. If region is a country name or bounding box, then a geographical resolution will be chosen roughly equal to region size / 500.
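The two cases above can be sketched as a tiny hypothetical helper (an illustration of the heuristic, not the CARTOframes implementation):

```python
# Target subunit size that guides the automatic boundary choice.
def target_subunit_size(region_size, num_subunits=None):
    if num_subunits is not None:
        # case 1: region is a table -> region size / number of subunits
        return region_size / num_subunits
    # case 2: region is a country name or bounding box -> region size / 500
    return region_size / 500.0

print(target_subunit_size(1000.0, 50))  # 20.0
print(target_subunit_size(1000.0))      # 2.0
```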

Since a given measure is available at some geographic resolutions and not others, different geographical resolutions for different measures are oftentimes returned.


To remove the guesswork in how geographical resolutions are selected, specify one or more boundaries in boundaries. See the boundaries section for each region in the Data Observatory catalog.

The metadata returned from this method can then be used to create raw tables or to augment an existing table with these measures using CartoContext.data. For the full list of available measures, see the Data Observatory catalog. When working with the metadata DataFrame returned from this method, be careful to only remove rows, not columns, as CartoContext.data generally needs the full metadata.


Narrowing down a discovery query using the keywords, regex, and time filters is important for getting a manageable metadata set. Besides there being a large number of measures in the Data Observatory, a metadata response includes acceptable combinations of measures with denominators (normalization and density), as well as the same measure from other years.

For example, setting the region to be United States counties with no filter values set will result in many thousands of measures.


Get all European Union measures that mention freight.

meta = cc.data_discovery('European Union',
                         keywords='freight')
  • region (str or list of float) –

    Information about the region of interest. region can be one of three types:

    • region name (str): Name of region of interest. Acceptable values are limited to: ‘Australia’, ‘Brazil’, ‘Canada’, ‘European Union’, ‘France’, ‘Mexico’, ‘Spain’, ‘United Kingdom’, ‘United States’.
    • table name (str): Name of a table in user’s CARTO account with geometries. The region will be the bounding box of the table.


      If a table name is also a valid Data Observatory region name, the Data Observatory name will be chosen over the table.

    • bounding box (list of float): List of four values (two lng/lat pairs) in the following order: western longitude, southern latitude, eastern longitude, and northern latitude. For example, Switzerland fits in [5.9559111595,45.8179931641,10.4920501709,47.808380127]


    Geometry levels are generally chosen by subdividing the region into the next smallest administrative unit. To override this behavior, specify the boundaries flag. For example, set boundaries to 'us.census.tiger.census_tract' to choose US census tracts.

  • keywords (str or list of str, optional) – Keyword or list of keywords in measure description or name. Response will be matched on all keywords listed (boolean or).
  • regex (str, optional) – A regular expression to search the measure descriptions and names. Note that this relies on PostgreSQL’s case insensitive operator ~*. See PostgreSQL docs for more information.
  • boundaries (str or list of str, optional) – Boundary or list of boundaries that specify the measure resolution. See the boundaries section for each region in the Data Observatory catalog.
  • include_quantiles (bool, optional) – Include quantiles calculations which are a calculation of how a measure compares to all measures in the full dataset. Defaults to False. If True, quantiles columns will be returned for each column which has it pre-calculated.

Returns:

A dataframe of the complete metadata model for specific measures based on the search parameters.

Return type:

pandas.DataFrame
  • ValueError – If region is a list and does not consist of four elements, or if region is not an acceptable region
  • CartoException – If region is not a table in user account
data(table_name, metadata, persist_as=None, how='the_geom')

Get an augmented CARTO dataset with Data Observatory measures. Use CartoContext.data_discovery to search for available measures, or see the full Data Observatory catalog. Optionally persist the data as a new table.


Get a DataFrame with Data Observatory measures based on the geometries in a CARTO table.

cc = cartoframes.CartoContext(BASEURL, APIKEY)
median_income = cc.data_discovery('transaction_events',
                                  regex='.*median income.*',
                                  time='2011 - 2015')
df = cc.data('transaction_events', median_income)

Pass in cherry-picked measures from the Data Observatory catalog. The rest of the metadata will be filled in, but it’s important to specify the geographic level as this will not show up in the column name.

median_income = [{'numer_id': 'us.census.acs.B19013001',
                  'geom_id': 'us.census.tiger.block_group',
                  'numer_timespan': '2011 - 2015'}]
df = cc.data('transaction_events', median_income)
  • table_name (str) – Name of table on CARTO account that Data Observatory measures are to be added to.
  • metadata (pandas.DataFrame) – List of all measures to add to table_name. See CartoContext.data_discovery outputs for a full list of metadata columns.
  • persist_as (str, optional) – Output the results of augmenting table_name to persist_as as a persistent table on CARTO. Defaults to None, which will not create a table.
  • how (str, optional) – Not fully implemented. Column name for identifying the geometry from which to fetch the data. Defaults to the_geom, which results in measures that are spatially interpolated (e.g., a neighborhood boundary’s population will be calculated from underlying census tracts). Specifying a column that has the geometry identifier (for example, GEOID for US Census boundaries), results in measures directly from the Census for that GEOID but normalized how it is specified in the metadata.

Returns:

A DataFrame representation of table_name which has new columns for each measure in metadata.

Return type:

pandas.DataFrame
  • NameError – If the columns in table_name are in the suggested_name column of metadata.
  • ValueError – If metadata object is invalid or empty, or if the number of requested measures exceeds 50.
  • CartoException – If user account consumes all of Data Observatory quota