Bee Maps Data API

APIs to consume Street Level Imagery, Road Features data, AI Event Videos, and more.

What is the Bee Maps Data API?

APIs to consume street-level intelligence data and incentivize the collection of more data.

What APIs Bee Maps Provides

  • Street-level imagery: Geolocated frames with timestamps, GPS accuracy metrics, and IMU data

  • Map Features: Detected road objects (speed limits, stop signs, turn restrictions, fire hydrants, lane lines) with confidence scores and precise positioning

  • AI Event Videos: Video clips of notable driving events (harsh braking, speeding, swerving, high g-force, fast acceleration) for training driving models and world models

  • Burst Locations: Incentivize drivers to map specific areas on demand

Links to the full reference for each API are in the API Reference Documentation section below.

Key Differentiators

  • Fresh data: Days to weeks old, not months/years like traditional providers

  • Global coverage: 37% of the world's roads mapped

  • Programmable: Query any geometry (point, line, polygon) with flexible filters

  • Flexible licensing: Build derived datasets from imagery

  • On-demand collection: Request data in specific locations via an API

Getting Started for Developers

Generate API Key

  1. Log in to your dashboard on Beemaps.com

  2. From the Bee Maps Dashboard, go to Developers → API Key

  3. Click Get API Key

API Reference Documentation

For complete API documentation including endpoints, request/response schemas, and code examples, see the interactive API documentation:

API Documentation (Scalar)

API Playground

The API Playground lets you interactively explore and test Bee Maps' Developer APIs without leaving your browser. You can authenticate, define query geometry, retrieve map features or imagery, and view the live API responses – all in one place.

This is ideal for developers who want to experiment with the API before integration.

Quick Start:

  1. Log in to your dashboard on Beemaps.com

  2. From the Bee Maps Dashboard, go to Developers → Playground

  3. Click Get API Key

  4. Enter coordinates (in [lon, lat] format) and a radius

  5. Enable Retrieve Imagery or Retrieve Map Features as needed

  6. Click Submit to see results in the response panel
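
The same flow can be scripted. Below is a minimal Python sketch of the Playground request described above; the base URL, auth header name, and request field names are assumptions, so verify them against the Scalar API reference before use.

```python
import requests

BASE_URL = "https://beemaps.example/api"   # placeholder: use the base URL from the API docs
API_KEY = "YOUR_API_KEY"                   # generated via Developers -> API Key
headers = {"Authorization": f"Bearer {API_KEY}"}  # auth header name is an assumption

payload = {
    # Coordinates are [longitude, latitude], never lat/lon.
    "geometry": {"type": "Point", "coordinates": [-122.4194, 37.7749]},
    "radius": 100,                 # metres around the point (illustrative value)
    "retrieveImagery": True,       # the same toggles the Playground exposes (names assumed)
    "retrieveMapFeatures": True,
}

resp = requests.post(f"{BASE_URL}/map-data", json=payload, headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json())
```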

API Products

Map Features API

  • Detected road features: speed limits, turn restrictions, highway signs, parking restrictions, fire hydrants, lane lines, and more

  • Each feature includes precise positioning, azimuth (direction facing), and confidence score

  • Lane line data available for determining road width and lane count

  • Currently available in US, EU, and UK

Street Level Imagery API

  • Geolocated street-level imagery frames with timestamps, GPS accuracy metrics, and IMU data

  • Globally available - view the coverage map

AI Event Videos API

  • Video clips of notable driving events for training autonomous driving models and world models

  • Event triggers: harsh braking, speeding, swerving, high g-force maneuvers, fast acceleration

  • Each video is paired with event metadata—labeled "here's what happened" data for supervised learning

  • Sourced from the global fleet of Bee cameras across diverse road conditions, weather, and geographies

Burst Locations API

  • Incentivize drivers to map specific areas on demand

  • When you create a burst, it goes live in the Bee App immediately. Drivers within range receive a push notification alerting them to the incentivized location.

  • Coverage timing depends on local driver density—urban areas often see results within hours, while rural areas may take days.

When to Use Each Endpoint

  • Imagery and map features together: POST /map-data (unified endpoint, most flexible)

  • Map Features only: POST /mapFeatures/poly (slightly faster when you only need features)

  • Imagery for a specific week: POST /imagery/poly (historical image queries)

  • Latest imagery only: POST /latest/poly (most recent images in an area)

  • AI Event Videos: event video clips by category (endpoint listed in the API reference)

  • Incentivize new coverage: POST /burst/create (create a new Burst at a specific location)

  • Status of a Bursted location: GET /bursts (determine whether drivers mapped your Bursted location)

  • Credit balance: GET /balance (determine credit usage)

AI Agent Integration

Overview

Bee Maps provides a Model Context Protocol (MCP) server that enables AI assistants like Claude, ChatGPT, and Cursor to query road intelligence data using natural language.

The MCP server exposes Bee Maps road intelligence APIs as tools that AI agents can invoke automatically based on user requests like "show me speed limits on Main Street" or "check road conditions along my delivery route."

MCP Setup

Claude Desktop

Add to your Claude Desktop configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

Windows: %APPDATA%\Claude\claude_desktop_config.json

Cursor IDE

Add to your Cursor MCP settings (Settings → MCP):

How Agents Should Use the API

Endpoint Selection

  • "What's at this location?" → POST /map-data (need both features and imagery)

  • "Show me speed limits on this route" → POST /map-data with LineString (route-based query)

  • "Get all stop signs in this area" → POST /mapFeatures/poly (features only, polygon query)

  • "Latest photos of this address" → POST /latest/poly (most recent imagery)

  • "Historical images from last month" → POST /imagery/poly with week param (specific time period)

  • "I need fresh coverage here" → POST /burst/create (request new data collection)

  • "Did my burst get mapped?" → GET /bursts (check burst status)

Constraints Agents Must Follow

  1. Break large areas into smaller queries - Don't try to query an entire city at once; each query area must be smaller than 5 km²

  2. Coordinate order is [longitude, latitude] - Not lat/lon

  3. Check balance periodically - Call GET /balance during multi-query workflows

  4. Image URLs expire - Download images promptly after receiving URLs

  5. Burst timing varies - Urban areas may get coverage in hours, rural areas may take days

Example Agent Workflow

User: "Check if there are any parking restrictions near 123 Main St, San Francisco"

Agent should:

  1. Geocode the address to get coordinates

  2. Call POST /map-data with Point geometry and ~50m radius

  3. Filter results for parking-restriction-sign class

  4. Also return recent imagery for visual verification

  5. Summarize findings for the user
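
A rough Python sketch of this workflow is below. The geocoding step is stubbed out, and the request/response field names and auth header are assumptions to verify against the Scalar API reference; only the endpoint, coordinate order, ~50 m radius, and the parking-restriction-sign class come from this page.

```python
import requests

BASE_URL = "https://beemaps.example/api"            # placeholder base URL
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # auth header name is an assumption

def geocode(address: str) -> list[float]:
    """Stub: resolve an address to [longitude, latitude] with your geocoder of choice."""
    return [-122.4194, 37.7749]

coords = geocode("123 Main St, San Francisco")

resp = requests.post(
    f"{BASE_URL}/map-data",
    json={
        "geometry": {"type": "Point", "coordinates": coords},
        "radius": 50,                # ~50 m around the address
        "retrieveImagery": True,     # keep imagery for visual verification
        "retrieveMapFeatures": True,
    },
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

# Keep only parking-restriction signs (class name taken from the workflow above).
restrictions = [f for f in data.get("features", [])
                if f.get("class") == "parking-restriction-sign"]
print(f"Found {len(restrictions)} parking restriction sign(s) near the address.")
```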

Agent-Optimized OpenAPI Descriptions

When building AI agents that use the Bee Maps API, use these descriptions in your tool definitions:

/map-data

Query map features and/or street-level imagery within a geographic area. Use this for: checking road conditions, validating locations, getting both features and imagery together. Supports Point (with radius), LineString (with buffer), or Polygon geometry. Coordinates must be [longitude, latitude] order.

/mapFeatures/poly

Query detected road features (speed limits, stop signs, turn restrictions, fire hydrants, lane lines) within a polygon. Use when you only need features, not imagery. Returns precise positions, confidence scores, and feature-specific properties.

/latest/poly

Get the most recent street-level imagery within a polygon. Use for current condition monitoring or when you want the freshest available data regardless of specific date.

/burst/create

Create incentivized mapping requests. Use when existing coverage is stale or missing. Drivers get notified and paid to map the area. Urban areas: hours to days. Rural: days to weeks.

/balance

Check remaining API credits. Call periodically during multi-query workflows.
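
One way to reuse these descriptions in an agent is a plain list of tool definitions, as in the framework-agnostic sketch below. The tool names are invented for illustration; adapt the structure to whatever tool schema your agent SDK expects.

```python
# Framework-agnostic tool definitions reusing the endpoint descriptions above.
BEE_MAPS_TOOLS = [
    {
        "name": "map_data_query",            # tool names are invented for this sketch
        "endpoint": "POST /map-data",
        "description": (
            "Query map features and/or street-level imagery within a geographic area. "
            "Supports Point (with radius), LineString (with buffer), or Polygon geometry. "
            "Coordinates must be [longitude, latitude] order."
        ),
    },
    {
        "name": "map_features_in_polygon",
        "endpoint": "POST /mapFeatures/poly",
        "description": "Query detected road features within a polygon when imagery is not needed.",
    },
    {
        "name": "latest_imagery_in_polygon",
        "endpoint": "POST /latest/poly",
        "description": "Get the most recent street-level imagery within a polygon.",
    },
    {
        "name": "create_burst",
        "endpoint": "POST /burst/create",
        "description": "Create an incentivized mapping request when coverage is stale or missing.",
    },
    {
        "name": "check_balance",
        "endpoint": "GET /balance",
        "description": "Check remaining API credits during multi-query workflows.",
    },
]
```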

Use Cases

Fleet Operations & Route Validation

Logistics companies verify road conditions before dispatching vehicles—check for construction, changed speed limits, or new traffic patterns along delivery routes.

What you get:

  • Speed limits along your route

  • Turn restrictions that affect routing

  • Recent imagery showing current road conditions

APIs used: Street Level Imagery API, Map Features API


Robotaxi & Delivery Dropoff Validation

Autonomous vehicle and delivery services verify that pickup/dropoff locations are accessible—check for fire hydrants, parking restrictions, or obstacles.

What you get:

  • Fire hydrant locations (no stopping zones)

  • Parking restriction signs and time limits

  • Lane markings to determine whether the location is in a traffic lane or on the shoulder

  • Recent imagery for visual verification

APIs used: Street Level Imagery API, Map Features API


AI Model Training Data

Computer vision teams collect labeled street-level imagery with known map features for model training.

What you get:

  • Images with detected objects and bounding boxes

  • Confidence scores for each detection

  • Feature-specific properties (speed limit values, sign types)

  • Camera intrinsic parameters for image undistortion

APIs used: Street Level Imagery API, Map Features API


POI Validation

Local search and mapping companies verify that business listings match physical reality—check storefronts, signage, and street addresses.

What you get:

  • Street-level imagery from multiple angles

  • Timestamps showing data freshness

  • GPS coordinates with accuracy metrics

APIs used: Street Level Imagery API


Government Asset Inventory

Cities inventory street assets—fire hydrants, traffic signals, signage—for maintenance planning.

What you get:

  • Geolocated positions of all detected assets

  • First/last seen timestamps for change detection

  • Confidence scores for data quality assessment

APIs used: Map Features API


Insurance Assessment

Insurance companies assess road conditions and property visibility for claims or underwriting.

What you get:

  • Recent imagery of locations

  • Road feature data for risk assessment

  • Data freshness timestamps

APIs used: Street Level Imagery API


POI Verification

Retail analytics and local search companies verify whether businesses are still operating at listed locations—check for closed storefronts, changed signage, or vacant properties.

What you get:

  • Recent street-level imagery showing storefront status

  • Multiple angles and timestamps to confirm consistency

  • Historical imagery to track changes over time

APIs used: Street Level Imagery API


Real Estate Situational Awareness

Investors and buyers understand what's actually happening around a property at different times—traffic patterns, nearby businesses, street activity, and neighborhood character.

What you get:

  • Street-level imagery from multiple time periods

  • Road features showing traffic patterns and restrictions

  • Visual context of surrounding area and businesses

  • Freshness timestamps to understand data recency

APIs used: Street Level Imagery API, Map Features API


Map Freshness

Mapping and navigation companies keep their maps current with locally-sourced, recent data rather than relying on infrequent survey vehicles.

What you get:

  • Fresh imagery (days/weeks old, not months/years)

  • Detected road features with timestamps

  • Change indicators via first/last seen metadata

  • On-demand collection via Bursts for priority areas

APIs used: Street Level Imagery API, Map Features API, Burst Locations API


Change Detection

Monitor specific objects or road features over time—track new speed limits, changed signage, road modifications, or infrastructure updates.

What you get:

  • First seen/last seen timestamps for all detected features

  • Historical imagery to compare changes visually

  • Confidence scores indicating detection reliability

  • Precise positioning for spatial tracking

APIs used: Street Level Imagery API, Map Features API

Use Case Deep Dives

Fleet Operations: Route Condition Validation

Scenario

A logistics company needs to validate road conditions along a planned delivery route before dispatching vehicles.

Workflow

  1. Get route coordinates from your routing engine

  2. Query map features along the route using LineString geometry

  3. Analyze results for:

    • Speed limit changes

    • Turn restrictions

    • Traffic signals

  4. Get recent imagery to visually verify conditions

  5. Flag any concerns to dispatchers

Sample Query
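
A possible shape for this query, in Python. The LineString geometry, buffer, and [longitude, latitude] order come from this page; the base URL, auth header, field names, and feature class strings are assumptions to confirm against the Scalar API reference.

```python
import requests

BASE_URL = "https://beemaps.example/api"            # placeholder base URL
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # auth header name is an assumption

route = [  # [longitude, latitude] pairs from your routing engine
    [-122.4194, 37.7749],
    [-122.4089, 37.7837],
    [-122.3996, 37.7890],
]

resp = requests.post(
    f"{BASE_URL}/map-data",
    json={
        "geometry": {"type": "LineString", "coordinates": route},
        "buffer": 25,                # metres either side of the route (illustrative)
        "retrieveImagery": True,
        "retrieveMapFeatures": True,
    },
    headers=headers,
    timeout=60,
)
resp.raise_for_status()
features = resp.json().get("features", [])

# Flag the classes dispatch cares about (class strings are illustrative, not documented here).
of_interest = {"speed-limit-sign", "turn-restriction-sign", "traffic-signal"}
flags = [f for f in features if f.get("class") in of_interest]
print(f"{len(flags)} feature(s) to review along the route.")
```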

What to Look For

  • Speed limits: Plan for variable speed zones

  • Turn restrictions: Ensure route doesn't require prohibited turns

  • Recent imagery timestamps: Confirm data freshness

  • Construction indicators: Visible in imagery


Robotaxi: Pickup / Dropoff Zone Validation

Scenario

An autonomous vehicle service needs to verify a dropoff location is safe and legal.

Workflow

  1. Query the location with Point geometry and 30m radius

  2. Check for blockers:

    • Fire hydrants (no stopping within 15ft)

    • Parking restriction signs

    • Bus stops, crosswalks

  3. Analyze lane lines to determine if location is curbside

  4. Review imagery for obstacles

Sample Query
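
A sketch of the Point query with a 30 m radius. As elsewhere, the base URL, auth header, and body field names are assumptions; only the endpoint, geometry type, radius, and coordinate order come from this page.

```python
import requests

BASE_URL = "https://beemaps.example/api"            # placeholder base URL
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # auth header name is an assumption

dropoff = [-122.4312, 37.7739]  # [longitude, latitude] of the proposed dropoff point

resp = requests.post(
    f"{BASE_URL}/map-data",
    json={
        "geometry": {"type": "Point", "coordinates": dropoff},
        "radius": 30,               # metres, per the workflow above
        "retrieveImagery": True,
        "retrieveMapFeatures": True,
    },
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
result = resp.json()
```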

Decision Logic
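
Illustrative decision logic for the blockers listed above, written against the response of the sample query. The class names and response fields (features, class, confidence) are assumed, and the confidence threshold is a judgment call rather than a documented value.

```python
# Classes treated as blockers (names assumed) and a confidence floor (a judgment call).
BLOCKER_CLASSES = {"fire-hydrant", "parking-restriction-sign", "bus-stop", "crosswalk"}
MIN_CONFIDENCE = 0.7

def is_dropoff_allowed(result: dict) -> bool:
    """Return False if any high-confidence blocker was detected near the point."""
    for feature in result.get("features", []):
        if (feature.get("class") in BLOCKER_CLASSES
                and feature.get("confidence", 0.0) >= MIN_CONFIDENCE):
            return False
    return True

# Example with a response shaped like the sample query above:
example = {"features": [{"class": "fire-hydrant", "confidence": 0.92}]}
print(is_dropoff_allowed(example))  # False: a high-confidence hydrant blocks the zone
```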


AI Training: Labeled Imagery Collection

Scenario

A computer vision team needs training data for a stop sign detector.

Workflow

  1. Query areas known to have stop signs (most residential neighborhoods qualify)

  2. Filter features by class and confidence threshold

  3. Download images using the signed URLs

  4. Use bounding box coordinates (x, y, w, h) as labels

  5. Apply camera intrinsics from /devices for undistortion

Sample Query
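
A sketch of the features-only polygon query with a class filter and confidence threshold. The filter parameter names (classes, minConfidence) and the class string are assumptions; the endpoint, geometry type, and the idea of filtering by class and confidence come from this page.

```python
import requests

BASE_URL = "https://beemaps.example/api"            # placeholder base URL
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # auth header name is an assumption

polygon = [[  # one closed [longitude, latitude] ring; keep the area well under 5 km²
    [-122.447, 37.742], [-122.437, 37.742],
    [-122.437, 37.750], [-122.447, 37.750],
    [-122.447, 37.742],
]]

resp = requests.post(
    f"{BASE_URL}/mapFeatures/poly",
    json={
        "geometry": {"type": "Polygon", "coordinates": polygon},
        "classes": ["stop-sign"],   # class filter (parameter and class names assumed)
        "minConfidence": 0.8,       # confidence threshold (parameter name assumed)
    },
    headers=headers,
    timeout=60,
)
resp.raise_for_status()
detections = resp.json().get("features", [])
print(f"{len(detections)} stop-sign detection(s) returned.")
```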

Processing Results
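
A sketch of turning those detections into a labeled dataset: download each signed image URL promptly (the URLs expire) and store the (x, y, w, h) bounding box as the label. The response field names (imageUrl, imageId, bbox) are assumptions about the payload shape, and the undistortion step using camera intrinsics from /devices is left out.

```python
import json
import pathlib
import requests

def save_dataset(detections: list[dict], out_dir: str = "stop_sign_dataset") -> None:
    """Download signed image URLs and write one labels.json with (x, y, w, h) boxes."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    labels = []
    for det in detections:
        image_url = det.get("imageUrl")      # signed URL; download promptly, it expires
        if not image_url:
            continue
        image_id = det.get("imageId", "frame")
        img = requests.get(image_url, timeout=60)
        img.raise_for_status()
        (out / f"{image_id}.jpg").write_bytes(img.content)
        labels.append({"image": f"{image_id}.jpg", "bbox": det.get("bbox")})  # (x, y, w, h)
    (out / "labels.json").write_text(json.dumps(labels, indent=2))

# Usage with the detections returned by the sample query above:
# save_dataset(detections)
```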


Government: Asset Inventory

Scenario

A city needs to inventory all fire hydrants for maintenance planning.

Workflow

  1. Divide city into query-sized tiles

  2. Query each tile for map features

  3. Filter for target asset class

  4. Aggregate results into a database

  5. Track first/last seen for change detection

Tiling Large Areas
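
Below is a plain geometry helper for splitting a city bounding box into tiles that stay well under the 5 km² query limit. Nothing here is Bee Maps specific; the tile size is just a convenient default.

```python
import math
from typing import Iterator

def tile_bbox(min_lon: float, min_lat: float, max_lon: float, max_lat: float,
              tile_km: float = 1.5) -> Iterator[list[list[list[float]]]]:
    """Yield closed Polygon rings ([longitude, latitude]) covering the bounding box."""
    lat_step = tile_km / 111.0                                      # ~111 km per degree of latitude
    lon_step = tile_km / (111.0 * math.cos(math.radians(min_lat)))  # longitude degrees shrink with latitude
    lat = min_lat
    while lat < max_lat:
        lon = min_lon
        while lon < max_lon:
            lon2, lat2 = min(lon + lon_step, max_lon), min(lat + lat_step, max_lat)
            yield [[[lon, lat], [lon2, lat], [lon2, lat2], [lon, lat2], [lon, lat]]]
            lon += lon_step
        lat += lat_step

tiles = list(tile_bbox(-122.52, 37.70, -122.35, 37.83))  # rough San Francisco extent
print(f"{len(tiles)} tiles of at most {1.5 * 1.5:.2f} km² each")
```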

Aggregating Results
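
A sketch of merging per-tile responses into one asset inventory, keyed on a feature identifier and widening the first/last seen window for change detection. The field names (id, class, firstSeen, lastSeen) are assumptions, and timestamps are assumed to be ISO 8601 strings so plain string comparison orders them correctly.

```python
def aggregate_hydrants(tile_responses: list[dict]) -> dict[str, dict]:
    """Merge feature lists from many tile queries into one inventory keyed by feature id."""
    inventory: dict[str, dict] = {}
    for response in tile_responses:
        for feature in response.get("features", []):
            if feature.get("class") != "fire-hydrant":   # target asset class
                continue
            fid = feature.get("id")
            if fid is None:
                continue
            existing = inventory.get(fid)
            if existing is None:
                inventory[fid] = dict(feature)
            else:
                # Widen the observation window; ISO 8601 strings compare correctly as text.
                existing["firstSeen"] = min(existing["firstSeen"], feature["firstSeen"])
                existing["lastSeen"] = max(existing["lastSeen"], feature["lastSeen"])
    return inventory
```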


On-Demand Collection with Bursts

Scenario

You need fresh data for a location that has stale or no coverage.

Workflow

  1. Check existing coverage with /latest/poly

  2. If stale or missing, create a burst with /burst/create

  3. Monitor burst status with GET /bursts

  4. Query new imagery once isHit becomes true
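
A sketch of that loop in Python: create the burst, then poll GET /bursts until isHit flips to true. The endpoints and the isHit flag come from this page; the base URL, auth header, request body, and response field names are assumptions.

```python
import time
import requests

BASE_URL = "https://beemaps.example/api"            # placeholder base URL
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # auth header name is an assumption

create = requests.post(
    f"{BASE_URL}/burst/create",
    json={
        "location": [-122.4194, 37.7749],  # [longitude, latitude] (field name assumed)
        "radius": 250,                     # metres (illustrative)
    },
    headers=headers,
    timeout=30,
)
create.raise_for_status()
burst_id = create.json().get("id")          # response field name assumed

# Poll occasionally; urban areas often complete within hours, rural areas can take days.
while True:
    status = requests.get(f"{BASE_URL}/bursts", headers=headers, timeout=30)
    status.raise_for_status()
    mine = next((b for b in status.json().get("bursts", []) if b.get("id") == burst_id), None)
    if mine and mine.get("isHit"):
        print("Burst mapped - query fresh imagery with /latest/poly.")
        break
    time.sleep(3600)                        # check hourly
```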

There is also a companion app, built to create and manage Bursts, that automatically posts Burst locations and tracks their status.
