Bee Edge AI

Program the Bee camera to collect the data you want from the physical world.

Build and deploy custom AI workloads on the Bee's edge computing platform. Write Python modules, push them OTA (over the air) to Bee devices, and stream results to your cloud.

Overview

Edge Modules are software programs that run on the Bee camera device to collect the data you want from the physical world. They have access to all onboard sensors and can run custom ML models alongside the native Map AI stack.
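
The module interface itself is not spelled out on this page, so the sketch below is illustrative only: it assumes a hypothetical bee_edge SDK with a Module base class, a per-frame callback, and run_model/publish helpers, and shows the general shape of an Edge Module that emits structured JSON for each detection.

```python
# Sketch of an Edge Module. The bee_edge package, class names, and callback
# signature are assumptions for illustration; the real interface comes from
# the Bee Maps console / SDK documentation.
from bee_edge import Frame, Module  # hypothetical SDK


class SpeedLimitSigns(Module):
    """Detect speed limit signs and emit structured JSON events."""

    def on_frame(self, frame: Frame) -> None:
        # Run a custom detection model on the onboard NPU (model name is illustrative).
        detections = self.run_model("speed_limit_v1", frame.image)
        for det in detections:
            self.publish({
                "label": det.label,
                "confidence": det.confidence,
                "bbox": det.bbox,                 # pixel coordinates in the frame
                "location": frame.gps.as_dict(),  # lat/lon/alt at capture time
                "timestamp": frame.timestamp,
            })
```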

Capabilities

  • Sensors: 12.3MP camera, stereo depth imagery, GPS, IMU, accelerometer

  • Compute: 5.1 TOPS NPU, runs inference offline

  • Deployment: OTA via Bee Maps infrastructure

  • Targeting: Country, state/province, metro, or city

  • Output: Structured JSON (e.g. detected objects), imagery, imagery and depth, video, telemetry

Sensor Access

Edge Modules can access all onboard sensors:

  • Camera: 12.3MP RGB frames at 30 FPS

  • Depth: Stereo depth imagery with distance estimation

  • GPS: Latitude, longitude, altitude, speed, heading

  • IMU: Accelerometer and gyroscope (6-axis)
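
As a rough illustration of how a module might read these sensors inside its per-frame callback, here is a sketch; the accessor names (frame.image, frame.depth, frame.gps, frame.imu, depth.at) are assumptions, not the documented SDK surface.

```python
# Illustrative only: the accessor names below are assumptions about the
# on-device SDK, not its documented surface.
def on_frame(self, frame):    # per-frame callback in a Module subclass
    image = frame.image       # 12.3MP RGB frame from the 30 FPS stream
    depth = frame.depth       # stereo depth map with per-pixel distance
    fix = frame.gps           # latitude, longitude, altitude, speed, heading
    motion = frame.imu        # 6-axis accelerometer + gyroscope samples

    # Example: read the estimated distance (in metres) at a pixel of interest.
    distance_m = depth.at(x=1024, y=768)
```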

Running Custom Models

Edge Modules support two types of custom models:

  • Detection Models: Detect and locate objects in a scene, e.g. "a speed limit sign exists at these coordinates"

  • Classification Models: Answer classification questions about a scene, e.g. "is there a baby stroller in the image?"
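
The difference is easiest to see in the shape of the results. The snippet below uses hypothetical run_detector/run_classifier helpers purely to contrast the two: a detection result carries per-object coordinates, while a classification result is a single answer about the frame.

```python
# Hypothetical helpers; shown only to contrast the two result shapes.
detections = run_detector("speed_limit_v1", frame.image)
# -> [{"label": "speed_limit_25", "confidence": 0.91, "bbox": [x, y, w, h]}, ...]

answer = run_classifier("stroller_present_v1", frame.image)
# -> {"label": "stroller", "confidence": 0.87}
```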

Geographic Targeting

Deploy modules to specific regions using the Bee Maps console.

Targeting Options

  • Country: All devices in a country

  • State/Province: All devices in a state or province

  • Metro: All devices in a metropolitan area

  • City: All devices in a given city, e.g. Santa Monica, CA

Devices receive your module only when operating within targeted regions.
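
No public targeting schema is shown here, so the example below is purely illustrative: a hypothetical targeting configuration written as a Python dict, combining the levels listed above. The actual settings are made in the Bee Maps console targeting UI.

```python
# Hypothetical targeting configuration, written as a Python dict for
# illustration; the actual settings are made in the Bee Maps console.
targeting = {
    "country": "US",
    "regions": [
        {"level": "state", "value": "CA"},
        {"level": "metro", "value": "Los Angeles"},
        {"level": "city", "value": "Santa Monica"},
    ],
}
```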

Data Offload

All Edge Module output streams to your cloud via Bee Connectivity Services. Alternatively, you can upload to Bee Maps and then use our API to consume the data.
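
If you point output at your own cloud, you need an HTTPS endpoint that accepts the module's JSON payloads. Below is a minimal sketch using Flask; the URL path and payload fields are assumptions chosen to match the earlier examples, not a Bee-defined contract.

```python
# Minimal sketch of an ingest endpoint for Edge Module output.
# The URL path and payload fields are assumptions; match them to the
# schema your module publishes. Requires: pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)


def store(event: dict) -> None:
    """Persist an event; replace with your database or queue writer."""
    print("received", event)


@app.route("/bee/detections", methods=["POST"])
def ingest():
    event = request.get_json(force=True)
    # e.g. {"label": "speed_limit_25", "confidence": 0.91, "location": {...}}
    store(event)
    return jsonify({"status": "ok"})


if __name__ == "__main__":
    app.run(port=8080)
```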

Connectivity Channels

The Bee uses two offload channels to optimize bandwidth and latency:

  • LTE: Real-time critical data and small payloads. Always on, immediate delivery.

  • WiFi: Bulk imagery and large payloads. Batched delivery when connected (typically overnight).

Output Configuration

Configure what data gets sent and when.

  • Detections (JSON): ~1 KB per event, delivered in real time via LTE

  • Frame crop: ~50 KB, delivered in real time via LTE

  • Full frame (12.3MP): ~2 MB, usually batched via WiFi

  • Depth crop: variable size, batched via WiFi

  • Video clip: variable size, batched via WiFi
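
As an illustration of the choices above, here is a hypothetical output configuration expressed as a Python dict; the real options are chosen in the Bee Maps console, so treat the keys and values as placeholders.

```python
# Hypothetical output configuration; the real options are chosen in the
# Bee Maps console, so treat these keys and values as placeholders.
output_config = {
    "endpoint": "https://example.com/bee/detections",   # your ingest URL
    "include": {
        "detections_json": {"channel": "lte", "delivery": "realtime"},
        "frame_crop": {"channel": "lte", "delivery": "realtime"},
        "full_frame": {"channel": "wifi", "delivery": "batched"},
        "depth_crop": {"channel": "wifi", "delivery": "batched"},
        "video_clip": {"channel": "wifi", "delivery": "batched"},
    },
}
```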

Deployment Workflow

  1. Create Module — Define your module configuration and upload your model via the Bee Maps console

  2. Configure Output — Set your endpoint and select which data types to include

  3. Set Targeting — Define geographic regions using the targeting UI

  4. Staging Deploy — Push to a small device subset to validate accuracy and recall (see the sketch after this list)

  5. Production Deploy — Roll out to your full target region via OTA
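
Step 4 asks you to validate accuracy and recall on a staging subset. One simple way to do that on your side is to compare the detections received from staging devices against a hand-labelled ground-truth set for the same drives; the sketch below matches on a (segment_id, label) pair, which is a simplification for illustration.

```python
def precision_recall(predicted: set, ground_truth: set) -> tuple[float, float]:
    """Naive precision/recall over exact (segment_id, label) matches."""
    true_positives = len(predicted & ground_truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall


# Hypothetical staging results vs. hand-labelled ground truth.
predicted = {("seg_001", "speed_limit_25"), ("seg_002", "speed_limit_40")}
ground_truth = {("seg_001", "speed_limit_25"), ("seg_003", "speed_limit_30")}
print(precision_recall(predicted, ground_truth))  # (0.5, 0.5)
```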

Example Use Cases

Retail & Places Churn

Monitor storefronts to detect business changes—new openings, closures, rebrands.

What you detect:

  • "For lease" signs

  • Changed storefront signage

  • Boarded windows

  • New business openings

Output: Structured change events with imagery, fed into places databases to keep POI data fresh without manual surveys.
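
As a sketch of the cloud-side mapping from raw detections to change events (the event schema, labels, and confidence threshold below are illustrative assumptions, not a Bee Maps format):

```python
# Confidence threshold and label-to-change mapping are illustrative choices.
CHANGE_SIGNALS = {
    "for_lease_sign": "possible_closure",
    "boarded_windows": "possible_closure",
    "new_storefront_signage": "possible_opening_or_rebrand",
}


def to_change_event(detection: dict) -> dict | None:
    """Convert one detection payload into a places change event, or drop it."""
    change = CHANGE_SIGNALS.get(detection["label"])
    if change is None or detection["confidence"] < 0.8:
        return None
    return {
        "change_type": change,
        "location": detection["location"],          # lat/lon from the device
        "evidence_image": detection.get("frame_crop_url"),
        "observed_at": detection["timestamp"],
    }
```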


Complex Intersection Video

Capture video clips at specific intersections for traffic analysis, urban planning, or safety studies.

How it works:

  • Define target intersections via GeoJSON, or automatically flag complex intersections based on the number of traffic lights detected (see the trigger sketch below)

  • Trigger recording when devices enter the zone

  • Collect multi-angle footage as different vehicles traverse the same intersection over time

Output: Geotagged video clips from multiple perspectives, timestamped for temporal analysis.
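
The trigger can be as simple as a radius check around known intersection coordinates. The sketch below uses a haversine distance in pure Python; the intersection list, radius, and recording hook are placeholders.

```python
import math

# Example intersection centre points (lat, lon) and trigger radius; both are
# placeholders, not real deployment values.
INTERSECTIONS = [(34.0195, -118.4912), (34.0224, -118.4814)]
TRIGGER_RADIUS_M = 60.0


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two lat/lon points."""
    earth_radius_m = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lambda = math.radians(lon2 - lon1)
    a = (math.sin(d_phi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2)
    return 2 * earth_radius_m * math.asin(math.sqrt(a))


def should_record(lat: float, lon: float) -> bool:
    """True when the device is inside any target intersection zone."""
    return any(
        haversine_m(lat, lon, ilat, ilon) <= TRIGGER_RADIUS_M
        for ilat, ilon in INTERSECTIONS
    )
```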


Long-Tail Event Capture

Detect rare but critical events that matter for autonomous vehicle training and safety validation.

Example events:

  • Pedestrians with strollers in roadway

  • Wheelchair users crossing

  • Animals in road

  • Unusual vehicle types (oversized loads, emergency vehicles)

  • Construction zone edge cases

  • Adverse weather conditions

How it works: Run lightweight classifiers on-device. Upload only when target events are detected. Build datasets of real-world edge cases at global scale.

Output: Annotated imagery and video of rare events, with full sensor context.
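
Here is a sketch of that on-device gate, assuming the same hypothetical run_model/publish helpers as earlier; the request_upload call, event labels, and threshold are also placeholders.

```python
# Sketch only: run_model, publish, and request_upload are hypothetical SDK
# helpers, and the event labels are examples rather than a fixed taxonomy.
TARGET_EVENTS = {"stroller_in_roadway", "wheelchair_crossing", "animal_in_road"}
CONFIDENCE_THRESHOLD = 0.85


def on_frame(self, frame):
    result = self.run_model("long_tail_classifier_v1", frame.image)
    if result.label in TARGET_EVENTS and result.confidence >= CONFIDENCE_THRESHOLD:
        # Only rare positives leave the device, keeping LTE payloads small.
        self.publish({
            "label": result.label,
            "confidence": result.confidence,
            "location": frame.gps.as_dict(),
            "timestamp": frame.timestamp,
        })
        self.request_upload(frame, types=["frame_crop", "video_clip"])
```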


World Model Training Data

Collect synchronized video + depth + IMU data for training world models and vision foundation models.

Data captured:

  • High-resolution video (12.3MP @ 30 FPS)

  • Paired imagery and stereo depth frames

  • Full IMU telemetry (accelerometer + gyroscope)

  • Precise GPS positioning

Targeting options:

  • Specific road types (highways, urban, rural)

  • Weather conditions

  • Geographic regions

  • Time of day

Output: The raw ingredients for building physical world simulations—synchronized multimodal sensor data at scale.
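
One way to think about the deliverable is a per-frame sample that keeps every modality on a shared timestamp. The sketch below bundles one such sample; the field names are assumptions for illustration, not a defined export format.

```python
# Field names are assumptions for illustration, not a defined export format.
def make_sample(frame) -> dict:
    """Bundle one synchronized multimodal sample keyed to a shared timestamp."""
    return {
        "timestamp": frame.timestamp,    # shared capture clock
        "image": frame.image,            # 12.3MP RGB frame
        "depth": frame.depth,            # stereo depth map
        "imu": {"accel": frame.imu.accel, "gyro": frame.imu.gyro},
        "gps": frame.gps.as_dict(),      # lat/lon/alt/speed/heading
    }
```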


Getting Started

Ready to deploy custom AI workloads on the Bee platform?
