In this post, I’m going to show you how to “talk” to your eBird data. Specifically, I will build a question‑answering app that ingests your own eBird data into a database, then lets you ask questions in natural language such as “what is my most‑seen bird”, “which location do I visit the most”, or “when and where did I see an American Robin”. Both the SQL query and the final answer are generated by an LLM, giving us text-to-SQL (natural language to SQL) functionality.
My approach is to build a minimum viable AI‑powered eBird analytics app, while keeping the door wide open for future extensions like charts, maps, and richer analysis. The stack is powered by Cloudflare Workers + D1 + Workers AI, all operating within their generous free-tier plans.
By the end of this post, you will be able to:
- upload your eBird data CSV into a Cloudflare D1 database
- store the full eBird schema
- ask natural‑language questions
- automatically generate SQL
- run the SQL against your own data
- summarize the results
- view everything in a simple HTML UI
- upload updated data without creating duplicates (UPSERT is implemented)
Motivation
eBird is an indispensable tracker for any enthusiastic birder. It acts like a journal of where and when you saw what. However, for those who love to slice and dice data, the dashboard is a bit limited. Downloading your own data for offline data wrangling definitely sounds like fun.
But what if there were a way for non-tech-savvy folks to look into their history and get insights using plain, everyday English? For example: “when was the last time I saw both a short-eared owl and a red-tailed hawk, and where was it?”, “which hotspot is my most productive, with the most species seen? Did that change year after year?”, or “where did I go birding solo the most, or with a group?”.
The advent of LLMs and low-cost inferencing has made this possible. Not only can we transform such natural language questions into properly formatted SQL queries, but we can also transform the query result into easy to understand summaries in different languages.
It’s like having a chatbot for your own data.
Architecture Overview
We will build our entire stack on Cloudflare’s Workers platform, using the Workers Compute and Storage offerings in their free-tier.
| Service | Free Tier Limits |
|---|---|
| Workers | 100K requests/day |
| D1 Database | 5GB storage, 5M rows read/day, 100K rows written/day |
| Workers AI | 10,000 Neurons/day. Models are priced differently; I picked the quantized @cf/qwen/qwen3-30b-a3b-fp8 for its relatively high parameter count and low price. |
I have written a separate post explaining the rationale in choosing the Cloudflare stack.
We will take a modular approach where each endpoint handles one core task, and an entry worker routes requests to the appropriate endpoint.
The entire system consists of:
/src/worker.mjs → routes requests
/src/routes/ingest.js → endpoint for CSV ingestion (with UPSERT)
/src/routes/ask.js → endpoint for AI SQL generation + query D1 + AI summarizer
/src/utils/csv.js → CSV parser
/src/utils/headerMap.js → CSV headers → DB columns
/public/index.html → static UI
wrangler.jsonc → Worker project configuration

Here’s the entire system in one diagram:
┌──────────────────────────┐
│ index.html │
│ (Ask questions in UI) │
└─────────────┬────────────┘
POST /ask
│
▼
┌──────────────────────┐
│ Worker AI │
│ Qwen to SQL + summary│
└─────────┬────────────┘
│
▼
┌────────────────┐
│ D1 DB │
│ observations │
└────────────────┘
Ingestion (curl only):
curl → POST /ingest → Worker → D1
1: Download your data
Follow eBird’s instructions to request your own data. You will receive a link to download a zip file (containing a single .csv file) within an hour.
2: Setup Wrangler
Create a free Cloudflare account if you haven’t got one.
Then, in your terminal, install Wrangler CLI globally:
npm install -g wrangler

Log in:
wrangler login

This connects your local environment to your Cloudflare account.
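Optionally, verify the install and login:

wrangler --version
wrangler whoami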
3: Creating D1 Database
Now that you have the data, use the Wrangler CLI to create a SQLite‑compatible D1 database:
npx wrangler d1 create myEBird

Note the database_id returned.
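If you didn’t note the ID, you can retrieve it at any time (the exact output shape varies by Wrangler version):

npx wrangler d1 list
npx wrangler d1 info myEBird

Both print the database name and its database_id, which we’ll need in the next step.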
4: Setup Project
Create wrangler.jsonc to configure our Worker project with the necessary bindings. A binding is how you give your Worker code permission to use Cloudflare’s GPU (AI) and Database (D1).
{
"name": "my-ebird",
"main": "src/worker.mjs",
"compatibility_date": "2025-03-21",
// Serve index.html + static assets
"assets": { "directory": "public" },
"ai": { "binding": "AI" },
"d1_databases": [
{
"binding": "myEBird",
"database_name": "myEBird"
"database_id": "YOUR_DATABASE_ID"
}
],
// Optional: helpful during development
"observability": {
"enabled": true
}
}

In this project configuration file, we:
- bind to Workers AI, so we can call a model endpoint as simply as env.AI.run("@cf/qwen/qwen3-30b-a3b-fp8", { messages: [{ role: "user", content: sqlPrompt }] })
- bind to the newly created D1 database, so we can execute against it with env.myEBird.prepare(sql).all(). Replace YOUR_DATABASE_ID with the actual ID from your Cloudflare dashboard: Cloudflare Dashboard → D1 → myEBird → Overview → Database ID
- bind a static asset handler to env.ASSETS, which serves the static assets inside /public automatically
- specify the Worker entry file at src/worker.mjs
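To see where each binding shows up in code, here’s a minimal sketch of a fetch handler (illustration only; the real router and endpoints are built in the steps below):

// Sketch: every binding declared in wrangler.jsonc arrives on the `env` object.
export default {
  async fetch(request, env) {
    // env.AI       → Workers AI (model inference)
    // env.myEBird  → D1 database (SQL)
    // env.ASSETS   → static assets served from /public
    const ping = await env.myEBird.prepare("SELECT 1 AS ok").all();
    console.log("D1 reachable:", JSON.stringify(ping.results));
    return env.ASSETS.fetch(request);
  }
};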
5: Create eBird Schema
Unlike other modern DBMSs such as Oracle 26ai, Cloudflare does not yet have a “one-click” upload that infers a schema from a CSV file. Instead, we have to define the schema ourselves.
An eBird data CSV file contains 23 columns. We’ll keep all of them so the app can grow into powerful scenarios like these in the future:
- maps (latitude/longitude)
- time‑of‑day analysis
- protocol‑based filtering
- breeding‑code analysis
- observer counts
- distance/area‑based effort metrics
Here’s the schema. Notice:
- The first column in your CSV is Submission ID. However, every species observed on the same checklist shares the same submission ID, so Submission ID is not unique and should not be used as a primary key on its own. Instead, we’ll use a composite key: PRIMARY KEY (submission_id, common_name). We will also use this primary key for UPSERT.
- eBird CSV headers contain spaces, capital letters, and special characters. We can either double-quote all of them to ensure a valid schema, or convert them to snake_case. I chose snake_case because it looks cleaner without quotes and works better in LLM prompts later.
- All columns use TEXT/INTEGER/REAL (SQLite-native types).
DROP TABLE IF EXISTS observations;
CREATE TABLE IF NOT EXISTS observations (
submission_id TEXT,
common_name TEXT,
scientific_name TEXT,
taxonomic_order INTEGER,
count INTEGER,
state_province TEXT,
county TEXT,
location_id TEXT,
location TEXT,
latitude REAL,
longitude REAL,
date TEXT,
time TEXT,
protocol TEXT,
duration_min INTEGER,
all_obs_reported INTEGER,
distance_km REAL,
area_ha REAL,
num_observers INTEGER,
breeding_code TEXT,
observation_details TEXT,
checklist_comments TEXT,
ml_catalog_numbers TEXT,
PRIMARY KEY (submission_id, common_name)
);

Save this as schema.sql and upload it to the remote D1 database:
npx wrangler d1 execute myEBird --remote --file=schema.sql

Our database is now ready to accept rows.
If you want to develop locally, note that Wrangler does NOT use your Cloudflare dashboard database. It uses a local empty SQLite file unless you import data. So, import it into your local dev DB:
npx wrangler d1 execute myEBird --local --file=schema.sql

This populates your local SQLite copy with the same tables as your remote Cloudflare version. After importing, verify that the table exists locally:
npx wrangler d1 execute myEBird --local --command="SELECT name FROM sqlite_master WHERE type='table';"You should see the name of the table listed. Check the schema:
npx wrangler d1 execute myEBird --local --command="PRAGMA table_info(observations);"6: Create CSV Ingestion Pipeline
We will now build an ingestion pipeline that:
- parses the CSV
- maps headers → columns
- validates headers
- inserts new and updated rows
You might ask: why not use an online CSV-to-SQL converter to turn our CSV into a series of INSERT statements, then upload it to D1 via npx wrangler d1 execute myEBird --remote --file=./data.sql? Keep reading; the parsing helpers below show why that isn’t enough.
Create CSV Parsing Helpers
Map CSV headers → DB columns
Remember that we converted the original eBird headers to snake_case. To ensure future uploads respect this schema, we create a mapping in src/utils/headerMap.js:
export const HEADER_MAP = {
"Submission ID": "submission_id",
"Common Name": "common_name",
"Scientific Name": "scientific_name",
"Taxonomic Order": "taxonomic_order",
"Count": "count",
"State/Province": "state_province",
"County": "county",
"Location ID": "location_id",
"Location": "location",
"Latitude": "latitude",
"Longitude": "longitude",
"Date": "date",
"Time": "time",
"Protocol": "protocol",
"Duration (Min)": "duration_min",
"All Obs Reported": "all_obs_reported",
"Distance (km)": "distance_km",
"Area (ha)": "area_ha",
"Number of Observers": "num_observers",
"Breeding Code": "breeding_code",
"Observation Details": "observation_details",
"Checklist Comments": "checklist_comments",
"ML Catalog Numbers": "ml_catalog_numbers"
};
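As a quick illustration (the row values here are made up), converting a parsed CSV row into database-column keys is just a lookup over this map; the ingestion endpoint below does exactly this:

import { HEADER_MAP } from "./headerMap.js";

// A row as returned by our CSV parser: keys are the original eBird headers.
const raw = { "Submission ID": "S123456789", "Common Name": "Snow Goose", "Count": "12" };

const mapped = {};
for (const [csvKey, dbKey] of Object.entries(HEADER_MAP)) {
  mapped[dbKey] = raw[csvKey] ?? null; // headers missing from the row become NULL
}
// mapped.submission_id === "S123456789", mapped.common_name === "Snow Goose", mapped.count === "12"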
CSV Parser (RFC‑4180 Parser)

eBird CSVs contain:
- quoted fields
- commas inside fields, like "Marymoor Park, Lot D"
- escaped quotes, like "Heard ""two"" birds"
- empty fields
- trailing newlines
- variable column ordering
If you try to split on commas with line.split(",") on "Marymoor Park, Lot D",47.661,-122.123,..., it breaks by shifting every column after it: the row becomes ["Marymoor Park", " Lot D", "47.661", "-122.123", ...]. Latitude shifts into the county column, longitude becomes the location ID, and any UPSERT keys or SQL queries built on those columns will break.
Since Workers don’t support fs, Node streams or Node CSV packages like csv-parse, PapaParse, fast-csv, we have to write our own parser or use a WASM CSV parser.
Here’s a fully RFC‑4180‑compliant CSV parser that:
- handles all valid CSV cases to ensure correctness
- works inside Cloudflare Workers
- is dependency‑free
- is fast enough for large files
- is easy to test and extend
Create src/utils/csv.js:
// A robust RFC-4180 CSV parser that works in Cloudflare Workers.
// Handles:
// - quoted fields
// - commas in fields
// - escaped quotes ("")
// - empty fields
// - trailing newlines
export function parseCsv(text) {
const rows = [];
let row = [];
let value = "";
let insideQuotes = false;
const pushValue = () => {
row.push(value);
value = "";
};
const pushRow = () => {
rows.push(row);
row = [];
};
for (let i = 0; i < text.length; i++) {
const char = text[i];
const next = text[i + 1];
if (insideQuotes) {
if (char === '"' && next === '"') {
// escaped quote
value += '"';
i++;
} else if (char === '"') {
// end of quoted field
insideQuotes = false;
} else {
value += char;
}
} else {
if (char === '"') {
insideQuotes = true;
} else if (char === ",") {
pushValue();
} else if (char === "\n") {
pushValue();
pushRow();
} else if (char !== "\r") {
value += char;
}
}
}
if (value.length > 0 || row.length > 0) {
pushValue();
pushRow();
}
if (rows.length === 0) return { headers: [], rows: [] };
const headers = rows[0].map(h => h.trim());
const dataRows = rows.slice(1).map(cols => {
const obj = {};
headers.forEach((h, i) => {
obj[h] = (cols[i] ?? "").trim();
});
return obj;
});
return { headers, rows: dataRows };
}
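As a quick sanity check (the sample values are made up), the parser keeps quoted commas and escaped quotes intact, exactly the cases that break a naive split:

import { parseCsv } from "./csv.js";

const sample =
  'Submission ID,Location,Observation Details\n' +
  'S123,"Marymoor Park, Lot D","Heard ""two"" birds"\n';

const { headers, rows } = parseCsv(sample);
// headers → ["Submission ID", "Location", "Observation Details"]
// rows[0] → { "Submission ID": "S123",
//             "Location": "Marymoor Park, Lot D",
//             "Observation Details": 'Heard "two" birds' }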
Create Ingestion Endpoint

Our ingestion Worker src/routes/ingest.js does the following:
- checks that the request carries the secret ingest key in a header, to prevent unauthorized uploads
- parses the CSV with src/utils/csv.js and validates headers
- maps CSV → DB columns with src/utils/headerMap.js to normalize fields
- enforces the full schema
- allows updates to the D1 table by ingesting an updated CSV file; it performs UPSERT via ON CONFLICT(submission_id, common_name) DO UPDATE SET ... so repeated checklists don’t create duplicates
- returns insert/update counts
import { parseCsv } from "../utils/csv.js";
import { HEADER_MAP } from "../utils/headerMap.js";
export default async function handleIngest(request, env) {
// Require secret header
if (request.headers.get("X-Ingest-Key") !== env.INGEST_KEY) {
return new Response("Unauthorized", { status: 401 });
}
const csvText = await request.text();
const { headers, rows } = parseCsv(csvText);
// Validate CSV headers
const required = Object.keys(HEADER_MAP);
const missing = required.filter(h => !headers.includes(h));
if (missing.length > 0) {
return new Response(
JSON.stringify({ error: "Missing required columns", missing }),
{ status: 400, headers: { "Content-Type": "application/json" } }
);
}
let inserted = 0;
let updated = 0;
for (const raw of rows) {
const mapped = {};
for (const [csvKey, dbKey] of Object.entries(HEADER_MAP)) {
mapped[dbKey] = raw[csvKey] ?? null;
}
const values = Object.values(mapped);
const sql = `
INSERT INTO observations (
submission_id, common_name, scientific_name, taxonomic_order, count,
state_province, county, location_id, location, latitude, longitude,
date, time, protocol, duration_min, all_obs_reported, distance_km,
area_ha, num_observers, breeding_code, observation_details,
checklist_comments, ml_catalog_numbers
) VALUES (${values.map(() => "?").join(",")})
ON CONFLICT(submission_id, common_name) DO UPDATE SET
scientific_name=excluded.scientific_name,
taxonomic_order=excluded.taxonomic_order,
count=excluded.count,
state_province=excluded.state_province,
county=excluded.county,
location_id=excluded.location_id,
location=excluded.location,
latitude=excluded.latitude,
longitude=excluded.longitude,
date=excluded.date,
time=excluded.time,
protocol=excluded.protocol,
duration_min=excluded.duration_min,
all_obs_reported=excluded.all_obs_reported,
distance_km=excluded.distance_km,
area_ha=excluded.area_ha,
num_observers=excluded.num_observers,
breeding_code=excluded.breeding_code,
observation_details=excluded.observation_details,
checklist_comments=excluded.checklist_comments,
ml_catalog_numbers=excluded.ml_catalog_numbers
`;
const res = await env.myEBird.prepare(sql).bind(...values).run();
// NOTE: D1 reports meta.changes as 1 for both a fresh insert and an UPSERT update, so these counters are approximate.
if (res.meta.changes === 1) inserted++;
if (res.meta.changes > 1) updated++;
}
return new Response(
JSON.stringify({ inserted, updated }),
{ headers: { "Content-Type": "application/json" } }
);
}

7: Create NL2SQL Endpoint
This is the heart of the app. It:
- takes a natural‑language question
- asks Qwen to generate SQL. We set up these rules in the prompt:
  - allow fuzzy and multi-word matching on common name, so that even a partial or mistyped name, or a broad category (e.g., “owl”, “sparrow”, “goose”), still returns meaningful matches
  - provide structured schema context
  - restrict the generated SQL to a single SELECT; no INSERT/UPDATE/DELETE is allowed
- validates the SQL before execution, rejecting anything that isn’t a safe, syntactically valid SELECT (see the hardening sketch after this list)
- runs the SQL against D1
- asks Qwen to summarize the results
- returns { answer, sql, raw result }
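Here is a minimal sketch of such a guard, slightly stricter than the simple /^select/i check used in the handler below. It assumes the model may wrap the SQL in markdown fences or a reasoning block, and it rejects anything that isn’t a single SELECT statement (the helper name sanitizeSql is mine, not part of the code that follows):

// src/utils/sanitizeSql.js (optional hardening sketch)
export function sanitizeSql(text) {
  if (!text) return null;
  let sql = text
    .replace(/<think>[\s\S]*?<\/think>/gi, "") // some reasoning models emit think blocks
    .replace(/```(?:sql)?/gi, "")              // strip markdown code fences
    .trim();
  sql = sql.replace(/;\s*$/, "");              // allow a single trailing semicolon
  if (sql.includes(";")) return null;          // reject multiple statements
  if (!/^select\b/i.test(sql)) return null;    // must start with SELECT
  if (/\b(insert|update|delete|drop|alter|create|attach|pragma)\b/i.test(sql)) return null;
  return sql;
}

You could run the model output through sanitizeSql(...) before handing it to prepare(...).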
Here’s the query flow:
UI → POST /ask → Worker
→ Workers AI: Qwen model generates SQL
→ D1: run SQL
→ Workers AI: Qwen model summarizes results
← Worker → UI: return { answer, sql, raw result }
Because the SQL generator prompt includes the full schema, the model can answer questions like:
- “Which species did I see most often at Marymoor Park?”
- “Show me all sightings within 1 km of latitude X, longitude Y.”
- “What birds did I see in February?”
- “Which checklists had more than 3 observers?”
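For instance, for the Marymoor Park question above, the generated SQL might look roughly like this (illustrative only; the model’s output varies from run to run):

SELECT common_name, COUNT(*) AS times_seen
FROM observations
WHERE LOWER(location) LIKE '%marymoor%'
GROUP BY common_name
ORDER BY times_seen DESC
LIMIT 10;

Now create src/routes/ask.js: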
export default async function handleAsk(request, env) {
try {
const { question } = await request.json();
if (!question) {
return new Response(
JSON.stringify({ error: "Missing question" }),
{ status: 400, headers: { "Content-Type": "application/json" } }
);
}
const sqlPrompt = `
You are a SQL generator for a SQLite database of bird observations.
The table is named "observations" with these columns:
submission_id, common_name, scientific_name, taxonomic_order, count,
state_province, county, location_id, location, latitude, longitude,
date, time, protocol, duration_min, all_obs_reported, distance_km,
area_ha, num_observers, breeding_code, observation_details,
checklist_comments, ml_catalog_numbers.
Rules:
1. When interpreting species names in the user question:
- Never require exact equality on common_name.
- Always use fuzzy matching: LOWER(common_name) LIKE LOWER('%<term>%')
- If the user provides a partial name, misspelling, or missing hyphens
(e.g., "short ear owl", "short eared owl", "short-eared owl"), still match the correct species using fuzzy matching.
- When the user provides a multi-word species name (e.g., "short ear owl"), split the phrase into individual tokens and fuzzy-match each token
independently. For example:
LOWER(common_name) LIKE '%short%'
AND LOWER(common_name) LIKE '%ear%'
AND LOWER(common_name) LIKE '%owl%'
This ensures that partial names, typos, and missing hyphens still match the correct species.
- If the user provides a broad category (e.g., "owl", "sparrow", "goose"), return all species whose common_name contains that term.
- If the user provides a name that does not exactly match any known species, still attempt fuzzy matching using LIKE or LOWER() LIKE.
- Do not hallucinate species names. Only filter using fuzzy matching against the common_name column.
2. Only generate a single SELECT query.
3. Use valid SQLite syntax.
4. Return ONLY the SQL. No explanation.
5. Never modify data (no INSERT/UPDATE/DELETE).
User question: "${question}"
`;
const aiSql = await env.AI.run("@cf/qwen/qwen3-30b-a3b-fp8", {
messages: [{ role: "user", content: sqlPrompt }]
});
const sql = aiSql?.choices?.[0]?.message?.content?.trim();
if (!sql || !/^select/i.test(sql)) {
return new Response(
JSON.stringify({ error: "Invalid SQL generated", sql }),
{ status: 400, headers: { "Content-Type": "application/json" } }
);
}
const result = await env.myEBird.prepare(sql).all();
const summaryPrompt = `
Summarize this result in one short English sentence.
Question: "${question}"
SQL: ${sql}
Result JSON: ${JSON.stringify(result)}
`;
const aiSummary = await env.AI.run("@cf/qwen/qwen3-30b-a3b-fp8", {
messages: [{ role: "user", content: summaryPrompt }]
});
const answer = aiSummary?.choices?.[0]?.message?.content?.trim() || "";
return new Response(
JSON.stringify({ sql, result, answer }),
{ headers: { "Content-Type": "application/json" } }
);
} catch (err) {
return new Response(
JSON.stringify({ error: err.message }),
{ status: 500, headers: { "Content-Type": "application/json" } }
);
}
}

8: The Worker Entry: Router
Our src/worker.mjs router handles:
- /ingest
- /ask
- static assets
import handleIngest from "./routes/ingest.js";
import handleAsk from "./routes/ask.js";
export default {
async fetch(request, env) {
const url = new URL(request.url);
if (url.pathname === "/ingest" && request.method === "POST") {
return handleIngest(request, env);
}
if (url.pathname === "/ask" && request.method === "POST") {
return handleAsk(request, env);
}
// Everything else → static assets
return env.ASSETS.fetch(request);
}
};

9: HTML UI
The UI public/index.html is intentionally simple with the following elements:
- a text box to ask questions
- a button that sends the query to the /ask endpoint via POST
- three <pre> blocks for:
  - a natural‑language summary
  - the generated SQL
  - the raw SQL result in JSON
Since this is a proof-of-concept, we are not using any frameworks or client-side libraries.
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Talk to your eBird</title>
<style>
body { font-family: Arial; max-width: 700px; margin: 40px auto; }
input { width: 80%; padding: 10px; }
button { padding: 10px 16px; }
pre { background: #f5f5f5; padding: 12px; white-space: pre-wrap; }
</style>
</head>
<body>
<h1>Talk to your eBird data</h1>
<input type="text" id="q" placeholder="Ask a question like 'what bird is most seen'">
<button onclick="runQuery()">Ask</button>
<h2>Summary</h2>
<pre id="summary"></pre>
<h2>SQL</h2>
<pre id="sql"></pre>
<h2>Raw Result</h2>
<pre id="result"></pre>
<script>
async function runQuery() {
const q = document.getElementById('q').value;
document.getElementById('summary').textContent = "Loading...";
document.getElementById('sql').textContent = "";
document.getElementById('result').textContent = "";
const askRes = await fetch("/ask", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ question: q })
});
const askData = await askRes.json();
document.getElementById('sql').textContent = askData.sql;
document.getElementById('result').textContent =
JSON.stringify(askData.result, null, 2);
document.getElementById('summary').textContent = askData.answer;
}
</script>
</body>
</html>

10: Deploy
Deploy the app:
npx wrangler deploy

Wrangler will:
- upload your Worker
- upload the static assets in public/
- bind D1 + AI
- give you a production URL
Our bird‑log Q&A app is now live at the edge.
11: Ingest the CSV
Now we’re finally ready to ingest our data into the D1 database we created in Step 3. Ingestion is intentionally not exposed in the UI. This is because any secret you put in frontend JavaScript becomes public. Anyone who loads your page could upload arbitrary CSVs into your D1 instance.
So ingestion stays server‑side, protected by a header:
X-Ingest-Key: <your secret>
Here are the steps to ingest your data securely into your D1 instance:
- Use openssl rand -hex 32 to generate a random secret.
- Store the secret. Cloudflare has two separate secret stores, and they do not sync:
  - Dashboard secrets are used by your deployed Worker in the cloud. Go to Cloudflare Dashboard → Workers & Pages → Your Worker → Settings → Variables & Secrets, add a new secret named INGEST_KEY, and enter the generated secret as its value. Alternatively, run npx wrangler secret put INGEST_KEY in the CLI. Either approach makes the key available as env.INGEST_KEY inside your deployed Worker.
  - Local secrets are used only when running wrangler dev (typically supplied via a .dev.vars file; see the local‑development sketch after the curl example below).
- Upload your CSV like this:
curl -X POST \
-H "Content-Type: text/csv" \
-H "X-Ingest-Key: YOUR_SECRET" \
--data-binary @ebird.csv \
https://your-worker.workers.dev/ingest

Replace your-worker.workers.dev with the production URL you got from the Step 10 deployment.
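If you want to exercise ingestion locally with wrangler dev first, one common pattern (an assumption on my part, not something specific to this app) is to keep the secret in a .dev.vars file, which Wrangler reads during local development, and point curl at the local server:

# .dev.vars (keep it out of git)
INGEST_KEY=YOUR_SECRET

npx wrangler dev

curl -X POST \
  -H "Content-Type: text/csv" \
  -H "X-Ingest-Key: YOUR_SECRET" \
  --data-binary @ebird.csv \
  http://localhost:8787/ingest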
This gives you a secure, private ingestion pipeline.
12: Test everything
Use curl against the /ask endpoint to test:
curl -X POST https://your-worker.workers.dev/ask \
-H "Content-Type: application/json" \
-d "{\"question\":\"How many Snow Geese did I see?\"}"You should get back something like:
{
"sql": "...",
"result": [...],
"answer": "..."
}

Then visit the production URL and ask:
Which species did I see most in numbers?
You’ll get:
- SQL generated by Qwen
- JSON results from D1
- a natural‑language summary
All powered by your own eBird data.
Since we return the whole response from D1, we can also check the metadata to see where the request was served and how long it took. The duration is D1’s internal measurement of how long SQLite took to execute the query in the nearest colo. For example, one of my queries logged "sql_duration_ms": 1.7897, which is extremely fast.
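As an illustration (the field names here are an assumption and vary by D1 version), you can inspect that metadata either in the UI’s Raw Result pane or by logging it from the Worker:

// Inside handleAsk, after running the query:
const result = await env.myEBird.prepare(sql).all();
console.log(JSON.stringify(result.meta));
// e.g. { "served_by": "...", "duration": 1.79, "rows_read": 42, "rows_written": 0, ... }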
Extending the App
This project demonstrates something important: you don’t need a giant stack to build a personal analytics tool. With Cloudflare Workers, D1, and Workers AI, you can create a fully functional, AI‑powered data explorer in a handful of files. Best of all, this proof of concept stays entirely within Cloudflare’s generous free tier!
The minimal system we built is already a solid foundation. Because we kept the full schema, we can now extend the app in directions like these:
| Category | Fields To Use | What We Can Build |
|---|---|---|
| 🗺️ Maps | latitude, longitude, location_id |
• Heatmaps showing most visited locations • Cluster maps that reveal dense observation zones • Radius‑based filters for “trips near Seattle” tools |
| 📊 Charts (date/time) | date, time, count |
• “Top birds this month” dashboards with trend lines • Seasonal abundance curves across multiple years • Time‑of‑day histograms showing peak activity windows • Observer‑effort graphs (e.g., # of people vs. miles travelled) • Protocol‑based comparisons (stationary vs. traveling) |
| 🐦 Species Profiles | scientific_name, common_name, taxonomic_order |
• Personal species dashboards with lifelist stats • First/last‑seen charts • Frequency‑over‑time visualizations • Location‑specific summaries (e.g., “My top species at this hotspot”) |
| 📋 Checklist Reconstruction | submission_id, protocol, duration_min, distance_km, area_ha |
• Full checklist views with effort metadata • Trip reports summarizing multi‑location outings • Effort‑adjusted summaries (species per hour/km) • Protocol‑specific comparisons (traveling vs. stationary) |
| 📈 Reporting | Entire schema | • Weekly summaries with trends and highlights • Monthly reports showing species turnover • Hotspot comparisons across regions • Long‑term species trend analysis |
What’s Next
Since this is a prototype aimed at quickly validating the NL2SQL pipeline, it lacks authentication and rate limiting for the serverless function, which is the gateway to both the data store and the inference endpoint. At a minimum, we should implement rate limiting to throttle traffic and control expenses.
The post also demonstrates a manual data ingestion approach. In my next post, I will move to a Git-based CI/CD workflow: I can keep committing updated eBird data files to a repository, and a GitHub Actions workflow will automatically ingest and UPSERT them into my D1 table.
I will also make improvements in
- Observability: measure end‑to‑end latency (worker, inference, query, network) and add per‑query analytics.
- Cost analytics: tracking Worker AI Neuron cost per query.
- Responsible AI: add LLM guardrails
- Geo‑aware: know which location in the edge network served the request.
- Optimized: add index, cache frequent queries, etc.
- Personal Metrics: provide ready-made metrics such as “where you spend the most active birding time versus just the most frequent visits”, “top species”, etc.
Stay tuned!