# LaunchDetect Academy — Full Course (Plain Text)

> Free 30-week online course in space-domain geographic information systems (GIS). Five certification tracks, five hands-on capstones using real geostationary thermal satellite imagery, real two-line element sets, and real spaceport data. Curriculum is free and publicly available; verifiable certificates are gated to the LaunchDetect Gold subscription ($9.99/month).

Canonical URL: https://launchdetect.com/academy/
Course provider: LaunchDetect (https://launchdetect.com/)
Last updated: 2026-05-11

## Certification Tracks

### Track 1: Ground Station Operator (Weeks 1-4)

Level: Beginner. Duration: 4 weeks. Credential: Certified Ground Station Operator
URL: https://launchdetect.com/academy/ground-station-operator/

Foundations of geographic information systems applied to space. By the end you can load, project, and visualize coordinate data, plot every active orbital launch pad on Earth, and reason about coordinate systems and datums without confusion. The capstone delivers a styled global launch-site atlas.

Learning outcomes:
- Explain the difference between geographic (lat/lon) and projected coordinate systems, and choose the right one for a task.
- Identify the correct datum (WGS84 vs alternatives) for satellite-derived data.
- Load vector and raster GIS data into QGIS, style it, and export a publication-grade map.
- Plot global launch sites and compute basic spatial relationships (distance, nearest neighbor).

Prerequisites: None. Comfortable with basic computing (open files, install software). Some Python helpful but not required.

Tools: QGIS 3.x, Python 3.11+, geopandas, shapely, pyproj

### Track 2: Orbital Analyst (Weeks 5-10)

Level: Intermediate. Duration: 6 weeks. Credential: Certified Orbital Analyst
URL: https://launchdetect.com/academy/orbital-analyst/

Spatial analysis and orbital mechanics intertwined.
Learn PostGIS for spatial SQL, then layer in two-line element sets, SGP4 propagation, and ground-track geometry to answer questions like 'which spaceports can serve a sun-synchronous orbit?' or 'which countries does the ISS overfly in a 24-hour window?'.

Learning outcomes:
- Run spatial joins, buffers, intersections, and dissolve operations in QGIS and PostGIS.
- Read a TLE and explain what each Keplerian element means physically.
- Propagate a satellite using SGP4 in Python (skyfield) and produce ground-track GeoJSON.
- Compute coverage / line-of-sight from a ground station to a satellite at any time.
- Match spaceports to orbital regimes (LEO / GEO / SSO / Molniya) based on geometry alone.

Prerequisites: Ground Station Operator track or equivalent — you must be comfortable with coordinate systems, vector data in QGIS, and basic Python.

Tools: PostgreSQL 16 with PostGIS, skyfield, geopandas, Folium, matplotlib

### Track 3: Remote Sensing Specialist (Weeks 11-15)

Level: Intermediate. Duration: 5 weeks. Credential: Certified Remote Sensing Specialist
URL: https://launchdetect.com/academy/remote-sensing-specialist/

Satellite imagery from sensor physics to plume detection. Learn the electromagnetic spectrum, optical/IR/SAR sensor types, and the GOES-R ABI band suite. The capstone is a working thermal plume detector that operates on real NOAA GOES NetCDF files — the same data feed LaunchDetect uses in production.

Learning outcomes:
- Map the electromagnetic spectrum to sensor bands and understand what each band sees.
- Read Landsat / Sentinel-2 imagery and compute NDVI and false-color composites.
- Navigate GOES-R ABI's 16 bands, especially Band 7 (3.9 µm) for thermal-emissive sensing.
- Convert raw radiance to brightness temperature and run a hotspot detection.
- Georeference fixed-grid GOES imagery to lat/lon, accounting for parallax.

Prerequisites: Orbital Analyst track or equivalent. Familiarity with raster data and basic radiometric concepts.
Tools: xarray, rasterio, netCDF4, satpy, pyresample

### Track 4: Mission GIS Engineer (Weeks 16-20)

Level: Advanced. Duration: 5 weeks. Credential: Certified Mission GIS Engineer
URL: https://launchdetect.com/academy/mission-gis-engineer/

Move GIS off the desktop and into the browser, real-time, and 3D. Build vector tile pipelines, deliver maps via Leaflet/MapLibre/OpenLayers, draw orbits on a CesiumJS globe, and stream live satellite positions over WebSockets. Capstone: a real-time satellite tracker web app on a 3D globe.

Learning outcomes:
- Compare Leaflet, MapLibre, and OpenLayers and pick the right one for a use case.
- Build a vector tile pipeline with tippecanoe and serve MBTiles via a tile server.
- Draw orbital tracks on a CesiumJS 3D globe.
- Stream live TLE updates over WebSockets and render moving satellites.
- Run multi-frame change detection over a raster time series.

Prerequisites: Remote Sensing Specialist track or equivalent. Comfortable with web technologies (HTML, JavaScript, basic React).

Tools: CesiumJS, MapLibre GL JS, tippecanoe, Socket.IO, scikit-image

### Track 5: Space GIS Architect (Weeks 21-30)

Level: Expert. Duration: 10 weeks. Credential: Certified Space GIS Architect
URL: https://launchdetect.com/academy/space-gis-architect/

Production-grade space GIS. Multi-sensor satellite fusion across GOES-East / GOES-West / Himawari-9, ML object detection in raster, SAR interferometry, cloud-native COG/Zarr/STAC formats, end-to-end AWS pipelines, geospatial APIs with PostGIS + FastAPI, and the ethical / legal frontiers (ITAR, MGRS, sub-meter accuracy). Final capstone is a complete launch-detection mini-pipeline from raw NetCDF to served REST endpoint.

Learning outcomes:
- Fuse imagery from multiple geostationary satellites into a hemispheric coverage product.
- Train and deploy a CNN for object detection in raster imagery (U-Net for segmentation).
- Read SAR data and reason about polarimetry and InSAR phase.
- Apply EGM2008 geoid corrections for precise positioning.
- Build a complete S3 → Lambda → EventBridge → DDB ingest pipeline.
- Serve geospatial data via a PostGIS + FastAPI REST API with spatial filters.
- Reason about export-controlled (ITAR) and dual-use geospatial data.

Prerequisites: Mission GIS Engineer track or equivalent. Comfortable with cloud infrastructure, Python, and production systems.

Tools: xarray, PyTorch, rasterio, FastAPI, AWS CDK, STAC API

---

## 30-Week Curriculum

### Week 1: What is GIS? Coordinate systems and datums

Track: Ground Station Operator
URL: https://launchdetect.com/academy/week/1/

Summary: GIS is more than maps — it's the science of geographic information. This week covers the foundation: coordinate systems, datums, latitude/longitude, and why getting these right matters in space GIS where coordinates come from satellites.

Objectives:
- Explain what GIS is and isn't
- Distinguish geographic (lat/lon) from projected coordinate systems
- Identify WGS84 as the datum behind GPS and satellite-derived data
- Avoid the most common coordinate-system mistakes in space GIS

Opening question (place-based hook): When your kupuna give you directions to a place, do they say latitude and longitude — or do they say it differently? Coordinate systems are how people describe where things are. Pacific peoples did this for thousands of years using stars, waves, and bird flight — long before GPS. This week shows you the modern version (WGS84 lat/lon) and how it connects to what your family already knows.

Connecting to Hawaiʻi — Wayfinding and WGS84: When Hōkūleʻa sailed from Hawaiʻi to Tahiti in 1976, the crew used no instruments — they read the stars, the swells, and the wind. That was navigation by a different coordinate system: a star-based one carried in memory across generations. WGS84 (the system GPS uses) is the modern equivalent: a way of saying 'this exact spot on Earth.' Both are valid. Both describe the same ocean.
The Polynesian Voyaging Society has shown that traditional wayfinding and modern GPS can teach each other, and many young navigators today are fluent in both.

Hint: Your phone says you are at 21.3°N, 157.86°W. Your kupuna might say 'mauka side of the H-1, near Pālolo.' Both are correct. One is precise; one is rooted in place. The discipline of space GIS asks: how do we honor both?

Lab: Plot the ISS sub-satellite point — Using a TLE and pyproj, plot the International Space Station's sub-satellite point on a map and convert it between WGS84 lat/lon and Web Mercator meters.
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-01/lab.ipynb
Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-01/lab.ipynb

Primer: A Geographic Information System is the combination of data, software, and methods used to capture, store, analyze, and visualize information that is located somewhere. The "somewhere" is what separates GIS from spreadsheets and BI tools. In space GIS, the "somewhere" is everywhere on, above, or around Earth — and the coordinates are almost always derived from satellites.

Most newcomers think of GIS as making maps. Maps are a visualization output; they are not the discipline. The real work of GIS is reasoning about geometric and topological relationships: is this launch detection inside this exclusion zone? How close was the rocket plume to the launch pad? Which countries does the ISS fly over in the next hour? Every one of these is a spatial query, and every spatial query depends on coordinates being correctly defined.

Geographic vs projected coordinates

Coordinates come in two flavors. Geographic coordinates are angles measured from Earth's center — latitude (north/south of the equator) and longitude (east/west of the prime meridian). They live on the curved surface of an ellipsoid.
Projected coordinates are the result of flattening the curved Earth onto a 2D plane using a mathematical transformation (a "projection"), and they're measured in linear units — typically meters or feet. Geographic coordinates are best for storage, communication, and most analysis. Projected coordinates are best for measuring distances and areas, and for displaying on flat screens. Every GIS workflow involves choosing the right one for each step.

WGS84: the default datum for satellite-derived data

A datum is the underlying reference frame: which ellipsoid you're using to model Earth, and where it's anchored. The World Geodetic System 1984 (WGS84) is the datum used by GPS and almost every space-domain dataset. Its EPSG code is 4326 — you'll see this constant everywhere in code:

```python
from pyproj import CRS, Transformer

# WGS84 geographic — lat/lon on the WGS84 ellipsoid
wgs84 = CRS.from_epsg(4326)

# Web Mercator — projected meters, used by web maps (Google, Mapbox)
web_mercator = CRS.from_epsg(3857)

# Convert (lon, lat) → (x, y) in meters; always_xy=True fixes the axis order
to_meters = Transformer.from_crs(wgs84, web_mercator, always_xy=True)
x, y = to_meters.transform(-80.6, 28.6)  # Kennedy Space Center
print(f"Kennedy in Web Mercator: x={x:.0f} m, y={y:.0f} m")
```

When a satellite (or LaunchDetect) emits a position, that position is almost always WGS84 unless explicitly stated otherwise. Silently mixing datums is one of the most common — and hardest to spot — sources of bugs in space GIS.

Why this matters for thermal launch detection

LaunchDetect's ignition coordinates come from NOAA GOES-18 and GOES-19 imagery. The native imagery is in a fixed-grid coordinate system referenced to the satellite, not lat/lon. To compare a hotspot to a known launch pad — which is what makes a detection geocodable — the fixed-grid pixel must be projected to WGS84 lat/lon. That conversion is where 90% of geolocation errors hide. The math is covered in Week 15.
For now, the takeaway is: every space-GIS workflow has at least one coordinate-system conversion in it. Knowing which datum you're starting in, which one you're ending in, and which one the next consumer expects is the discipline.

The big traps

Assuming (x, y) means (lon, lat). GIS libraries are split: some pass (lon, lat), some pass (lat, lon). pyproj's always_xy=True argument forces the consistent (lon, lat) order. Set it explicitly every time.

Treating Web Mercator distances as Earth distances. Web Mercator is fine for display, terrible for measurement. At 60° latitude, a Web Mercator meter is roughly half a real meter. For real distances, use a geodesic calculation (Week 5).

Ignoring datum entirely. A "lat/lon" position without a datum is almost always WGS84 in modern systems — but a 1980s topographic map may be NAD27, and these differ by tens of meters.

The lab walks you through the smallest possible first space-GIS workflow: parsing the ISS two-line element set, computing the current sub-satellite point, and converting that point between WGS84 lat/lon and Web Mercator meters using pyproj. That's the foundation. Everything in the next 29 weeks builds on this.

Reflection question (closing): Pick a place that matters to you — your home, a fishing spot, a heiau, a favorite beach. Now think of three different ways to say where it is. Which one is most useful to a stranger? Which one is most useful to your ʻohana?

Quiz:
Q1. Which datum is used by GPS and most satellite-derived coordinates? A. NAD83 B. WGS84 * C. ED50 D. GRS80
Q2. What does 'projected' mean in GIS? A. The map is in 3D B. The Earth is flattened to a 2D plane using a mathematical transformation * C. The data is forecast forward in time D. The map uses Mercator
Q3. Latitude/longitude is measured in: A. Meters B. Feet C. Degrees * D. Radians (always)
Q4. EPSG:4326 refers to: A. Web Mercator B. WGS84 lat/lon * C. UTM Zone 33N D. British National Grid
Q5. Why is WGS84 the default in space GIS? A. It's the prettiest B. GPS and most satellite-derived positions are defined in WGS84 * C. It's the only one Python supports D. Historical accident

---

### Week 2: Vector vs raster, and map projections

Track: Ground Station Operator
URL: https://launchdetect.com/academy/week/2/

Summary: Vector vs raster is the fundamental data-model split in GIS. Then comes projections: how do you flatten a sphere onto a screen? This week covers Web Mercator, UTM, equirectangular, and polar stereographic, and when each is right.

Objectives:
- Distinguish vector from raster GIS data
- Pick an appropriate projection for a given task
- Understand why Web Mercator distorts polar regions
- Choose UTM, equirectangular, or polar stereographic correctly

Opening question (place-based hook): Why does Greenland look bigger than all of Africa on Google Maps — when in real life Africa is 14 times bigger? It is not a mistake — it is a choice the map-makers made. The same choice happens every time you flatten a sphere onto a screen. This week, you will see exactly what's getting distorted, and you'll learn how to pick a projection that's honest for your work.

Connecting to Hawaiʻi — Projections and Hawaiʻi's place on the map: On most world maps you've seen in school, Hawaiʻi looks like a tiny speck near the edge. That is not because Hawaiʻi is small — it's because the projection used (Web Mercator) puts the focus on Europe and North America and inflates everything far from the equator, so tropical places like Hawaiʻi look smaller than they really are. Switch to a Pacific-centered projection (`+proj=merc +lon_0=180`) and suddenly Hawaiʻi is at the CENTER of the world map — because the Pacific Ocean really is the center of the world for the people who live in it. A map is never neutral. It always answers a question. The question European cartographers asked was 'how do I draw the Atlantic without breaking it?' The question we should ask is 'how do I draw the Pacific without breaking it?'

Hint: Try this: open Wikipedia, search 'Pacific-centered map.'
That is the world from a Hawaiian perspective. Notice how it changes which countries feel like neighbors and which feel far away.

Lab: Reproject a launch trajectory — Take a Falcon 9 launch trajectory in lat/lon and reproject it into UTM, Web Mercator, and equirectangular. Compare the visual results and the computed track lengths.
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-02/lab.ipynb
Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-02/lab.ipynb

Primer: Geospatial data comes in two fundamental forms: vector and raster. Choosing the right model for a task is one of the highest-leverage decisions in a GIS workflow because almost every operation downstream (storage, querying, analysis, rendering) is asymmetric between the two.

Vector data: points, lines, polygons

Vector data represents the world as discrete geometric objects with attributes. A spaceport is a point. A rocket's ground track is a line (or a multiline if it crosses the dateline). A range-safety exclusion zone is a polygon. Each feature has a geometry and an attribute table — like a spreadsheet where one column happens to be geometry. Vector data is best for things that are precisely located and countable. The GeoJSON format is the lingua franca:

```json
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {"type": "Point", "coordinates": [-80.6, 28.6]},
      "properties": {"name": "Kennedy Space Center", "operator": "NASA"}
    }
  ]
}
```

Raster data: gridded pixels

Raster data is a grid of cells, each holding a value. A satellite image is a raster. A digital elevation model is a raster. A thermal brightness-temperature map is a raster. The grid is defined by an extent, a resolution (cell size in real-world units), and a coordinate system; every cell has an implicit position derived from its (row, col) index.
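That implicit cell-to-coordinate mapping is just affine arithmetic. A minimal sketch — plain Python, no GIS libraries, and a made-up 0.5° global grid purely for illustration — of how a (row, col) index resolves to cell-center coordinates:

```python
# Hypothetical raster grid: global extent, 0.5-degree cells, origin at the
# top-left corner (lon -180, lat 90), rows increasing southward.
X0, Y0 = -180.0, 90.0   # world coordinates of the grid's top-left corner
DX, DY = 0.5, 0.5       # cell size in degrees (x and y)

def cell_center(row: int, col: int) -> tuple[float, float]:
    """Return the (lon, lat) of the center of cell (row, col)."""
    lon = X0 + (col + 0.5) * DX   # half a cell in from the left edge
    lat = Y0 - (row + 0.5) * DY   # rows count downward from the top
    return lon, lat

print(cell_center(0, 0))    # → (-179.75, 89.75)
print(cell_center(10, 20))  # → (-169.75, 84.75)
```

Real raster libraries generalize this with a full affine transform (which can also encode rotation and shear), but the idea is the same: the grid stores no coordinates per cell, only the rule for computing them.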
Raster data is best for continuous fields — temperature, elevation, reflectance — and for anything sampled at regular intervals. The GeoTIFF format is the lingua franca. rasterio is the standard Python library:

```python
import rasterio

with rasterio.open('goes18_band7.tif') as src:
    band7 = src.read(1)        # numpy array, shape (height, width)
    transform = src.transform  # maps (row, col) → (x, y)
    print(f"Shape: {band7.shape}, dtype: {band7.dtype}, CRS: {src.crs}")
```

Map projections: flattening the sphere

Earth is approximately an ellipsoid. Screens are flat. Every projection is a mathematical compromise: you can preserve shape (conformal), area (equal-area), or distance (equidistant), but never all three. The right projection depends on what you're trying to do.

Web Mercator (EPSG:3857) is conformal — it preserves angles, which makes it ideal for slippy web maps where the user pans and zooms freely. It catastrophically distorts area near the poles (Greenland looks the size of Africa). Never compute areas in Web Mercator.

Universal Transverse Mercator (UTM) divides Earth into 60 zones, each 6° wide. Within a single zone, UTM is conformal AND nearly equidistant. It's the right projection for any local analysis where you need accurate distance and area at city or regional scale. Cape Canaveral is in UTM zone 17N (EPSG:32617).

Equirectangular (plate carrée, EPSG:32662) just plots latitude vs longitude as x vs y. It's the cheapest projection — a one-to-one map of degrees to pixels — and it's how most full-globe satellite imagery is published (including GOES "geographic" products). It badly stretches near the poles but is fine near the equator.

Polar stereographic (EPSG:3413 north, 3031 south) is the right choice when you actually care about the poles: Antarctic sea-ice extent, Arctic shipping lanes, polar-orbiting satellite ground tracks at high latitudes.
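The distortion trade-offs above can be made concrete. Mercator's scale factor grows as 1/cos(latitude), which you can verify numerically from the spherical Web Mercator formula using only the standard library (a sketch — 6378137 m is Web Mercator's reference sphere radius):

```python
import math

R = 6378137.0  # Web Mercator's spherical Earth radius, meters

def mercator_y(lat_deg: float) -> float:
    """Northing in Web Mercator meters for a latitude (spherical formula)."""
    phi = math.radians(lat_deg)
    return R * math.log(math.tan(math.pi / 4 + phi / 2))

def map_vs_ground_ratio(lat_deg: float, dlat: float = 0.01) -> float:
    """Map meters per true ground meter when moving north at this latitude."""
    map_m = mercator_y(lat_deg + dlat) - mercator_y(lat_deg)
    ground_m = R * math.radians(dlat)  # true north-south arc on the sphere
    return map_m / ground_m

print(f"{map_vs_ground_ratio(0):.3f}")   # ≈ 1.0 at the equator
print(f"{map_vs_ground_ratio(60):.3f}")  # ≈ 2.0 at 60° latitude
```

At 60° the ratio is ~2: one real meter of ground is drawn as two map meters, which is exactly the "a Web Mercator meter is roughly half a real meter" trap from Week 1.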
Picking a projection for a launch trajectory

A SpaceX Falcon 9 from Cape Canaveral arcs east over the Atlantic, ascending through ~120 km. To display the trajectory on a globe, equirectangular or Web Mercator is fine. To measure the downrange distance, project to UTM 17N for the early ascent, then use a great-circle calculation for the longer arc. To compute the maritime exclusion polygon (overlapping shipping lanes), keep everything in WGS84 lat/lon and use geodesic operations.

The lab walks through reprojecting a real Falcon 9 trajectory through three projections and shows visually (and numerically) what each one does to your distances. The takeaway you'll keep for the rest of the course: choose a projection per task, not per workflow. A single pipeline often touches three or four CRSes.

Reflection question (closing): Which projection do you think a map of Hawaiʻi should use? Why? What would change about your reasoning if the map were going to be used for navigation, vs. for teaching kids in school, vs. for measuring a coral reef's size?

Quiz:
Q1. A satellite image is what kind of GIS data? A. Vector B. Raster * C. Both D. Neither
Q2. Web Mercator is bad for showing what? A. Streets in Manhattan B. Antarctic ice extent * C. Houston city limits D. Local highway networks
Q3. UTM zones are how wide? A. 3 degrees B. 6 degrees * C. 10 degrees D. 15 degrees
Q4. Equirectangular is also called: A. Mercator B. Plate carrée * C. Polar azimuthal D. Albers
Q5. For a polar orbiting satellite ground track, which projection is most appropriate? A. Web Mercator B. UTM C. Polar stereographic * D. Albers conic

---

### Week 3: QGIS hands-on: load, style, export

Track: Ground Station Operator
URL: https://launchdetect.com/academy/week/3/

Summary: QGIS is the free and open-source workhorse of desktop GIS. This week is hands-on: load real launch-site data, query it, style it with rules-based symbology, and export a publication-quality map.
Objectives:
- Install QGIS and open a project
- Load vector (GeoJSON) and raster (GeoTIFF) layers
- Use the attribute table and run a simple query
- Style a layer with rules-based symbology
- Export a print-quality map composition

Opening question (place-based hook): If you had a map of every reef break and every fishing ground around Oʻahu, who would you share it with? Who would you keep it from? Why? QGIS is a free tool for making and sharing maps. Power and responsibility ride together: the same map can protect a place by being public, or harm it by being too public. This week, learn the tool — then think about who your maps serve.

Connecting to Hawaiʻi — QGIS and ʻāina-based mapping: Several Hawaiʻi-based organizations use QGIS every day: Kuaʻāina Ulu ʻAuamo for community-based fisheries mapping, the Office of Hawaiian Affairs for ceded-land tracking, watershed-restoration groups for ahupuaʻa-scale planning. QGIS is free and runs on any computer — that matters because expensive GIS software has historically been a barrier that kept community groups out. The whole reason QGIS exists is so that decision-making about a place isn't limited to people who can afford a $5,000/year ArcGIS license. Learning it is, in a small way, a political act.

Hint: When you make your first map, ask: who is this map FOR? Who could it help? Who could it harm if seen by the wrong audience? A good map-maker thinks about consent as carefully as a good photographer does.

Lab: Build your first space map in QGIS — Open QGIS, load a global spaceport GeoJSON and a basemap raster, style by country and operator, label by name, export to PDF.
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-03/lab.ipynb
Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-03/lab.ipynb

Primer: QGIS is the free and open-source desktop GIS.
It is the industry-standard alternative to ESRI's ArcGIS Pro, used by governments, NGOs, research institutions, and a huge swath of professional GIS practitioners. For an academy graduate, fluency in QGIS is non-negotiable — and unlike commercial alternatives, the entire toolchain is free, cross-platform, and installable in 5 minutes.

Installing QGIS

Download the long-term-release (LTR) version from qgis.org/download. The LTR is what professional teams standardize on; the regular release ships every 4 months and has more bugs. At time of writing, QGIS 3.34 LTR is current. On macOS, the official installer is a .dmg that puts QGIS in /Applications. On Windows, the official installer is an MSI. On Linux, your distribution's package manager almost always has a recent build. After installation, open QGIS — you'll see a project view, a layers panel on the left, a map canvas in the center, and a toolbox of geoprocessing tools on the right.

Layers: vector + raster + basemap

A QGIS project is a stack of layers. Each layer is either vector (GeoJSON, Shapefile, GeoPackage, etc.) or raster (GeoTIFF, NetCDF, COG, etc.). Add a layer via Layer → Add Layer → Add Vector Layer, or just drag-and-drop a file onto the canvas. For this week's lab, you'll load three things:
- A spaceport GeoJSON (provided in the lab repo): 20 active orbital launch sites with attributes (name, country, operator, vehicles, status).
- A Natural Earth basemap: a low-resolution country boundary layer that gives geographic context.
- An optional OpenStreetMap tile layer, added via the QuickMapServices plugin, for satellite-imagery-style detail at high zoom.

The attribute table

Every vector layer has an attribute table — a spreadsheet of the layer's properties. Right-click the layer → Open Attribute Table. From there you can filter, sort, and run expressions. Try the expression "operator" = 'SpaceX' to filter for SpaceX pads only. The filter applies live to the map. Attributes are also what you style by.
Layer styles can be uniform (every feature the same), categorized (group features by an attribute value and assign each group a color), graduated (continuous values binned into ranges), or rules-based (the most flexible: an arbitrary expression per rule). For the lab, you'll use a categorized style on the operator field — every operator gets a distinct color.

Symbology, labels, and basemaps

Open Layer Properties → Symbology. Choose Categorized, set the Column to operator, click Classify, then assign each category a color from the QGIS palette. The legend updates automatically. For labels, go to Layer Properties → Labels. Choose Single Labels, set the field to name, then choose a font (Inter or Helvetica look professional), a buffer (white halo, so labels are readable over any basemap), and a placement rule (offset from point, with leader line for overlapping labels).

Print layout: from screen to PDF

A QGIS map isn't done until it's exported. Open Project → New Print Layout. Give the layout a name. In the layout window, add a map item (the canvas content), then add a title, a legend (automatically pulled from the layers panel), a scale bar, a north arrow, and a data attribution text box in the corner. Export to PDF at 300 DPI. The result is print-publication ready — the same quality you'd expect from a professional cartographer's deliverable.

The five must-know QGIS shortcuts

- Ctrl+Shift+S — save project
- Ctrl+J — zoom to layer
- F7 — toggle attribute table
- F9 — toggle field calculator
- Ctrl+Shift+M — toggle measure-distance tool (uses geodesic by default)

QGIS is the foundation. Every subsequent week's lab can be opened, validated, and visualized in QGIS — even when the lab itself runs in Python. Get comfortable here.

Reflection question (closing): Imagine you are mapping something important to your community — a sacred site, a learning trail, a community garden, a favorite surf break. What information do you include on the public version?
What stays in a private version, and why?

Quiz:
Q1. QGIS is: A. Proprietary software B. Free and open source * C. An online-only service D. Built by ESRI
Q2. An attribute table is: A. The map legend B. A spreadsheet of feature properties * C. The projection definition D. Style settings
Q3. Rules-based symbology lets you: A. Style features by attribute conditions * B. Print to PDF C. Connect to PostGIS D. Calculate buffers
Q4. GeoTIFF is: A. A vector format B. A raster format with embedded georeference * C. A 3D format D. A web-only format
Q5. QGIS print layouts let you compose: A. Code B. Print-quality maps with legends and scale bars * C. SQL queries D. Python plugins

---

### Week 4: Plotting global launch sites (Capstone 1 week)

Track: Ground Station Operator
URL: https://launchdetect.com/academy/week/4/

Summary: Track 1 culminates here: every active orbital launch pad on Earth, geocoded, attributed (country, operator, status, vehicles), styled, and mapped. The week 4 lab IS the capstone start — you finish it for cert 1.

Objectives:
- Compile the world's active orbital launch pads into a GeoJSON
- Compute nearest-neighbor distances between spaceports
- Identify which spaceports share an inclination band
- Produce a styled global atlas map ready for publication

Opening question (place-based hook): Hawaiʻi's Pacific Missile Range Facility on Kauaʻi is one of the few US launch sites in the Pacific. Have you ever seen one of its launches? PMRF (the Pacific Missile Range Facility at Barking Sands) is a launch site too — for sounding rockets and missile-defense tests. It is on the global atlas you're about to build. So is every other active orbital pad on Earth.

Connecting to Hawaiʻi — PMRF and the Pacific launch network: When you compile the global launch-site atlas for this capstone, you'll include the major orbital pads — Cape Canaveral, Kourou, Baikonur, Wenchang, etc.
But you'll also notice that the Pacific has its own quiet network: PMRF on Kauaʻi (suborbital + missile defense), Mahia in New Zealand (Rocket Lab Electron), and Tanegashima in Japan (JAXA) — with Wallops on the US east coast (Rocket Lab Electron + Antares) playing the same quiet role on the Atlantic side. The Pacific has been a launch theater for sixty years. Hawaiʻi sits in the middle of it.

Hint: Add PMRF to your atlas as a 'suborbital' entry. Not every launch site reaches orbit — but every one of them matters for what flies overhead.

Lab: Global Launch Site Atlas (capstone start) — Build a GeoJSON FeatureCollection of all currently-active orbital launch pads worldwide. Style by operator and country. Export the styled QGIS map to PDF. This is the deliverable for Capstone 1.
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-04/lab.ipynb
Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-04/lab.ipynb

Primer: This week is the synthesis of the first three. Coordinate systems are settled (Week 1). Vector vs raster, projections — settled (Week 2). QGIS — settled (Week 3). Now we apply all three to a real space-domain dataset: every active orbital launch pad on Earth.

Defining "active" and "orbital"

The world has dozens of launch sites; most are not active, and many are suborbital-only. For this week's atlas, a launch pad qualifies if it has hosted at least one orbital launch attempt in the past 24 months. That filter leaves roughly 20–25 sites — a number that fits cleanly on a single map and a single dataset. "Orbital" means the launch was attempting to reach a closed orbit around Earth (LEO and above), not a suborbital trajectory like a sounding rocket or New Shepard. The distinction matters because the launch-pad infrastructure for orbital launches (vertical integration, range safety, downrange recovery) is fundamentally different from suborbital pads.

The attributes that matter

A useful spaceport feature includes:
- name — the canonical name, e.g. "Cape Canaveral Space Force Station"
- cc — ISO 3166-1 alpha-2 country code, e.g. "US"
- operator — the agency or company that runs the pad: NASA, SpaceX, ULA, ESA, Roscosmos, etc.
- vehicles — list of active launch vehicles flown from this pad
- status — "active" / "proposed" / "retired"
- first_orbital_launch — year of first orbital launch from the site
- latest_orbital_launch — year of most recent orbital launch
- lat, lon — coordinates to 4 decimal places minimum (~10 m precision)

Sources you can cite

This is where rigor distinguishes a good atlas from a Wikipedia copy. Cite primary sources:
- The UN Office for Outer Space Affairs publishes the authoritative registry of objects launched into outer space.
- Each launch operator publishes a "press kit" PDF for each launch with the pad ID and exact coordinates.
- The FAA AST (Office of Commercial Space Transportation) publishes the licensed US commercial spaceports.
- LaunchDetect's own spaceport atlas tracks 17 active orbital sites with continuously updated coordinates.

Nearest-neighbor and inclination bands

With the GeoJSON loaded, two analytical questions become natural:

Nearest neighbor. For each pad, what is the nearest other orbital pad? Use a geodesic distance (not planar) — pads can be on different continents and the great-circle distance is what matters. shapely's planar distance is wrong here; use geopy.distance.geodesic or PostGIS ST_DistanceSphere.

Inclination band. A pad at latitude φ can launch directly into orbits with inclination ≥ |φ|. So Kourou (5.2° N) can launch equatorial orbits cheaply; Plesetsk (62.9° N) cannot. Group your pads into inclination bands (equatorial, mid-inclination, polar) and visualize.

The capstone

The Week 4 lab is the start of Capstone 1: Global Launch Site Atlas. You'll build the GeoJSON, style and label it in QGIS, and export an A2-sized PDF map. The full rubric is on the capstone page; finishing it earns the Certified Ground Station Operator credential. A finished atlas is not just a map.
It's a citation-grade dataset that any space-domain researcher can use. Coordinate precision must be defensible. Attribute values must be sourced. The visual styling must enable the reader to draw conclusions at a glance: which countries cluster geographically? Which operators have monopoly access to which inclinations? Which sites have surged in activity post-2020? Track 1 closes here. Going into Track 2 (Orbital Analyst), you'll layer orbital mechanics onto this base. Every TLE you propagate in Track 2 will be referenced back to one of the pads you mapped this week. Reflection question (closing): Your atlas is going to be a public document. When you publish it, what does it tell readers about how the world is organized? Who controls which pads? Which countries get to launch, and which don't? What does that say about who gets to ask questions about space? Quiz: Q1. Which spaceport is most equatorial? A. Cape Canaveral B. Kourou * C. Vandenberg D. Plesetsk Q2. Which spaceport is best for polar orbits? A. Cape Canaveral B. Kourou C. Vandenberg * D. Wenchang Q3. Equatorial launch sites are preferred for what orbit? A. Polar B. Sun-synchronous C. Geostationary * D. Molniya Q4. How many active orbital launch sites operate today, approximately? A. 5 B. 10 C. 20 * D. 100 Q5. Starbase is operated by: A. NASA B. SpaceX * C. ULA D. Blue Origin --- ### Week 5: Spatial operations: joins, buffers, intersects, dissolve Track: Orbital Analyst URL: https://launchdetect.com/academy/week/5/ Summary: The core verbs of spatial analysis. Spatial joins, buffers, intersections, and dissolve — taught with launch-site and airspace-exclusion-zone data. 
Objectives: - Run a spatial join in QGIS and in geopandas - Compute buffers and explain when geodesic vs planar matters - Run intersection, union, and difference operations - Use dissolve to aggregate features by attribute Opening question (place-based hook): When a rocket launches from Vandenberg in California heading west into the Pacific, where do the spent stages land? How does that affect Hawaiian waters? Every coastal launch produces a maritime exclusion zone — a polygon vessels must avoid. The math for that polygon is spatial-analysis 101: buffers, intersections, dissolve. Learn it, and you'll understand why fishermen on certain days get told 'not today.' Connecting to Hawaiʻi — Maritime exclusion and Pacific fisheries: A SpaceX Falcon 9 launching south from Vandenberg into a polar orbit drops its first stage roughly 600 km downrange — toward the open Pacific. NOTMAR (Notice to Mariners) advisories define the no-fishing-allowed polygons days in advance. Hawaiian fishermen who fish the Pacific Northwest's offshore grounds occasionally get those notices. The polygons are exactly the kind of geodesic-buffered shape you'll build this week — but instead of demo data, real fishermen plan around them. Hint: The same math used to build a launch exclusion polygon builds the polygons defining Marine Protected Areas around Hawaiian reefs. Learn this technique once; use it twice. Lab: Compute the maritime exclusion zone for a launch — Take a Falcon 9 launch from Cape Canaveral. Buffer the predicted downrange trajectory by 50 km. Intersect with shipping lanes from AIS data. Output the maritime exclusion polygon as GeoJSON. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-05/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-05/lab.ipynb Primer: Spatial analysis is the verbs of GIS: combining geometries to answer questions. 
This week introduces the five operations that show up in every space-domain workflow — spatial joins, buffers, intersections, dissolve, and overlay. Spatial joins A spatial join attaches attributes from one layer to another based on a spatial relationship, not a key-value match. The classic example: given launch detection points and country polygons, attach each detection's country (so you can ask "which country had the most launches this year?"). In geopandas: import geopandas as gpd detections = gpd.read_file('detections.geojson').to_crs(epsg=4326) countries = gpd.read_file('countries.geojson').to_crs(epsg=4326) joined = detections.sjoin(countries, how='left', predicate='within') The predicate can be within, intersects, contains, touches, or crosses. Choose carefully — within is exclusive of the boundary, intersects includes it. For a launch detection on a coastal border, this matters. Buffers: planar vs geodesic A buffer expands a geometry by a distance. The catch in space GIS: distance on the Earth's curved surface is not the same as Euclidean distance in a projected plane. A 50 km planar buffer in Web Mercator at 60° latitude is actually a ~25 km buffer on the ground. Every range-safety exclusion zone is a geodesic buffer, computed on the WGS84 ellipsoid. from shapely.geometry import Polygon from pyproj import Geod geod = Geod(ellps='WGS84') # Buffer Cape Canaveral by 50 km on the ellipsoid: one point per 5° of bearing lat, lon = 28.6, -80.6 ring = [geod.fwd(lon, lat, az, 50_000)[:2] for az in range(0, 360, 5)] # fwd returns (lon, lat, back_az) exclusion = Polygon(ring) Intersection, union, difference Once you have two polygons, you can ask three questions: Intersection — what's in both. The overlap of a launch exclusion zone with a shipping lane is the area that must be cleared. Union — what's in either. Combining two adjacent NOTAM zones into a single advisory. Difference — what's in A but not B. A launch corridor minus the recovery zone equals the "transit" portion.
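The three set operations can be tried directly in shapely. A minimal planar sketch with toy rectangles (hypothetical coordinates; real range-safety zones use geodesic buffers on the ellipsoid):

```python
from shapely.geometry import box

# Toy planar stand-ins: an exclusion zone and a shipping lane
exclusion = box(0.0, 0.0, 2.0, 2.0)  # launch exclusion zone
lane = box(1.0, 1.0, 3.0, 3.0)       # shipping lane

must_clear = exclusion.intersection(lane)  # in both: the area vessels must leave
combined = exclusion.union(lane)           # in either: one merged advisory
transit = exclusion.difference(lane)       # in A but not B: zone minus the lane

print(must_clear.area, combined.area, transit.area)  # 1.0 7.0 3.0
```

The same three methods work unchanged on real polygons loaded from GeoJSON; only the coordinate handling (projected vs geodesic) changes.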
Dissolve: aggregating features Dissolve merges multiple features into one based on a shared attribute. A dataset of every Falcon 9 launch detection can be dissolved by year, producing one polygon (or multi-point) per year for time-series visualization. In geopandas: by_year = detections.dissolve(by='launch_year', aggfunc='sum') Overlay Overlay combines two layers with a set operation applied across every feature pair (how='intersection', 'union', 'identity', 'difference', or 'symmetric_difference'), producing a new layer where each output polygon has attributes from both inputs. Use it sparingly — overlays explode geometry counts and can be slow on big inputs. This week's lab: range-safety maritime exclusion Every Falcon 9 launch from Cape Canaveral produces a NOTMAR (Notice to Mariners) defining a maritime exclusion zone for the launch corridor. The lab reconstructs one of these: take the predicted trajectory polyline, geodesically buffer by 50 km, intersect with global AIS shipping lanes (from MarineCadastre.gov), and output the exclusion polygon as GeoJSON. The output is directly comparable to the published NOTMAR. This is the same logic LaunchDetect's AIS layer uses in production — minus the real-time AIS feed, which is Week 19. Reflection question (closing): Whose interests does the maritime exclusion zone protect? Whose are left out? When the polygon goes up for two days, who pays the cost — and who benefits from the launch? Quiz: Q1. A spatial join joins features by: A. Their attribute table only B. Their spatial relationship (within, intersects, etc.) * C. Random sampling D. Date overlap Q2. A geodesic buffer is computed: A. On a flat plane B. On the WGS84 ellipsoid (accurate at any latitude) * C. In Web Mercator pixels D. By bounding box Q3. Dissolve aggregates features by: A. A shared attribute value * B. Distance C. Time D. ID Q4. ST_Intersects returns: A. True if geometries share any point * B. Only the intersection geometry C. True only for full overlap D. A distance Q5. When is planar buffer wrong?
A. Always B. Near the equator C. When buffer is large relative to Earth's curvature * D. Never --- ### Week 6: PostGIS: spatial SQL fundamentals Track: Orbital Analyst URL: https://launchdetect.com/academy/week/6/ Summary: PostGIS turns PostgreSQL into a full GIS engine. Spatial data types, ST_ functions, GIST indexes — the toolkit for serious geospatial backends. Objectives: - Install PostGIS and load a shapefile - Write SELECT queries using ST_Within, ST_Distance, ST_Intersects - Create a spatial index (GIST) and explain its impact - Build a query that finds all launches within 100 km of a coastline Opening question (place-based hook): If you were a state agency tracking which streams empty into which bays in Hawaiʻi, how would you store that data — a spreadsheet, or a database? Spreadsheets work until they don't. PostGIS is what real geospatial teams use when 'works on my laptop' isn't good enough. This week, you'll learn the queries that power production GIS — the same kind that monitor stream-flow alerts statewide. Connecting to Hawaiʻi — PostGIS and the Commission on Water Resource Management: The State of Hawaiʻi's Commission on Water Resource Management tracks every stream gauge, every aquifer, every well permit. Behind it is a PostGIS-style spatial database that lets staff ask questions like 'which stream gauges show critical low-flow conditions right now AND are upstream of a community water intake?' That is exactly the kind of query you'll write this week. The query pattern is universal — replace 'stream gauge' with 'launch detection' and you have LaunchDetect's production database. Hint: Watershed management, reef protection, fishery enforcement, ceded-land tracking — all of them in Hawaiʻi rely on PostGIS-style spatial queries. Learning this opens doors. Lab: Find every launch within 100 km of a coastline — Load a global coastlines table and a launch-detection points table into PostGIS. 
Write the spatial SQL query that returns all launches within 100 km of any coastline, sorted by distance. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-06/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-06/lab.ipynb Primer: PostGIS is the spatial extension to PostgreSQL. It turns the world's most-loved relational database into a full-featured GIS engine: spatial data types, hundreds of ST_* functions, GIST indexes for sub-millisecond spatial queries, and the ability to combine all of that with the regular relational and JSON capabilities you already use Postgres for. For any serious space-GIS backend — including LaunchDetect's production pipeline — PostGIS is the default. This week is your fluency primer. Installation The easiest path is Docker: docker run -d --name pg-spatial \ -e POSTGRES_PASSWORD=academy \ -p 5432:5432 \ postgis/postgis:16-3.4 Then enable PostGIS in your database: CREATE EXTENSION IF NOT EXISTS postgis; SELECT PostGIS_Version(); -- confirms installation Spatial data types PostGIS adds two core spatial types: GEOMETRY (planar) and GEOGRAPHY (geodesic); a third, RASTER, ships as the separate postgis_raster extension. The decision between geometry and geography is the most important one in your schema: GEOMETRY — fast, works in any projected coordinate system. Use for local-scale analysis (within a UTM zone, within a city). GEOGRAPHY — slower, but distances and areas are correct on the WGS84 ellipsoid regardless of where on Earth. Use for global-scale analysis.
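To make the geometry-vs-geography tradeoff concrete, here is a small pure-Python comparison (spherical-Earth haversine, which stays within about 0.5% of PostGIS's ellipsoidal numbers): treating SRID 4326 degrees as a flat grid doubles an east-west distance at 60° N.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean radius; PostGIS geography uses the full WGS84 ellipsoid

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance: what a geography-typed ST_Distance approximates."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def naive_planar_km(lat1, lon1, lat2, lon2):
    """The trap: geometry in SRID 4326 measured as if degrees were a flat grid."""
    return 111.0 * math.hypot(lat2 - lat1, lon2 - lon1)  # ~111 km/degree holds only at the equator

# 10 degrees of longitude along the 60th parallel:
geodesic = haversine_km(60, 0, 60, 10)   # ~555 km on the ground
planar = naive_planar_km(60, 0, 60, 10)  # 1110 km: double the true distance
```

The error grows with latitude, which is why the geometry type is reserved for local-scale work in a suitable projection.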
Define a launch detections table: CREATE TABLE detections ( id bigserial PRIMARY KEY, detected_at timestamptz NOT NULL, vehicle text, brightness_k real, -- WGS84 geographic; SRID 4326 = lat/lon on WGS84 position geometry(Point, 4326) NOT NULL ); CREATE INDEX detections_position_gix ON detections USING GIST (position); Loading data Load a GeoJSON FeatureCollection via ogr2ogr (from GDAL): ogr2ogr -f PostgreSQL "PG:host=localhost user=postgres dbname=academy" \ detections.geojson -nln detections -append Or via Python with geopandas: import geopandas as gpd from sqlalchemy import create_engine gdf = gpd.read_file('detections.geojson') engine = create_engine('postgresql://postgres:academy@localhost:5432/academy') gdf.to_postgis('detections', engine, if_exists='replace', index=False) The ST_* family The functions you'll use 90% of the time: ST_Within(a, b) — is a entirely inside b? ST_Intersects(a, b) — do a and b share any point? ST_Distance(a, b) — distance between a and b (units depend on SRID; geography type returns meters) ST_Buffer(geom, dist) — expand a geometry by dist (planar; for geodesic, cast to geography) ST_DWithin(a, b, dist) — true if a and b are within dist of each other (uses index efficiently) ST_Transform(geom, srid) — reproject between coordinate systems GIST indexes: the secret to speed Without a spatial index, every spatial query is a full table scan. With a GIST index, PostgreSQL uses an R-tree to prune to candidate features in O(log n) time. Always create a GIST index on the geometry column. Always. The lab You'll load the Natural Earth global coastline polygons table and the LaunchDetect launches table into PostGIS, then write the spatial SQL query that finds every detection within 100 km of any coastline, sorted by distance. This single query is the core of LaunchDetect's coastal-spaceport heuristic for ranking detection confidence (a thermal hotspot 500 km from any coast is more likely a wildfire than a launch). 
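The lab's coastline query can be sketched from Python. A hedged sketch, not the graded answer: the table and column names (detections.position, coastlines.geom) follow this week's schema examples, the GROUP BY collapses multiple nearby coastline features to one row per detection, and the DB-API connection is assumed rather than shown.

```python
# Core lab query: every detection within 100 km of any coastline, nearest first.
# Casting to ::geography makes ST_DWithin and ST_Distance work in meters.
NEAR_COAST_SQL = """
SELECT d.id,
       d.detected_at,
       MIN(ST_Distance(d.position::geography, c.geom::geography)) AS meters_to_coast
FROM detections AS d
JOIN coastlines AS c
  ON ST_DWithin(d.position::geography, c.geom::geography, %(meters)s)
GROUP BY d.id, d.detected_at
ORDER BY meters_to_coast;
"""

def launches_near_coast(conn, km=100):
    """Run the query on an open DB-API connection (e.g. from psycopg2.connect)."""
    with conn.cursor() as cur:
        cur.execute(NEAR_COAST_SQL, {"meters": km * 1000})
        return cur.fetchall()
```

Casting per-query works, but for production you would store a geography column outright (or add a functional index on the cast) so the GIST index is used.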
Reflection question (closing): PostGIS makes spatial data easier to query at scale. Easier-to-query also means easier-to-surveil. Where's the line between useful and intrusive? How would you decide? Quiz: Q1. PostGIS adds what to PostgreSQL? A. JSON support B. Spatial data types and functions * C. Time series D. Web hosting Q2. ST_Distance returns distance in: A. Always meters B. Always degrees C. The units of the input geometry's SRID * D. Kilometers Q3. GIST index is used for: A. Numeric columns B. Spatial columns * C. Text search D. Sequence generation Q4. ST_Within(a, b) returns true when: A. a is fully inside b * B. a touches b C. a equals b D. a is north of b Q5. SRID 4326 means: A. WGS84 lat/lon * B. Web Mercator C. UTM Zone 26 D. OSGB36 --- ### Week 7: Orbital mechanics primer: TLEs and Keplerian elements Track: Orbital Analyst URL: https://launchdetect.com/academy/week/7/ Summary: What is a TLE? What is inclination? What is an ascending node? This week is the gateway from 'satellite' as abstract object to 'satellite' as a geometric trajectory you can plot, predict, and analyze. Objectives: - Read a TLE and identify each of the 6 Keplerian elements - Distinguish LEO / MEO / GEO / Molniya / SSO from inclination + altitude - Explain what a ground track is and how it relates to orbital period - Understand why GPS orbits at MEO and ISS at LEO Opening question (place-based hook): When you look up at the night sky from Mauna Kea, you can see satellites moving against the stars. Where are they coming from? Every satellite has six numbers (Keplerian elements) that describe its orbit, distributed worldwide as a TLE. This week you'll learn to read one. Once you can, you'll know which satellites are passing overhead at any moment. Connecting to Hawaiʻi — Mauna Kea and Pacific astronomy: Mauna Kea hosts some of the world's most powerful telescopes (a source of ongoing conversation about cultural responsibility and stewardship). 
The same dark skies that make Mauna Kea ideal for ground telescopes also make Hawaiʻi's high summits ideal for tracking satellites — the US Air Force has tracked satellites from the islands for decades, today from the Maui Space Surveillance Complex on neighboring Haleakalā. Whether the future of Mauna Kea is more telescopes, fewer telescopes, or something different, the science of orbital tracking — including the SGP4 propagation you'll learn — has been refined over decades of observations from Hawaiʻi. Hint: Tonight, look up at the sky at dawn or dusk (when satellites are sunlit but the ground is dark). Anything moving in a steady straight line that isn't blinking is a satellite. A TLE for it is on CelesTrak right now. Lab: Read your first TLE — Download the ISS TLE from CelesTrak. Parse it. Identify the 6 Keplerian elements. Compute the orbital period from the mean motion field. Verify against the published value. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-07/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-07/lab.ipynb Primer: Every object in orbit can be described, at any moment, by six numbers: the Keplerian elements. This week introduces them through the most-used real-world format for distributing them: the two-line element set or TLE. The six Keplerian elements Two-body orbital mechanics gives us six independent parameters that uniquely identify an orbit's shape and orientation in inertial space: Semi-major axis (a) — the orbit's "size" (half the longest diameter). For a circular orbit, this is the radius from Earth's center. Eccentricity (e) — how oval the orbit is. e = 0 is circular; 0 < e < 1 is elliptical; e = 1 is parabolic (escape). Inclination (i) — the tilt of the orbital plane relative to Earth's equator, in degrees. i = 0 is equatorial; i = 90 is polar; i > 90 is retrograde. Right ascension of the ascending node (Ω) — where the orbit crosses the equator going north, measured from the vernal equinox direction.
Argument of periapsis (ω) — where the closest point to Earth lies, measured within the orbital plane from the ascending node. True anomaly (ν) or mean anomaly (M) — where the satellite is in its orbit right now. The first five describe the orbit's geometry; the sixth places the satellite on it at a specific time (the epoch). The TLE format A two-line element set is a NORAD-defined plain-text format that encodes a satellite's mean Keplerian elements plus a few drag/perturbation terms in 69 characters per line: ISS (ZARYA) 1 25544U 98067A 24130.50145833 .00018539 00000-0 33188-3 0 9994 2 25544 51.6406 348.5395 0006703 117.9568 358.1729 15.50289267449420 Reading line 2 left to right: catalog number (25544), inclination (51.64°), RAAN (348.54°), eccentricity (0.0006703 — the leading 0. is implicit), argument of perigee (117.96°), mean anomaly (358.17°), mean motion (15.50289267 revs/day). From TLE to orbital regime The mean motion tells you the orbital period: period_minutes = 1440 / mean_motion. For the ISS at 15.50 revs/day, period is 92.9 minutes. From period and Kepler's third law you can back out the semi-major axis (~6,778 km from Earth's center, ~407 km altitude). Combined with inclination, you can classify the orbit: LEO equatorial: i ≈ 0°, altitude < 2000 km. Rare; tropical launch sites only. LEO mid-inclination: i at or just above the launch-site latitude. ISS (51.6°, the lowest inclination Baikonur's range-safety corridors allow). LEO sun-synchronous: i ≈ 98° (retrograde). Always passes the equator at the same local solar time. Used for Earth observation. MEO: 2,000–35,786 km. GPS (~20,200 km), GLONASS, Galileo. GEO: 35,786 km, i ≈ 0°. Period of one sidereal day (23 h 56 min); appears stationary above the equator. Used by GOES, Himawari, and almost every communications satellite. Molniya: highly elliptical, i ≈ 63.4°. Used by Russia for high-latitude coverage where GEO doesn't reach. Where TLEs come from The U.S.
Space Force's 18th Space Defense Squadron generates and publishes TLEs for every tracked object via Space-Track.org (account required) and the public mirror CelesTrak (no account, just curl). TLEs are updated daily and have an "epoch" timestamp — the further you propagate from the epoch, the more error accumulates. After ~1 week, accuracy degrades significantly; after ~30 days, refresh. The lab fetches the ISS TLE from CelesTrak, parses it with the sgp4 package's Satrec.twoline2rv (or by hand using string slicing per the format spec), and prints each Keplerian element with its physical meaning. By the end you'll be able to look at any TLE and roughly visualize the orbit. Reflection question (closing): There are 10,000+ active satellites in orbit. About 6,000 are Starlink. What does that tell you about who is using space — and who is making decisions about how it gets used? Quiz: Q1. TLE stands for: A. Two-Line Element set * B. Total Launch Estimate C. Tracking Latitude Elevation D. Telemetry Local Element Q2. ISS inclination is approximately: A. 0 degrees B. 28.5 degrees C. 51.6 degrees * D. 98 degrees Q3. A 98-degree inclination implies: A. GEO B. Sun-synchronous (polar) * C. Equatorial D. Molniya Q4. GEO altitude is approximately: A. 400 km B. 2,000 km C. 20,000 km D. 35,786 km * Q5. Mean motion in a TLE is in units of: A. Degrees per second B. Revolutions per day * C. Kilometers per hour D. Radians per minute --- ### Week 8: SGP4 propagation in Python with skyfield Track: Orbital Analyst URL: https://launchdetect.com/academy/week/8/ Summary: skyfield is Python's premier orbit-propagation library. SGP4 is the workhorse algorithm. This week you'll propagate the ISS, generate ground tracks, and produce a publishable GeoJSON.
Objectives: - Use skyfield to load a TLE and propagate it - Compute sub-satellite point (lat/lon directly below) at any UTC time - Generate a 24-hour ground track as a GeoJSON LineString - Handle the timing subtlety of converting between UTC, TT, and TLE epoch Opening question (place-based hook): If a satellite is overhead at exactly 7:42 PM tonight, where will it be at 7:43? At 8:42? SGP4 is how every satellite-tracking app on your phone answers that question. This week you'll write it yourself in 10 lines of Python. By the end, you can compute the ISS's ground track 24 hours into the future. Connecting to Hawaiʻi — Ground tracks over the Pacific: The International Space Station is on a 51.6° inclination orbit, which means its ground track passes over Hawaiʻi roughly once every few days. From Honolulu the ISS is visible (at favorable geometry) maybe 2–4 times a week, just before dawn or just after sunset. The Polynesian Voyaging Society has used satellite tracking as a teaching tool — the same SGP4 math gives the ISS's next pass over Hōkūleʻa wherever she's sailing. Different navigation traditions; same physics underneath. Hint: ISS Detector and Heavens-Above are free apps that show you when to look. Both use SGP4. After this week you'll know how they work. Lab: Generate a 24-hour ISS ground track — Load the current ISS TLE. Propagate it from now to now + 24 hours in 60-second steps. Compute the sub-satellite point at each step. Output as a GeoJSON LineString with timestamps. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-08/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-08/lab.ipynb Primer: A TLE describes where the orbit is; an SGP4 propagator computes where the satellite is at any given time. This week we connect the two using Python — the same code path that powers every satellite tracking app on the internet.
What SGP4 does Simplified General Perturbations 4 is the orbital propagation model standardized by NORAD for use with TLEs. It accounts for the dominant perturbations on a TLE's mean elements: Earth's oblateness (J2 and higher zonal terms), atmospheric drag, and — in its deep-space (SDP4) variant — lunar and solar gravity. It's accurate to within ~1 km for a fresh TLE and degrades over time as the underlying model assumptions diverge from reality. SGP4 is not the most accurate propagator available — for centimeter-precision (GPS satellites, geodetic missions) you'd use a numerical propagator with high-fidelity models — but it's accurate enough for almost every satellite-tracking and ground-station application, and it's standardized across the industry. skyfield: Python's reference implementation skyfield is a high-precision astronomy + satellite-propagation library that wraps the canonical SGP4 implementation and adds proper time-system handling (UT1 vs UTC vs TT vs TDB), Earth orientation, and a clean Python API. It is the right default choice; the older sgp4 standalone package is lower-level and lacks the time-system polish. from skyfield.api import EarthSatellite, load, wgs84 tle_lines = [ "1 25544U 98067A 24130.50145833 .00018539 00000-0 33188-3 0 9994", "2 25544 51.6406 348.5395 0006703 117.9568 358.1729 15.50289267449420", ] ts = load.timescale() iss = EarthSatellite(tle_lines[0], tle_lines[1], "ISS", ts) t = ts.now() geocentric = iss.at(t) subpoint = wgs84.subpoint(geocentric) print(f"ISS sub-satellite point: ({subpoint.latitude.degrees:.4f}, {subpoint.longitude.degrees:.4f}), alt {subpoint.elevation.km:.1f} km") Time matters Satellite propagation is exquisitely time-sensitive. A 1-second error means a ~7 km position error for an ISS-like LEO orbit. skyfield handles four time systems: UTC, UT1, TT (Terrestrial Time), and TDB (Barycentric Dynamical Time).
TLE epochs are expressed in UTC, but propagation ultimately depends on Earth's actual rotation angle (UT1), which drifts from UTC by up to ±0.9 seconds — skyfield handles the conversion automatically via the IERS (International Earth Rotation and Reference Systems Service) tables it downloads on first use. Always use skyfield's ts.utc(...), ts.tt(...), etc., to construct times. Never compute time offsets manually. Sub-satellite points and ground tracks The sub-satellite point is the lat/lon directly below the satellite (the point on Earth's surface intersected by the line from Earth's center through the satellite). For a 24-hour ground track, you propagate to a series of times, extract the sub-satellite point at each, and stitch them into a polyline. # ts, iss, and wgs84 carry over from the snippet above t0 = ts.now() times = [ts.tt_jd(t0.tt + i / 1440) for i in range(1440)] # 1-minute steps for 24 h subpoints = [wgs84.subpoint(iss.at(t)) for t in times] # propagate once per step positions = [(sp.longitude.degrees, sp.latitude.degrees) for sp in subpoints] One subtlety: ground tracks cross the antimeridian (180° / -180°) and must be split there into separate line segments for proper visualization, or the line will draw across the entire map. The lab notebook handles this with a simple jump-detection split. The lab You'll generate the ISS's next 24-hour ground track at 1-minute resolution (1,440 points) as a GeoJSON LineString with embedded timestamps. Then you'll visualize it in QGIS over a Natural Earth basemap, color-coded by altitude (use a small vertical exaggeration since ISS altitude varies only ~10 km over an orbit). This is the same pipeline LaunchDetect uses for the live satellite tracker in production — the only differences are streaming updates (Week 19) and a 3D globe instead of a 2D map (Week 18). Reflection question (closing): Knowing exactly when a satellite is overhead is useful — for spotting, for radio contact, for photography. It's also useful for adversarial purposes (timing operations to avoid surveillance, or timing them to USE surveillance).
What's the right way to teach a powerful skill? Quiz: Q1. SGP4 stands for: A. Simplified General Perturbations 4 * B. Solar Geostationary Path 4 C. Satellite Geoid Position 4 D. Standard GPS Propagation 4 Q2. skyfield is: A. A web app B. A Python library for high-precision astronomy and satellite propagation * C. A C library D. A QGIS plugin Q3. The sub-satellite point is: A. The closest ground station B. The point on Earth's surface directly below the satellite * C. The orbital periapsis D. The launch site Q4. TLE epochs are in: A. UTC * B. GPS time C. TT (Terrestrial Time) D. UT1 Q5. skyfield's `wgs84.subpoint()` returns: A. Lat/lon/elevation * B. Only lat C. Only lon D. ECEF coordinates --- ### Week 9: Ground-to-satellite line-of-sight and coverage Track: Orbital Analyst URL: https://launchdetect.com/academy/week/9/ Summary: Can a ground station see a given satellite right now? This is a question of geometry, line-of-sight, and (a little) atmosphere. This week you build the math and the code. Objectives: - Compute look angles (azimuth, elevation) from a ground station to a satellite - Identify pass start, max-elevation, and pass end times - Build a coverage cone polygon for a given sensor swath - Account for atmospheric refraction at low elevation angles Opening question (place-based hook): If you wanted to take a picture of the ISS streaking through the sky, when should you go out? Where should you look? It's a geometric question, not a guess. Given a TLE and your lat/lon, you can compute pass times to the second. This week is the math — and it's the foundation for everything from amateur radio to phone AR apps. Connecting to Hawaiʻi — Pass prediction for Pacific observers: Visibility of a satellite from your location depends on three things: where you are (lat/lon), where the satellite is (from its TLE), and whether the satellite is sunlit while you're in darkness.
From Hawaiʻi, late spring and early summer pre-dawn passes of the ISS are spectacular — the station is sunlit (because it's high enough that the sun still reaches it) while the ground is still in shadow. The math for predicting these passes is exactly the math you'll write this week. Hint: Try ISS Detector this week. Note when the next high-elevation pass is. Then run the Week 9 lab and verify the prediction yourself. Lab: Predict next ISS pass over a user-supplied location — Given a lat/lon, propagate the ISS and find the next overhead pass (max elevation > 30°). Output the pass start, max-elevation time + angle, and pass end as JSON. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-09/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-09/lab.ipynb Primer: Visibility from the ground is the bread and butter of practical satellite tracking. "When can I see the ISS from my backyard?" "Will Starlink overfly me tonight?" "Does my ground station have line-of-sight to GOES-19?" All three reduce to the same geometric question: given a satellite ephemeris and an observer location, when is the satellite above the observer's horizon, and at what direction (azimuth) and how high (elevation) in the sky? Azimuth and elevation From any point on Earth's surface, a satellite's position relative to you can be described by two angles: Azimuth — the compass direction in the horizontal plane, measured from true north (0°) clockwise. Due east is 90°, due south is 180°, due west is 270°. Elevation — the angle above the horizontal plane. 0° is the horizon; 90° is directly overhead (zenith); negative values are below the horizon. A satellite is "visible" when its elevation is greater than zero. For practical purposes (atmospheric absorption, trees, buildings), most ground stations consider visibility to start at ~10° elevation. Astronomy observations often require >30° elevation to escape atmospheric turbulence. 
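Under the hood, azimuth and elevation come from rotating the observer-to-satellite vector into the observer's local east-north-up frame. A minimal spherical-Earth sketch (no ellipsoid, Earth rotation, or light-time corrections, all of which skyfield handles for real work):

```python
import math

def ecef_km(lat_deg, lon_deg, alt_km, r_earth_km=6371.0):
    """Spherical-Earth ECEF position in km (ignores the WGS84 flattening)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = r_earth_km + alt_km
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def look_angles(obs_lat, obs_lon, sat_lat, sat_lon, sat_alt_km):
    """Azimuth/elevation from a ground observer to a satellite's sub-point + altitude."""
    ox, oy, oz = ecef_km(obs_lat, obs_lon, 0.0)
    sx, sy, sz = ecef_km(sat_lat, sat_lon, sat_alt_km)
    rx, ry, rz = sx - ox, sy - oy, sz - oz  # range vector, observer -> satellite
    lat, lon = math.radians(obs_lat), math.radians(obs_lon)
    # Rotate the range vector into the observer's local east-north-up frame
    east = -math.sin(lon) * rx + math.cos(lon) * ry
    north = (-math.sin(lat) * math.cos(lon) * rx
             - math.sin(lat) * math.sin(lon) * ry
             + math.cos(lat) * rz)
    up = (math.cos(lat) * math.cos(lon) * rx
          + math.cos(lat) * math.sin(lon) * ry
          + math.sin(lat) * rz)
    rng = math.sqrt(rx * rx + ry * ry + rz * rz)
    elevation = math.degrees(math.asin(max(-1.0, min(1.0, up / rng))))
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    return azimuth, elevation

# Satellite at the observer's zenith: elevation ~90 deg (azimuth is undefined there)
az, el = look_angles(20.0, -156.0, 20.0, -156.0, 400.0)
```

Positive `up` means the satellite is above the horizon; the `atan2(east, north)` convention yields 0° at north, 90° at east, matching the definitions above.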
Pass geometry A satellite pass has three key moments: Rise (AOS — acquisition of signal) — the satellite crosses 0° elevation from below. Culmination (TCA — time of closest approach) — the satellite reaches its maximum elevation during the pass. Set (LOS — loss of signal) — the satellite returns to 0° elevation and disappears. For an ISS pass, the entire cycle takes 4–10 minutes depending on geometry. A "great" pass has a culmination >70°; an unusable pass culminates <10°. Computing this with skyfield from skyfield.api import EarthSatellite, load, wgs84 ts = load.timescale() iss = EarthSatellite(line1, line2, "ISS", ts) # line1/line2: a current TLE observer = wgs84.latlon(40.7128, -74.0060) # NYC # Look for events in next 24 hours t0 = ts.now() t1 = ts.tt_jd(t0.tt + 1.0) times, events = iss.find_events(observer, t0, t1, altitude_degrees=10.0) # events: 0=rise, 1=culminate, 2=set for t, ev in zip(times, events): alt, az, _ = (iss - observer).at(t).altaz() print(f"{t.utc_iso()} event={ev} alt={alt.degrees:.1f}° az={az.degrees:.1f}°") Line-of-sight Pass visibility assumes nothing blocks the line of sight. For a real ground station, you also need to account for: Atmospheric refraction — the atmosphere bends light, making satellites appear ~0.5° higher than their geometric position near the horizon. skyfield applies refraction when you pass atmospheric conditions to altaz(), e.g. altaz(temperature_C=10.0, pressure_mbar=1010.0). Terrain horizon — a mountain to your west blocks satellites at low westward elevation. Real ground stations build a "horizon mask" with the local skyline. Building obstructions — for urban deployments, the local horizon mask is built from building footprints + height. Coverage cone From the satellite's perspective, the inverse question is: which observers can see me right now? The answer is a circular "footprint" on Earth's surface. For a LEO satellite at 400 km altitude with a 5° minimum elevation, the footprint radius is ~1,700 km — a circle some 3,400 km across, spanning most of the continental United States.
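The coverage-cone radius falls out of the Earth-center / observer / satellite triangle. A small sketch under a spherical-Earth assumption:

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius (spherical model)

def footprint_radius_km(alt_km, min_elev_deg):
    """Ground radius of the circle of observers who see the satellite above
    min_elev_deg. Law of sines in the Earth-center/observer/satellite triangle
    gives the nadir angle eta from sin(eta) = R*cos(elev) / (R + h); the Earth
    central angle is then 90 deg - elev - eta, and radius = R * central angle."""
    elev = math.radians(min_elev_deg)
    eta = math.asin(R_EARTH_KM * math.cos(elev) / (R_EARTH_KM + alt_km))
    central_angle = math.pi / 2 - elev - eta  # radians
    return R_EARTH_KM * central_angle

# ISS-like orbit (400 km): ~2,200 km radius down to the geometric horizon,
# shrinking to ~1,700 km with a 5-degree elevation mask
r_horizon = footprint_radius_km(400, 0)
r_masked = footprint_radius_km(400, 5)
```

The same function gives the ~81° central angle for GEO (alt_km=35_786, min_elev_deg=0), which is why GEO coverage stops short of the poles.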
The lab takes a user-supplied lat/lon and finds the next visible ISS pass within the next 24 hours, outputting AOS time, TCA time + max elevation, and LOS time as JSON. This is the same logic that powers launchdetect.com/satellite-tracker/'s "next visible pass" feature. Reflection question (closing): Visibility math also tells you when a satellite CAN'T see you. Why might that matter — for someone studying coral reefs (when does Sentinel-1 next image my reef?), or for a community planning ceremony (when is the sky free of bright passes?)? Quiz: Q1. Elevation angle of 0° means: A. Directly overhead B. On the horizon * C. Below the horizon D. Doesn't exist Q2. Azimuth is measured: A. From north, clockwise * B. From east, counter-clockwise C. From south, clockwise D. Up from horizon Q3. A satellite at 90° elevation is: A. On the horizon B. Directly overhead * C. Below the ground D. Just risen Q4. Atmospheric refraction matters most at: A. High elevation B. Low elevation * C. Nadir D. Apogee Q5. Pass prediction depends on: A. Ground station lat/lon/alt and the satellite ephemeris * B. Only the satellite name C. Date alone D. Weather --- ### Week 10: Spaceports and orbits (Capstone 2 week) Track: Orbital Analyst URL: https://launchdetect.com/academy/week/10/ Summary: Track 2 culminates here: combine ground station coverage analysis with orbital mechanics to answer the matching question — given an orbit, which spaceport? Given a spaceport, which orbits? The capstone delivers a ground-track coverage tool. Objectives: - Match spaceport latitude to feasible orbital inclinations - Explain why Kourou is the GEO launch capital - Identify which spaceports can serve sun-synchronous orbits - Build a coverage polygon for a hypothetical 1000-km swath sensor Opening question (place-based hook): Mahia Peninsula in Aotearoa New Zealand and Wallops in Virginia both host Rocket Lab Electron launches. Why these two sites? 
The answer is in the geometry — which orbits each site can efficiently reach. This week brings together orbital mechanics and spaceport geography into one practical question: given an orbit you want, which pad? Connecting to Hawaiʻi — Polynesian Voyaging Society and Mahia: Mahia Peninsula in Aotearoa New Zealand is home to Rocket Lab's Launch Complex 1 — the only private orbital spaceport in the Southern Hemisphere. Mahia is at 39°S, which gives Electron access to a wide range of inclinations including sun-synchronous polar orbits popular for Earth observation. Pacific voyagers have known Mahia's coordinates for centuries (it's an important navigation landmark on the journey between Aotearoa and Rarotonga). The Polynesian Voyaging Society has cultural ties throughout the region, and Hōkūleʻa has visited Aotearoa. The Pacific is one ocean; the launch network sits inside it. Hint: Capstone 2 builds a tool that takes any TLE and tells you which countries the satellite overflies and for how long. Try it for Hawaiʻi: how long is the ISS over the State of Hawaiʻi each day? Lab: Ground-Track Coverage Tool (capstone start) — Given any TLE, output: (1) 24h ground track as GeoJSON, (2) 1000-km-swath coverage polygon, (3) country-overflight table with dwell time per country. This is the deliverable for Capstone 2. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-10/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-10/lab.ipynb Primer: Track 2 culminates here. With orbital mechanics (Week 7), SGP4 propagation (Week 8), and ground-station visibility (Week 9) in hand, you can answer the central matching question of operational space-domain awareness: given an orbital regime, which spaceports can serve it? And given a spaceport, which orbits can be efficiently reached?
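The country-overflight table in Capstone 2 boils down to point-in-polygon tests along a time-sampled ground track (in practice with geopandas and real country boundaries). A stdlib-only toy sketch of the dwell-time accounting, with a bounding box standing in for a country and made-up track samples:

```python
# Dwell-time accounting: sample the ground track at a fixed time step, test each
# sample against a "country" shape, and sum step * hits. Toy data throughout.
step_s = 10.0  # seconds between ground-track samples

def in_bbox(lon, lat, bbox):
    """Point-in-rectangle stand-in for a real point-in-polygon test."""
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

country = (-10.0, 35.0, 5.0, 45.0)  # hypothetical lon/lat bounding box

# hypothetical ground-track samples, one per time step: (lon, lat)
track = [(-12.0, 34.0), (-8.0, 36.0), (-4.0, 38.0),
         (0.0, 40.0), (4.0, 42.0), (8.0, 44.0)]

dwell_s = sum(step_s for lon, lat in track if in_bbox(lon, lat, country))
print(f"dwell over toy country: {dwell_s:.0f} s")  # 4 samples inside -> 40 s
```

The real tool swaps the bounding box for country polygons and the six tuples for an SGP4-propagated track, but the accumulation logic is the same.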
The geometric constraint

The fundamental constraint is simple: a rocket launched due east (the most efficient azimuth, gaining maximum benefit from Earth's rotation) ends up in an orbit with inclination equal to the launch site's latitude. To reach higher inclinations, you launch progressively more northward or southward, sacrificing the eastward velocity bonus. Some rules of thumb that fall out of the math:
- Minimum achievable inclination from a launch site is the site's latitude. Kourou (5.2°N) can reach equatorial orbits efficiently; Plesetsk (62.9°N) cannot reach any orbit with inclination below 62.9° without a costly plane change.
- Sun-synchronous orbits (i ≈ 98°) require launches to the south. Vandenberg (34.7°N) is ideal because trajectories head south over the open Pacific. Cape Canaveral (28.5°N) cannot do SSO safely because southward trajectories would overfly populated Florida and the Caribbean.
- GEO insertions favor equatorial sites. A GEO target requires zero inclination; reaching it from Kourou costs ~250 m/s less delta-V than from Cape Canaveral (28°). Over a 15-year satellite life, that's roughly 200 kg of saved fuel — substantial.
- Molniya orbits (i = 63.4°) match high-latitude launch sites. Russia's Plesetsk and Vostochny are at exactly the right latitude to insert directly.

Spaceport-to-orbit table

A reference matrix for the world's active spaceports:

| Spaceport | Latitude | Best for |
|---|---|---|
| Kourou | 5.2°N | GEO, equatorial |
| Sriharikota | 13.7°N | GEO, mid-inclination |
| Wenchang | 19.6°N | GEO, lunar (Long March 5) |
| Cape Canaveral / Kennedy | 28.5°N | LEO, GTO, ISS (with dogleg) |
| Tanegashima | 30.4°N | GEO, SSO |
| Vandenberg | 34.7°N | Polar, SSO |
| Wallops | 37.9°N | Mid-inclination LEO |
| Baikonur | 46.0°N | ISS (51.6°), Soyuz LEO |
| Plesetsk | 62.9°N | Molniya, polar |

Coverage polygons

For Earth-observation satellites, the more practical question is the swath: the strip of Earth's surface within the sensor's field of view at any moment. For a sensor with swath width w, the coverage polygon is the ground track buffered by w/2.
For Landsat 9 (185 km swath), buffer the ground track by 92.5 km on each side. For a hypothetical 1000-km-swath sensor (e.g. SAR), buffer by 500 km. Coverage is asymmetric in time: the ascending pass and descending pass cover different ground, and a single satellite revisits the same swath only every ~16 days for Landsat or ~5 days for Sentinel-2 (which has two satellites). The capstone The Week 10 lab is the start of Capstone 2: Ground-Track Coverage Tool — a Python tool that, given any TLE, outputs the 24-hour ground track as GeoJSON, a 1000-km-swath coverage polygon, and a country-overflight table with dwell time per country. The full rubric is on the capstone page; finishing it earns the Certified Orbital Analyst credential. Track 3 (Remote Sensing Specialist) starts next week, where the focus shifts from where the satellite is to what it sees. Reflection question (closing): When you compute country-overflight dwell time, who has the most overflight from any satellite? Who has the least? What does that map of inequality tell you? Quiz: Q1. Why is Kourou ideal for GEO launches? A. It's coldest B. Equatorial latitude maximizes velocity bonus from Earth's rotation * C. It's the cheapest D. It has the best weather Q2. A spaceport at 51.6° latitude can launch directly into: A. GEO B. Equatorial orbits C. Polar orbits D. Inclinations of 51.6° and higher * Q3. Sun-synchronous orbits are typically: A. Equatorial B. Highly inclined polar (~98°) * C. GEO D. Molniya Q4. Vandenberg's polar advantage is: A. Cold air B. Trajectories head south over open Pacific * C. Cheap fuel D. Closer to Hawaii Q5. Minimum achievable inclination from a spaceport equals: A. The launch site's latitude * B. 180 minus latitude C. Always 90 D. Depends on rocket only --- ### Week 11: EM spectrum, sensor types, and radiometry Track: Remote Sensing Specialist URL: https://launchdetect.com/academy/week/11/ Summary: Remote sensing is physics first, GIS second.
This week: the EM spectrum, sensor categories, and the radiometric quantities (radiance, brightness temperature) that every satellite product is built on. Objectives: - Map the electromagnetic spectrum from gamma rays to radio - Distinguish passive (optical, IR) from active (radar, lidar) sensors - Define radiance, irradiance, and reflectance - Explain why thermal IR sees rocket plumes but optical doesn't (always) Opening question (place-based hook): Look at a coral reef from above and a coral reef under thermal imaging. What information does each show that the other can't? Different parts of the electromagnetic spectrum let you see different things. Visible light shows surface color. Thermal IR shows temperature. SAR sees through clouds. This week you'll learn what each band tells you — and why it matters for protecting the places you care about. Connecting to Hawaiʻi — The EM spectrum and reef health: When the Hawaiian Islands had massive coral bleaching events in 2014–2015 and again in 2019, NOAA scientists tracked them using thermal infrared satellite imagery — the same band region (thermal IR around 11 µm) that's used for sea-surface temperature monitoring. Visible-light satellite imagery couldn't have caught the bleaching directly; reefs look blue-green either way. But thermal IR showed the temperature anomalies that bleach corals weeks before visible damage appeared. Different bands = different questions answered. Hint: NOAA Coral Reef Watch publishes near-real-time bleaching alerts at coralreefwatch.noaa.gov. They are built on the exact thermal IR data products you'll work with in Weeks 13–15. Lab: Plot the EM spectrum and band assignments — Build a Python plot of the EM spectrum from 0.4 µm (blue) to 13 µm (long-wave IR), annotated with the GOES-R ABI bands, Landsat 9 bands, and Sentinel-2 bands. 
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-11/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-11/lab.ipynb Primer: Remote sensing is the science of measuring something from a distance. In space remote sensing, the "distance" is hundreds to tens of thousands of kilometers, and the "measurement" is almost always electromagnetic radiation that reaches a satellite's sensor. This week is the physics primer: what the EM spectrum is, what each region tells you, and why thermal infrared is the key to launch detection. The electromagnetic spectrum The EM spectrum spans an enormous range of wavelengths. Earth observation uses a sliver of it: Visible (0.4–0.7 µm) — what your eyes see. Sentinel-2 bands 2 (blue, 0.49 µm), 3 (green, 0.56 µm), 4 (red, 0.66 µm). Near-infrared (NIR, 0.7–1.4 µm) — invisible to humans but reflected strongly by healthy vegetation. The basis of NDVI. Short-wave infrared (SWIR, 1.4–3 µm) — sensitive to water content, mineral composition, and active fires. Mid-wave infrared (MWIR, 3–8 µm) — thermal emissive. Surfaces emit measurable radiation in this band based on their temperature. GOES Band 7 (3.9 µm) is here. This is the band LaunchDetect uses for plume detection. Long-wave infrared (LWIR, 8–14 µm) — also thermal emissive, but for cooler objects. GOES Bands 13–15 (10.3, 11.2, 12.3 µm) and Landsat 9 TIRS bands. Microwave (1 mm–1 m) — penetrates clouds and (somewhat) ground. Used by passive radiometers and active radar (SAR). Passive vs active sensors Passive sensors measure radiation that already exists — sunlight reflected (optical, NIR, SWIR) or thermal radiation emitted by Earth's surface (MWIR, LWIR). They don't send anything; they just listen. Most weather satellites are passive. Active sensors emit radiation and measure what bounces back. SAR (Sentinel-1, RadarSat) emits microwaves; LiDAR emits laser pulses; altimeters emit short radar pulses. 
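The thermal-emissive regions above behave according to the Planck law, and you can already quantify why they matter for hot targets. A sketch comparing idealized blackbody spectral radiance of a ~1,500 K plume against a ~290 K surface at two of the wavelengths listed (pure blackbodies — real scenes add emissivity and atmospheric effects; `planck` is an illustrative helper):

```python
# Blackbody spectral radiance at two temperatures and two wavelengths:
# the contrast between a hot plume and ambient ground is far larger at 3.9 µm.
import math

H = 6.626e-34   # Planck constant, J·s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_um, temp_k):
    """Blackbody spectral radiance, W/m^2/sr/m."""
    lam = wavelength_um * 1e-6
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * temp_k)) - 1)

for lam in (3.9, 11.2):
    ratio = planck(lam, 1500.0) / planck(lam, 290.0)
    print(f"{lam} µm: plume/background radiance ratio ≈ {ratio:,.0f}")
```

The ratio at 3.9 µm comes out orders of magnitude larger than at 11.2 µm, which is the quantitative version of the "highest thermal contrast" argument developed below.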
Active sensors see in the dark and through clouds; passive optical doesn't.

Radiometric quantities

Three quantities every remote sensing pipeline computes:
- Radiance (L) — the raw measurement: W/m²/sr/µm. How much electromagnetic energy is hitting the sensor per unit area, per unit solid angle, per unit wavelength.
- Reflectance (ρ) — for solar bands (visible, NIR, SWIR): the fraction of incoming sunlight reflected by the surface. Dimensionless, 0–1. Computed by dividing radiance by the solar irradiance at that wavelength.
- Brightness temperature (Tb) — for thermal bands: the temperature of a perfect black body that would emit the same radiance. Computed via the inverse Planck function. Kelvin units.

Why thermal IR sees plumes

A rocket plume is hot — combustion gases at 1,500–3,000 K. Earth's surface at "ambient" is ~290 K. The Planck curve says: the hotter an object, the more radiation it emits, and the peak emission shifts to shorter wavelengths. At 3,000 K, the peak emission is around 1 µm (in the NIR). At 290 K, the peak is around 10 µm (LWIR). So why does GOES Band 7 (3.9 µm) work better than Band 14 (11.2 µm) for plume detection? Because at 3.9 µm, a 1,500 K blackbody emits tens of thousands of times more radiance than a 290 K background; at 11 µm, the ratio is far smaller. Band 7 has the highest thermal contrast for hot objects, which is why it's the band of choice for fire and plume detection.

The lab

You'll build a Matplotlib plot of the EM spectrum from 0.4 µm to 13 µm, annotated with: the human-visible range, the GOES-R ABI 16-band layout, Landsat 9's 11 bands, Sentinel-2's 13 bands, and the Planck curves for a 290 K (Earth) and 1,500 K (plume) emitter. The resulting plot is a one-image reference you'll return to throughout Track 3.

Reflection question (closing): If you had access to satellite imagery of your favorite reef in real time — visible AND thermal AND SAR — what would you do with it? Whose decisions could it inform? Quiz: Q1.
Optical sensors operate in what wavelength range? A. 0.4–0.7 µm (visible) * B. 1–10 cm C. 10–100 nm D. 1 mm–1 m Q2. GOES-R ABI Band 7 wavelength is: A. 0.47 µm B. 3.9 µm * C. 10.3 µm D. 13.3 µm Q3. SAR is: A. Synthetic Aperture Radar (active) * B. Solar Active Reading (passive) C. Sun Angle Range (geometry) D. Satellite Atomic Resonance Q4. Brightness temperature is computed from: A. Reflectance B. Radiance via the inverse Planck function * C. GPS position D. Time of day Q5. Why doesn't optical always see plumes? A. Plumes don't reflect visible light strongly compared to surroundings; thermal IR sees their heat emission * B. Optical sensors are broken C. Plumes are invisible D. Plumes are too small --- ### Week 12: Landsat / Sentinel-2: bands, NDVI, false color Track: Remote Sensing Specialist URL: https://launchdetect.com/academy/week/12/ Summary: Optical Earth observation 101 with the two most-used civilian sensors: NASA's Landsat (50 years of continuous coverage) and ESA's Sentinel-2 (10-meter, 5-day repeat). Objectives: - Identify the major bands of Landsat 9 and Sentinel-2 - Compute NDVI from NIR and red bands - Build a false-color composite for vegetation - Use rasterio + numpy to load and operate on multi-band rasters Opening question (place-based hook): If you wanted to know how much vegetation is on Kauaʻi's south shore right now, do you ask someone — or do you ask a satellite? Both are valid. The satellite (Sentinel-2 or Landsat) computes NDVI from optical bands and tells you the answer pixel-by-pixel in 10 m squares. This week you'll learn how. Connecting to Hawaiʻi — Sentinel-2 over the Hawaiian Islands: Sentinel-2 (ESA, two satellites in orbit) passes over Hawaiʻi every 5 days at 10:30 AM local time. Each pass produces a 290 km swath of 10-meter-resolution imagery, free and public via AWS Open Data. The Hawaiʻi Statewide GIS Program uses Sentinel-2 data for vegetation-cover mapping, invasive-species detection, and post-storm assessment. 
The NDVI you compute this week is the same metric they use. Hint: Try the Sentinel Hub EO Browser (free): search 'Hawaiʻi' and pick a recent cloud-free scene. NDVI is one of the visualizations you can toggle on directly in the browser. Lab: NDVI map of a launch site — Download a Sentinel-2 scene over Cape Canaveral. Compute NDVI. Mask cloud pixels. Output a styled PNG showing vegetation cover around the launch facilities. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-12/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-12/lab.ipynb Primer: Landsat and Sentinel-2 are the two great open civilian Earth-observation programs. Landsat (NASA + USGS) has flown continuously since 1972 — 50+ years of consistent imagery — and Sentinel-2 (ESA) provides 10-meter, 5-day-revisit coverage of the entire globe. Together they are the workhorses of every academic, NGO, and commercial Earth observation workflow. Landsat 9: the heritage program Landsat 9 launched in 2021 and is the operational US Earth-observation flagship. Key specs: Orbit: sun-synchronous, 705 km, ~98.2° inclination, 10:00 AM mean local time descending node. Revisit: 16 days (the same orbit returns over a given point every 16 days). With Landsat 8 also in orbit, the combined revisit is 8 days. Sensors: OLI-2 (Operational Land Imager 2) and TIRS-2 (Thermal Infrared Sensor 2). Bands: 11 spectral bands ranging from coastal aerosol (0.443 µm) to thermal IR (12 µm). Resolution: 30 m for most bands, 15 m for the panchromatic band, 100 m for thermal. Sentinel-2: the European twin-pair Sentinel-2A (launched 2015) and Sentinel-2B (launched 2017) fly the same orbit on opposite sides of the planet, giving a 5-day revisit at the equator and 2–3 days at mid-latitudes. Specs: Orbit: sun-synchronous, 786 km, ~98.6° inclination. Sensor: MSI (Multi-Spectral Instrument). Bands: 13 spectral bands.
Resolution: 10 m for the visible + NIR bands (Bands 2, 3, 4, 8), 20 m for red-edge and SWIR bands, 60 m for atmospheric correction bands.

NDVI: the most-used vegetation index

NDVI (Normalized Difference Vegetation Index) is:

NDVI = (NIR - Red) / (NIR + Red)

Healthy vegetation reflects NIR strongly (the leaf's spongy mesophyll tissue is highly reflective at ~0.85 µm) but absorbs visible red (which is what chlorophyll uses for photosynthesis). The difference between NIR and Red, normalized by their sum, is high for vegetation (0.4–0.9), low for bare soil (0.1–0.2), and near zero or negative for water and built surfaces.

import rasterio
import numpy as np

with rasterio.open('s2_b08_nir.tif') as src:
    nir = src.read(1).astype(float)
with rasterio.open('s2_b04_red.tif') as src:
    red = src.read(1).astype(float)

ndvi = (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero

False-color composites

A natural-color image puts R=Red, G=Green, B=Blue — what your eye would see. A "false-color" composite swaps in NIR. The classic false-color infrared (R=NIR, G=Red, B=Green) renders healthy vegetation as bright red, water as black, and bare soil as gray-blue. It's the fastest way to scan a scene for vegetation patterns.

Where to get the data

Both Landsat and Sentinel-2 are freely available from multiple cloud-native catalogs:
- AWS Open Data — Landsat: s3://usgs-landsat/collection02/; Sentinel-2: s3://sentinel-s2-l2a/ (free data; some buckets are requester-pays). Both serve Cloud-Optimized GeoTIFF (COG) format, so you can range-request just the bands and tiles you need without downloading entire scenes.
- Microsoft Planetary Computer — a STAC catalog with both datasets, free access with a free account.
- Google Earth Engine — both datasets indexed, but tied to the Earth Engine compute platform.

The lab

You'll download a recent Sentinel-2 L2A scene over Cape Canaveral (Florida) — 5-day revisit means you can almost always find a cloud-free one within the past month. Compute NDVI.
Mask out clouds using the L2A cloud probability layer (it ships with the scene). Output a styled PNG showing vegetation cover around the launch facilities, with the Kennedy and Cape Canaveral pads annotated. This is the same workflow used for environmental impact monitoring around launch sites — a topic that gets significant attention as Starbase, in particular, expands operations into ecologically sensitive Texas Gulf Coast wetlands. Reflection question (closing): Sentinel-2 imagery is free for anyone. That includes you. What's one question about Hawaiian ecosystems you could answer with it — that maybe nobody else has answered yet? Quiz: Q1. NDVI is: A. (NIR - Red) / (NIR + Red) * B. (Red - NIR) / (Red + NIR) C. NIR / Red D. Red - NIR Q2. Sentinel-2 highest resolution is: A. 1 m B. 10 m * C. 30 m D. 100 m Q3. False color (R=NIR, G=Red, B=Green) shows healthy vegetation as: A. Green B. Red * C. Blue D. Yellow Q4. Landsat 9 has how many spectral bands? A. 3 B. 7 C. 11 * D. 16 Q5. rasterio is: A. A C library B. A Python interface to GDAL for raster IO * C. A QGIS plugin D. Database engine --- ### Week 13: GOES-R ABI: full-disk, CONUS, mesoscale Track: Remote Sensing Specialist URL: https://launchdetect.com/academy/week/13/ Summary: GOES-R (GOES-16/17/18/19) revolutionized geostationary weather imagery. 16 bands, refresh as fast as 30 seconds (mesoscale), and the data is free. This is the sensor LaunchDetect uses in production. Objectives: - List the 16 ABI bands and what each is used for - Distinguish full-disk, CONUS, and mesoscale scanning modes - Read a GOES-R ABI NetCDF file with xarray + satpy - Explain why GOES is fixed at GEO and what that means for resolution Opening question (place-based hook): GOES-18 (the GOES-West satellite) is in geostationary orbit at 137.2°W. That is the satellite watching Hawaiʻi right now. Have you ever seen its imagery? GOES-18 looks down at Hawaiʻi 24 hours a day. Its imagery refreshes every 30 seconds during severe weather. 
The same satellite watches Pacific hurricanes — and would see a Kīlauea eruption plume, or a rocket launch from PMRF. Connecting to Hawaiʻi — GOES-West watches Hawaiʻi: Of the three geostationary satellites in the GOES-R series, GOES-18 (West, 137.2°W) is the one stationed directly over the eastern Pacific to watch Hawaiʻi, the US West Coast, and Pacific weather systems. NOAA's Central Pacific Hurricane Center in Honolulu uses GOES-18 imagery every hour to track tropical systems that could become Hawaiian hurricanes. The Hawaiian Volcano Observatory has used GOES-18 thermal infrared imagery to monitor Kīlauea eruptions. This is YOUR satellite. The lab this week opens a real GOES-18 file. Hint: RealEarth at NOAA STAR shows live GOES-18 imagery. Bookmark it. Watch a Pacific storm spiral in real time. Lab: Open a GOES-18 ABI Band 7 mesoscale scene — Download a real GOES-18 Band 7 mesoscale NetCDF from the NOAA public AWS bucket. Open it with satpy. Plot it. Identify the geographic coverage area. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-13/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-13/lab.ipynb Primer: The GOES-R series (GOES-16, 17, 18, 19) is NOAA's flagship geostationary weather constellation. From 35,786 km altitude over the equator, each satellite stares continuously at one hemisphere of Earth, producing the highest-cadence operational Earth imagery available to the public — including the 30-second mesoscale refresh that LaunchDetect uses for real-time launch detection.

The GOES-R fleet

| Satellite | Position | Role |
|---|---|---|
| GOES-16 | drifted to 105°W (stowed) | Backup |
| GOES-18 | 137.2°W | GOES-West operational |
| GOES-19 | 75.2°W | GOES-East operational (since 2025) |

The Americas + Atlantic are covered by GOES-19 (East); the Pacific + western US + Hawaiʻi by GOES-18 (West). Combined, they cover from longitude ~160°E to ~10°W.
Outside that range, the geometry is too oblique to be useful — that's where JMA Himawari-9 takes over (East Asia and Western Pacific).

The ABI sensor

The Advanced Baseline Imager (ABI) is the primary instrument on every GOES-R satellite. It has 16 spectral bands:
- Bands 1–2 (visible, 0.47 and 0.64 µm) — clouds, daytime imagery. Band 2 is the "red" channel used in true-color composites.
- Band 3 (veggie, 0.86 µm) — vegetation and aerosol.
- Bands 4–6 (cirrus + SWIR, 1.37–2.24 µm) — thin cirrus, cloud-particle size, snow/ice discrimination.
- Band 7 (shortwave window, 3.9 µm) — the workhorse for thermal hotspot detection. Sensitive to fires, plumes, gas flares, hot industrial sources. The band LaunchDetect uses.
- Bands 8–10 (water vapor, 6.2, 6.9, 7.3 µm) — upper, mid, lower troposphere water vapor.
- Bands 11–13 (longwave window, 8.4, 9.6, 10.3 µm) — cloud-top phase, ozone, "clean" thermal IR.
- Bands 14–15 (longwave window, 11.2, 12.3 µm) — "dirty" thermal IR pair, used for split-window SST retrieval.
- Band 16 (CO₂ longwave, 13.3 µm) — cloud-top temperature, CO₂ absorption.

Three scanning modes

ABI scans Earth in three nested modes simultaneously:
- Full Disk — the entire Earth hemisphere, every 10 minutes (in Mode 6, current standard). Resolution: 2 km for Band 7 (and most IR bands); 0.5 km for Band 2.
- CONUS — a fixed sector covering the contiguous United States, every 5 minutes.
- Mesoscale 1 & 2 — two operator-controllable 1,000 × 1,000 km windows, every 30 seconds.

Each mesoscale window is moved by NOAA operators to focus on active weather events — hurricanes, severe convection, and (when operators grant a request) active launches. The 30-second mesoscale cadence is what makes real-time launch detection possible. A Falcon 9 ascent from Vandenberg passes through several mesoscale frames during boost phase; LaunchDetect captures the plume in each.
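Those scenes land in NOAA's public bucket keyed by time, using the {year}/{day-of-year}/{hour} layout described in the course. A sketch of building the bucket prefix from a UTC timestamp (`radm_prefix` is an illustrative helper name; bucket and product strings are from the course text):

```python
# Build the S3 prefix for a GOES-18 ABI mesoscale radiance scene from a UTC
# timestamp — %j gives the zero-padded day-of-year the bucket layout uses.
from datetime import datetime, timezone

def radm_prefix(t: datetime, bucket="noaa-goes18", product="ABI-L1b-RadM"):
    return f"{bucket}/{product}/{t:%Y}/{t:%j}/{t:%H}/"

t = datetime(2026, 5, 11, 18, 30, tzinfo=timezone.utc)
print(radm_prefix(t))  # noaa-goes18/ABI-L1b-RadM/2026/131/18/
```

Listing that prefix with any S3 client then reveals the individual per-band NetCDF files for the hour.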
The data

Every ABI scene is published as a NetCDF file to NOAA's public AWS Open Data bucket within seconds of generation: s3://noaa-goes18/ABI-L1b-RadM/{year}/{day-of-year}/{hour}/ For Band 7, mesoscale 1, the filename pattern is OR_ABI-L1b-RadM1-M6C07_G18_*.nc. The file is ~3 MB per scene. Open with xarray, netCDF4, or — most conveniently — the satpy library, which handles georeferencing automatically.

The lab

You'll download a real GOES-18 Band 7 mesoscale scene over the Pacific, open it with satpy, plot it on a map with proper geographic coordinates (handled via Scene.resample()), and identify the geographic coverage area. By the end you'll understand the data shape, units, and georeferencing — preparation for Week 14, where the same data is used for actual plume detection. Reflection question (closing): What does it mean that 'a satellite watches Hawaiʻi'? Is it watching FOR us — for hurricane warning, for fire detection — or watching us? When is the difference important? Quiz: Q1. GOES-R ABI has how many bands? A. 8 B. 12 C. 16 * D. 20 Q2. Mesoscale mode covers approximately: A. Full Earth disk B. Continental US C. 1000 km x 1000 km * D. Polar regions Q3. GOES-19 is the: A. GOES-East operational satellite (75.2W) * B. GOES-West operational satellite (137.2W) C. Test satellite only D. Retired Q4. Mesoscale refresh interval is: A. 30 seconds * B. 5 minutes C. 15 minutes D. 1 hour Q5. GOES is in: A. LEO B. MEO C. GEO * D. Polar --- ### Week 14: Thermal IR Band 7: brightness temperature and hotspots Track: Remote Sensing Specialist URL: https://launchdetect.com/academy/week/14/ Summary: The heart of LaunchDetect's methodology. Band 7 at 3.9 µm sees thermal emission strongly — rocket plumes show up as ~340 K hotspots against a background of ~290 K. This week you build a working hotspot detector.
Objectives:
- Convert raw ABI Band 7 radiance to brightness temperature in Kelvin
- Identify a rocket plume's thermal signature in Band 7
- Set a brightness-temperature threshold for hotspot detection
- Distinguish a real plume from a wildfire from a hot industrial source

Opening question (place-based hook): When Kīlauea erupted in 2018 and lava flowed through Leilani Estates, scientists watched it from space. Which satellite, and how? The answer is GOES-West Band 7 (mid-wave infrared) and several other thermal sensors. The same band region that LaunchDetect uses to spot rocket plumes also tracks lava flows. This week you'll learn the math — the same Planck inversion the Hawaiian Volcano Observatory uses. Connecting to Hawaiʻi — Band 7, lava flows, and rocket plumes: GOES Band 7 at 3.9 µm is sensitive to anything hot. When Kīlauea's lower East Rift Zone opened in May 2018, the Hawaiian Volcano Observatory used GOES-West thermal imagery (along with MODIS, VIIRS, and ASTER) to track the lava flow in near-real time. Lava at 1,000+ K, rocket plumes at 1,500–3,000 K, wildfire fronts at 500–800 K — Band 7 catches them all. The brightness-temperature math you write this week is the exact same math HVO uses. The threshold is different (HVO cares about >800 K for active lava), the source is different — but the physics is one. Hint: USGS Hawaiian Volcano Observatory publishes live thermal-anomaly maps at volcanoes.usgs.gov/observatories/hvo/. Their pipeline is what you're learning to build. Lab: Detect a real launch plume in GOES-18 Band 7 — Download GOES-18 Band 7 frames spanning a known SpaceX launch from Vandenberg. Convert to brightness temperature. Threshold at >320 K. Output detected hotspot pixels with timestamps and lat/lon.
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-14/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-14/lab.ipynb Primer: This is the heart of LaunchDetect's methodology, taught from the ground up. By the end of this week you'll have a working thermal hotspot detector that operates on real NOAA GOES-18 NetCDF files — the same input, same physics, and same threshold logic that powers the production pipeline at launchdetect.com.

Band 7 and the Planck function

GOES-R ABI Band 7 is centered at 3.9 µm. The level-1b (calibrated radiance) data product reports radiance in mW/m²/sr/cm⁻¹. To turn radiance into something physically meaningful, you invert the Planck function:

T_b = (h * c / (k * λ)) / log((2 * h * c² / (λ⁵ * L)) + 1)

where L is radiance, λ is the band-center wavelength, and h, c, k are Planck's constant, the speed of light, and Boltzmann's constant. The output is a temperature in Kelvin — the temperature a perfect black body would need to emit the observed radiance. This is the brightness temperature, Tb. For GOES Band 7, NOAA helpfully publishes the Planck coefficients in each NetCDF file:

import xarray as xr
import numpy as np

ds = xr.open_dataset('OR_ABI-L1b-RadM1-M6C07_G18_*.nc')  # substitute the actual timestamped filename
Rad = ds.Rad.values  # radiance, mW/m²/sr/cm⁻¹

# Planck coefficients shipped in the NetCDF
fk1 = ds.planck_fk1.values
fk2 = ds.planck_fk2.values
bc1 = ds.planck_bc1.values
bc2 = ds.planck_bc2.values

Tb = (fk2 / np.log(fk1 / Rad + 1) - bc1) / bc2  # Kelvin

Typical brightness temperatures
- Open ocean: ~290 K (16°C effective) — Band 7 is sensitive to skin temperature, which differs from bulk SST.
- Land surfaces: 280–320 K depending on time of day, season, and surface type.
- Clouds: highly variable, often 220–270 K (cold) or 280–290 K (low warm clouds).
- Wildfires: 320–500 K in actively burning pixels.
- Rocket plumes: typically 340–400 K at GOES Band 7 spatial resolution.
The actual plume temperature is 1,500–3,000 K, but the plume only fills a small fraction of the 2 km × 2 km pixel, so the area-weighted brightness temperature is much lower.

The threshold approach

The simplest detector: threshold the brightness temperature. If Tb > threshold, flag the pixel as a hotspot. Sensible thresholds:
- 320 K: aggressive — catches all plumes but also many false positives (wildfires, gas flares, industrial sources).
- 340 K: balanced — good plume recall, fewer false positives.
- 360 K: conservative — very few false positives, but misses smaller plumes (small launchers like Electron).

LaunchDetect's production threshold is set dynamically and combined with spatial coincidence with a known spaceport geofence + temporal pattern matching (a real plume grows then shrinks over 1–3 minutes; a wildfire stays hot for hours). That logic is Track 4 and Track 5.

Common false positives

The single biggest source of false positives is wildfires: both a launch plume and a fire front produce hotspots in Band 7. The discriminators:
- Spatial coincidence — a thermal hotspot inside a known spaceport's geofence is almost certainly a launch; outside, it's almost certainly a fire.
- Temporal pattern — a launch plume appears for 1–3 minutes then disappears; a wildfire persists for hours.
- FIRMS overlap — NASA's FIRMS (Fire Information for Resource Management System) publishes confirmed fire hotspots in near-real time. A Band 7 hotspot that overlaps a FIRMS detection is almost certainly a fire.

Other false positives: industrial gas flares (Iraqi/Saudi oil fields), reflective sun glint over water, and rarely volcanic eruptions.

The lab

You'll download several GOES-18 Band 7 frames spanning a known SpaceX Falcon 9 launch from Vandenberg, convert radiance to brightness temperature, threshold at 320 K, and output the detected hotspot pixels with timestamps and lat/lon (via Week 15's georeferencing).
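The core thresholding step can be sketched in a few lines of numpy — a toy brightness-temperature grid here, where the real lab feeds in a full mesoscale scene:

```python
# Simplest Band 7 hotspot detector: threshold a brightness-temperature array
# and list the flagged pixels. Toy 3x3 Tb grid in Kelvin, not real GOES data.
import numpy as np

tb = np.array([[289.0, 291.0, 290.0],
               [290.0, 355.0, 341.0],
               [288.0, 290.0, 292.0]])

THRESH_K = 340.0  # the "balanced" threshold discussed above
rows, cols = np.where(tb > THRESH_K)
hotspots = list(zip(rows.tolist(), cols.tolist(), tb[rows, cols].tolist()))
print(hotspots)  # [(1, 1, 355.0), (1, 2, 341.0)]
```

In a real scene the (row, col) indices then go through the Week 15 fixed-grid projection to become lat/lon.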
The lab produces the same primary detection that drives a real launchdetect.com entry — without (yet) the parallax correction, clustering, or scoring layers. Reflection question (closing): The same technology that helps HVO warn communities about lava can be used to watch any hot thing on Earth. Who decides what gets watched? Whose data is it? Quiz: Q1. Band 7 wavelength is: A. 3.9 µm (mid-wave IR) * B. 10.3 µm (long-wave IR) C. 0.64 µm (red) D. 1.38 µm (water vapor) Q2. A rocket plume in Band 7 appears as: A. A cold spot B. A hotspot * C. Invisible D. Striped Q3. Typical Earth surface brightness temperature is: A. ~150 K B. ~290 K * C. ~400 K D. ~1000 K Q4. Why use brightness temperature, not raw radiance? A. It's intuitive (Kelvin) and threshold-comparable across scenes * B. It looks better C. It's required by law D. Radiance can't be measured Q5. Common false positives in Band 7 hotspot detection? A. Wildfires, gas flares, industrial sources, sun glint * B. Only clouds C. Only ocean D. Only night --- ### Week 15: Georeferencing GOES and parallax (Capstone 3 week) Track: Remote Sensing Specialist URL: https://launchdetect.com/academy/week/15/ Summary: Track 3 culminates here: a GOES hotspot at (x, y) in the fixed grid is NOT directly at the (lat, lon) directly below — parallax matters, especially for tall plumes. This week is the math, and the capstone is a working thermal plume detector. Objectives: - Convert ABI fixed-grid coordinates to lat/lon - Apply parallax correction for high-altitude plumes - Handle the GOES 'view from the equator' geometry - Account for limb effects at the edge of the disk Opening question (place-based hook): When GOES-18 sees a thermal hotspot at pixel (1023, 645), where is that hotspot really, on the ground? GOES gives you pixel coordinates, not lat/lon. Converting between them is geometry — and getting it wrong means your launch detection (or your lava flow alert) is off by tens of kilometers. This week is the math. 
Connecting to Hawaiʻi — Parallax over the Pacific: GOES-18 is at 137.2°W on the equator. Hawaiʻi is at ~20°N, about 2,500 km from GOES-18's sub-satellite point. That distance means the satellite looks at Hawaiʻi at an angle — and any tall thermal source (rocket plume, eruption column, volcanic ash plume) appears DISPLACED from where it actually is. A 30 km Kīlauea ash plume could appear ~3 km offset to the north because GOES is south of Hawaiʻi. The parallax-correction math you learn this week is exactly what HVO and NOAA apply to their thermal imagery to get the geolocation right. Hint: The Capstone 3 deliverable — a working plume detector — is the same architecture used for volcanic-ash detection at airports across the Pacific. Lab: Thermal Plume Detector (capstone start) — Given a GOES Band 7 NetCDF and a known launch event, output: detected plume records with (timestamp, lat, lon, brightness_temp_K, area_km²), Folium heatmap visualization. This is the deliverable for Capstone 3. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-15/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-15/lab.ipynb Primer: Track 3 closes here. By Week 14 you can threshold-detect a hotspot in Band 7 brightness temperature. But the (x, y) pixel coordinate is not the (lat, lon) of the heat source on the ground — not directly. To get a usable geocoded detection, the pixel must be projected through the GOES fixed-grid math, and parallax must be corrected for tall plumes. That math is this week, and the capstone is a working end-to-end plume detector. The GOES fixed grid GOES doesn't deliver imagery in lat/lon. It delivers imagery in a scan-angle coordinate system tied to the satellite's frame of reference. Each pixel has an "x" value (east-west scan angle from satellite, in radians) and a "y" value (north-south elevation angle from satellite, in radians). 
The NetCDF file contains x and y 1-D coordinate arrays plus a goes_imager_projection variable with the geostationary projection metadata (perspective point altitude, satellite longitude, sweep angle axis). To get the (lat, lon) of a pixel, you project a line from the satellite through the pixel down to the Earth ellipsoid:

```python
import numpy as np

H = 35786023.0 + 6378137.0    # satellite altitude + Earth equatorial radius
lambda0 = np.radians(-137.2)  # satellite longitude (GOES-18 West)
r_eq, r_pol = 6378137.0, 6356752.31414

x, y = scan_x, scan_y         # scan angles in radians, from the NetCDF

a = np.sin(x)**2 + np.cos(x)**2 * (np.cos(y)**2 + (r_eq / r_pol)**2 * np.sin(y)**2)
b = -2 * H * np.cos(x) * np.cos(y)
c = H**2 - r_eq**2
r_s = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)

s_x = r_s * np.cos(x) * np.cos(y)
s_y = -r_s * np.sin(x)
s_z = r_s * np.cos(x) * np.sin(y)

lat = np.arctan2((r_eq / r_pol)**2 * s_z, np.sqrt((H - s_x)**2 + s_y**2))
lon = lambda0 - np.arctan2(s_y, H - s_x)
```

This is the algorithm in the GOES-R Series Product Definition and Users' Guide (note the minus sign on the longitude term — s_y is already negated). satpy wraps it cleanly.

The parallax problem

The math above assumes the pixel lies on Earth's surface. A rocket plume at 50 km altitude is well above the surface, and from GOES's geostationary perspective the apparent position of the plume is offset from the actual position of the rocket. This offset is the parallax.

For a plume directly below GOES (at the sub-satellite point), parallax is zero. Move 1,000 km north and the parallax of a 50 km plume reaches ~5 km. Move 4,000 km from the sub-satellite point and parallax is ~25 km — well above LaunchDetect's 5 km accuracy target.

Parallax correction: given the apparent (lat, lon) and an estimated plume altitude, compute the offset vector pointing back toward the sub-satellite point, scaled by altitude/(altitude + Earth radius). The corrected position is the apparent position shifted by that vector.

Limb effects

At the edge of GOES's visible disk (the "limb"), pixel geometry is highly oblique.
A pixel that's nominally 2 km × 2 km in the center of the disk becomes 6+ km × 2 km near the limb (foreshortening), and the radiometric path length through the atmosphere is much longer (more attenuation and scatter). Practical rule: discard detections more than ~70° away from sub-satellite point. The capstone Week 15's lab is the start of Capstone 3: Thermal Plume Detector. Given a GOES Band 7 NetCDF for a known launch event, output records with (timestamp, lat, lon, brightness_temp_K, area_km²) for each plume detection, with parallax correction applied, FIRMS-based false-positive filtering, and a Folium heatmap visualization. The full rubric is on the capstone page; finishing it earns the Certified Remote Sensing Specialist credential. Track 4 (Mission GIS Engineer) starts next week, moving the focus into web-based delivery and 3D globes. Reflection question (closing): Capstone 3 will let you detect a thermal anomaly on Earth from satellite data. That is a capability. What does it mean to wield it responsibly? Who do you tell first when you spot something? Quiz: Q1. GOES fixed grid uses what coordinates? A. Latitude/longitude B. Scan and elevation angle from satellite * C. UTM D. ECEF Q2. Parallax error increases with: A. Plume altitude and distance from sub-satellite point * B. Time of day C. Wavelength D. File size Q3. GOES-18 is fixed over: A. 75.2W B. 137.2W * C. 0 degrees D. 180 degrees Q4. satpy handles georeferencing how? A. Manually via numpy only B. Via Scene.resample() to projected grids * C. It doesn't D. Via QGIS plugin Q5. Limb effects at the disk edge cause: A. Brighter pixels B. Geometric and radiometric distortion (foreshortening, longer path) * C. Nothing D. File corruption --- ### Week 16: Web mapping: Leaflet vs MapLibre vs OpenLayers Track: Mission GIS Engineer URL: https://launchdetect.com/academy/week/16/ Summary: Moving GIS into the browser. Three battle-tested libraries, three different tradeoffs. 
This week you build the same map in all three. Objectives: - Compare Leaflet (lightweight), MapLibre GL JS (vector tiles), and OpenLayers (heavyweight) - Build a basic Leaflet map with markers and popups - Build a MapLibre map with a vector tile source and styled layers - Pick the right library for a use case Opening question (place-based hook): If you wanted to share a map of every public beach access on Oʻahu with everyone in your school, would you use Google Maps — or build your own? Web mapping libraries (Leaflet, MapLibre, OpenLayers) let you build the maps you want, without Google in the middle. This week you'll learn the three big ones and when each shines. Connecting to Hawaiʻi — Open-source mapping for community projects: Several Hawaiʻi-based community organizations have built their own public-facing maps using open-source libraries: the Surfrider Foundation Oʻahu chapter for beach water-quality data, Kuaʻāina Ulu ʻAuamo for traditional fishing-area mapping, and several public-access surfing-spot websites. Why open-source? Because you own the experience. Google can change its API pricing or terms tomorrow; Leaflet doesn't have an owner who can pull the rug out. Hint: Build a map of public ahupuaʻa boundaries on Oʻahu, or of every shave-ice place on your island. Same tools. Different scale of importance, same dignity. Lab: The same launch-site map in 3 libraries — Build the same global launch-site map (markers, popups, basemap) three times: once in Leaflet, once in MapLibre GL JS, once in OpenLayers. Compare file size, performance, and code complexity. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-16/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-16/lab.ipynb Primer: Web mapping is GIS delivered in the browser. 
Three battle-tested open-source libraries dominate the space — Leaflet, MapLibre GL JS, and OpenLayers — each with different strengths and its own sweet spot. This week you'll build the same map in all three and develop a feel for which one to reach for next time.

Leaflet

Leaflet (leafletjs.com) is the elder statesman — actively maintained since 2011, ~40 KB minified+gzipped, no dependencies. It renders raster tiles using DOM elements (or Canvas, via its optional renderer), which makes it lightweight, predictable, and accessible (interactive markers are real DOM nodes with proper ARIA support out of the box).

Leaflet is the right default when: you need a map quickly, you're working with raster tiles (OpenStreetMap, satellite imagery), you have a few hundred to a few thousand features, and you don't need to style the basemap dynamically. Most non-trivial public GIS sites built before ~2019 use Leaflet, and it's still the right choice for any "just plot some points on a map" task.

```javascript
const map = L.map('map').setView([28.6, -80.6], 5);
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '© OpenStreetMap'
}).addTo(map);
L.marker([28.6, -80.6]).addTo(map).bindPopup('Cape Canaveral');
```

MapLibre GL JS

MapLibre GL JS is a community-maintained fork of Mapbox GL JS, created in 2020 when Mapbox changed to a proprietary license. It renders vector tiles in WebGL, which means smooth zooming at any level (no pixelation), client-side styling (change colors and fonts without re-fetching tiles), and 60 fps rendering on modern hardware.

MapLibre is the right default when: you need vector tiles, you want to style the map client-side, you need smooth animations, or you're rendering more than ~10k features. It's heavier (~200 KB minified+gzipped) and the learning curve is steeper, but the ceiling is much higher.
```javascript
const map = new maplibregl.Map({
  container: 'map',
  style: 'https://demotiles.maplibre.org/style.json',
  center: [-80.6, 28.6],
  zoom: 5
});
map.on('load', () => {
  map.addSource('pads', { type: 'geojson', data: 'pads.geojson' });
  map.addLayer({
    id: 'pads',
    type: 'circle',
    source: 'pads',
    paint: { 'circle-radius': 6, 'circle-color': '#ff6b35' }
  });
});
```

OpenLayers

OpenLayers is the heavyweight — comprehensive support for every projection in the EPSG registry, full-featured editing tools, drag-and-drop layer composition, and unmatched depth on raster/vector mixing. It's much larger (~500 KB) and has a steeper learning curve, but for serious enterprise GIS in the browser — and especially anything involving non-standard projections — it's the right tool.

OpenLayers is the right default when: you need non-Web-Mercator projections (e.g. polar stereographic for Arctic shipping), you're building editing tools (digitizing, snapping, geometry validation), or you're porting a desktop GIS workflow to the browser and need every feature.

The vector-tile shift

The web-mapping world has shifted hard toward vector tiles in the past 5 years. Vector tiles are smaller, styleable on the client, sharper at high zoom, and they compose better with overlay data (you can style multiple data sources with the same color scheme client-side). The trade-off: vector tiles need a renderer (MapLibre) that's heavier than Leaflet's DOM-based approach, and serving vector tiles is more complex than serving pre-rendered PNGs.

The lab

You'll build the same map — 20 global launch sites as markers, OpenStreetMap basemap, popup with metadata — three times: once in Leaflet (raster tiles), once in MapLibre GL JS (vector tiles), once in OpenLayers (mixed). Compare: bundle size, time to first render, frame rate during pan/zoom, and lines of code. The output is informed taste, not just code.

Reflection question (closing): Whose maps do you trust, and why?
What does it mean to make a map that someone might trust as much as the one you're using right now? Quiz: Q1. Leaflet is best for: A. Lightweight raster-tile maps with minimal config * B. WebGL vector tile rendering C. 3D globes D. Server-side rendering Q2. MapLibre GL JS uses: A. Canvas raster tiles B. WebGL vector tiles * C. SVG only D. DOM elements Q3. OpenLayers' strength is: A. Simplicity B. Comprehensive projection and feature support, big API surface * C. Smallest bundle D. Mobile-first Q4. Vector tiles vs raster tiles primary benefit? A. Smaller files, styleable on client, better at high zoom * B. Always faster C. Higher resolution D. Required by spec Q5. MapLibre forked from: A. Mapbox GL JS (before the license change in 2020) * B. OpenLayers C. Leaflet D. ESRI --- ### Week 17: Vector tiles with tippecanoe and MBTiles Track: Mission GIS Engineer URL: https://launchdetect.com/academy/week/17/ Summary: Vector tiles changed web mapping. tippecanoe is the industry-standard generator. This week you take a GeoJSON of 10,000 satellites and produce a smooth, multi-zoom vector tile set. Objectives: - Generate vector tiles from GeoJSON with tippecanoe - Understand MBTiles file format and PMTiles cloud-optimized variant - Serve tiles via TiTiler or martin - Configure zoom-dependent tile generalization Opening question (place-based hook): If you wanted to show 10,000 satellites on a map and still scroll smoothly, how would you do it? The answer is vector tiles. Same underlying idea as the slippy-tile maps you scroll on your phone — but for vector data instead of pre-rendered pixels. This week you'll build the pipeline. Connecting to Hawaiʻi — Vector tiles and reef monitoring at scale: The Hawaiian Islands Humpback Whale Sanctuary publishes vessel-traffic maps that include thousands of AIS data points per day. The Pacific Disaster Center, headquartered on Maui, runs interactive damage-assessment maps that show tens of thousands of features. 
Both use vector-tile pipelines: pre-index the data into a pyramid of small tiles, serve only the tiles in view. The same architecture works whether you're plotting whales or satellites.

Hint: tippecanoe and PMTiles are open-source. Both run on a laptop. You can prototype an island-wide dataset today with zero infrastructure cost.

Lab: 10,000 satellites in vector tiles — Download the active CelesTrak TLE catalog. Compute current sub-satellite points. Export as GeoJSON. Generate vector tiles with tippecanoe across zoom levels 0–10. Serve via PMTiles and render with MapLibre.

Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-17/lab.ipynb
Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-17/lab.ipynb

Primer: The TLE catalog has 10,000+ active objects. Plotting them all as one giant GeoJSON in a browser brings any vector renderer to its knees — every feature must be loaded, parsed, indexed, and drawn. Vector tiles solve this elegantly: instead of one giant GeoJSON, you generate a pre-indexed pyramid of small tiles, each containing only the features visible at that zoom and location.

The vector tile pyramid

A vector tile is a Protocol Buffers (PBF) file containing a subset of features for a specific (z, x, y) tile address. The pyramid follows the standard slippy-tile convention: z=0 is the entire world in one tile, z=1 splits it into 4 tiles, z=2 into 16, and so on. At z=10, the world is 1,048,576 tiles. Each tile is small (typically 5–50 KB) and the renderer fetches only the tiles in the current viewport.

tippecanoe

tippecanoe (originally from Mapbox, now maintained at github.com/felt/tippecanoe under a BSD license) is the industry-standard generator: it takes a GeoJSON and produces an MBTiles or PMTiles archive. It's a C++ tool you install via Homebrew (brew install tippecanoe) or build from source.
```shell
tippecanoe \
  -o satellites.pmtiles \
  --maximum-zoom=10 \
  --minimum-zoom=0 \
  --layer=satellites \
  --drop-densest-as-needed \
  --extend-zooms-if-still-dropping \
  satellites.geojson
```

The key flags:

- --minimum-zoom / --maximum-zoom — what zoom range to generate. Generate fewer zooms when your features don't need them.
- --drop-densest-as-needed — at low zooms, drop the densest features to keep tile size manageable. Essential for large catalogs.
- --extend-zooms-if-still-dropping — if features are still being dropped at the max zoom, automatically extend the zoom range.
- --layer — the layer name inside the tiles. Use it for styling later.

MBTiles vs PMTiles

MBTiles is the original spec: a SQLite database file containing every tile in the pyramid. Easy to generate, easy to serve via a tile server (martin, TiTiler, mbtileserver), but it requires running infrastructure.

PMTiles is the modern improvement: a single file with an internal index that lets HTTP range requests fetch individual tiles directly. No tile server needed — just upload the PMTiles file to S3 (or any HTTP host) and configure the renderer to range-request from the URL. This is the modern default for static deployments.

```javascript
// MapLibre with PMTiles
const protocol = new pmtiles.Protocol();
maplibregl.addProtocol("pmtiles", protocol.tile);
map.addSource('satellites', {
  type: 'vector',
  url: 'pmtiles://https://my-bucket.s3.amazonaws.com/satellites.pmtiles'
});
```

Tile size optimization

Aim for tiles of 50 KB or smaller; if a tile is too large, MapLibre's frame rate drops. Strategies:

- Lower the max zoom — at z=10, each tile covers ~40 km on a side; you rarely need higher resolution for satellite data.
- Drop attributes — every feature can carry N properties; if you don't render them, keep only the fields you need with tippecanoe's --include filters.
- Simplify geometries — for lines and polygons, set --simplification to reduce vertex count.
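The pyramid addressing described earlier — which (x, y) tile at zoom z holds a given point — is a fixed formula in Web Mercator tiling, and it is worth being able to compute it by hand when debugging a tileset. A small sketch (function name is ours):

```python
import math

def lonlat_to_tile(lon, lat, z):
    """Map a WGS84 lon/lat to its (x, y) slippy-tile address at zoom z,
    using the standard Web Mercator tiling scheme (2**z tiles per axis)."""
    n = 2 ** z
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y

# Cape Canaveral at z=10 — one of the ~1 million z=10 tiles
print(lonlat_to_tile(-80.6, 28.6, 10))  # → (282, 427)
```

Fetching that one tile (instead of the whole catalog) is exactly what the renderer does for every tile in the viewport.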
The lab

You'll download the active CelesTrak satellite catalog (10,000+ TLEs), compute current sub-satellite points for all of them with skyfield, export as GeoJSON, generate vector tiles with tippecanoe across zoom levels 0–10, serve via PMTiles directly from the local file system, and render in MapLibre. The result is a smooth, pannable map of every satellite in orbit — at 60 fps even on a 5-year-old laptop. This is the architecture LaunchDetect uses for the /atlas/ page's spaceport layer (smaller catalog but same pipeline). PMTiles + S3 + CloudFront = no tile server to maintain.

Reflection question (closing): Vector tiles scale to millions of features without slowing down. What's a story about Hawaiʻi that needs that scale to tell? What's a story that doesn't?

Quiz: Q1. tippecanoe is by: A. Mapbox (BSD-licensed, now maintained by Felt) * B. ESRI C. Google D. NASA Q2. MBTiles is: A. A SQLite file containing tile data * B. A JSON manifest C. A binary protocol D. A streaming format Q3. PMTiles improves on MBTiles by: A. Being a single file servable directly from S3 via HTTP range requests * B. Higher resolution C. Vector-only D. More compression Q4. Tile zoom levels are typically: A. 0 (whole Earth) to ~22 (street level) * B. 1 to 10 C. 100 to 200 D. -5 to +5 Q5. tippecanoe's `--drop-densest-as-needed` does what? A. Drops the densest features at low zooms to keep tile size manageable * B. Drops random features C. Deletes the source D. Increases tile size --- ### Week 18: 3D globes: CesiumJS and orbital visualization Track: Mission GIS Engineer URL: https://launchdetect.com/academy/week/18/ Summary: From 2D maps to 3D globes. CesiumJS is the open-source heavyweight champion. This week you put a real satellite into a real-time 3D orbital simulation.
Objectives: - Initialize a CesiumJS viewer in HTML - Add an entity with a position track in time - Draw an orbital path in 3D space - Use Cesium's time slider for playback Opening question (place-based hook): When you imagine the ISS overhead, do you see it as a moving dot on a 2D map — or floating above Earth in 3D? 3D is closer to the truth. Cesium is the open-source 3D globe library that lets you put any data into orbital context. This week you'll fly your own. Connecting to Hawaiʻi — 3D globes and the wayfinding mindset: When Polynesian wayfinders teach navigation, they often start by having students imagine themselves standing still while the stars and the Earth move around them. The instinct for 3D-spatial-mental-models is deep in Hawaiian navigational tradition. Cesium reflects that perspective: it shows Earth as a 3D body in space, not a flat map. The same skill that helps a wayfinder visualize wind cells and ocean swells helps a programmer reason about orbital geometry. Hint: Cesium's free demo at cesium.com/cesiumjs/ lets you fly the camera anywhere on Earth. Try flying to Hawaiʻi. Tilt the view. Notice how 3D changes your sense of where things are. Lab: ISS in 3D — Build a CesiumJS web page that loads the current ISS TLE, propagates 24 hours, draws the orbital path as a polyline, and animates the ISS along it with the Cesium time control. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-18/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-18/lab.ipynb Primer: 2D web maps work fine for showing where things are on Earth's surface. But satellites are NOT on Earth's surface — they're hundreds to thousands of kilometers above it, moving fast, in highly inclined orbits that wrap around the planet. To show where a satellite actually is in a way that makes intuitive sense, you need a 3D globe. 
CesiumJS

CesiumJS (cesium.com) is the open-source 3D globe library that became the de-facto standard for serious 3D web GIS. NASA's Eyes on Asteroids, the FAA's traffic visualizations, every major satellite-tracker website with a 3D mode — all built on Cesium. The library is Apache 2.0 licensed; the commercial side is Cesium Ion, which hosts 3D tiles.

Cesium renders Earth as a WGS84 ellipsoid in WebGL. Coordinates are Earth-centered Cartesian (ECEF) internally; user-facing APIs accept lat/lon/altitude. The library handles all the camera math (orbit, zoom, tilt) so you don't have to.

Hello CesiumJS

```html
<link rel="stylesheet" href="https://cesium.com/downloads/cesiumjs/releases/1.121/Build/Cesium/Widgets/widgets.css">
<script src="https://cesium.com/downloads/cesiumjs/releases/1.121/Build/Cesium/Cesium.js"></script>
<div id="cesiumContainer" style="width:100%;height:600px"></div>
<script>
  Cesium.Ion.defaultAccessToken = 'your-ion-token-or-empty-for-offline';
  const viewer = new Cesium.Viewer('cesiumContainer');
  viewer.entities.add({
    name: 'Cape Canaveral',
    position: Cesium.Cartesian3.fromDegrees(-80.6, 28.6, 0),
    point: { pixelSize: 10, color: Cesium.Color.ORANGE }
  });
</script>
```

Entities, time, and SampledPositionProperty

An Entity is Cesium's unit of "a thing on the globe". It has a position (which can be a function of time), graphics (point, billboard, polyline, polygon, model, etc.), and metadata. A satellite is an entity whose position changes over time. You provide a series of timestamped positions; Cesium interpolates between them. The interpolation can be linear, Lagrange, or Hermite — for orbital motion, Hermite with a high degree gives the smoothest result.
```javascript
const positions = new Cesium.SampledPositionProperty();
positions.setInterpolationOptions({
  interpolationDegree: 7,
  interpolationAlgorithm: Cesium.HermitePolynomialApproximation
});

// For each propagated point from skyfield (or satellite.js):
for (const { time, lon, lat, alt } of orbital_points) {
  const jd = Cesium.JulianDate.fromIso8601(time);
  const xyz = Cesium.Cartesian3.fromDegrees(lon, lat, alt * 1000);
  positions.addSample(jd, xyz);
}

viewer.entities.add({
  name: 'ISS',
  position: positions,
  point: { pixelSize: 10, color: Cesium.Color.CYAN },
  path: {
    width: 2,
    material: Cesium.Color.CYAN.withAlpha(0.5),
    leadTime: 0,
    trailTime: 5400  // show a 90-minute trail
  }
});
```

Time control

Cesium has a built-in Clock and an animation widget. The clock advances time; the SampledPositionProperty re-samples on each tick. The animation widget lets the user scrub, pause, and change playback speed.

```javascript
viewer.clock.startTime = Cesium.JulianDate.fromIso8601('2026-05-11T00:00:00Z');
viewer.clock.stopTime = Cesium.JulianDate.fromIso8601('2026-05-12T00:00:00Z');
viewer.clock.currentTime = viewer.clock.startTime.clone();
viewer.clock.multiplier = 60;  // 60x real time
viewer.clock.shouldAnimate = true;
```

Performance ceiling

Cesium can comfortably render ~10,000 simple entities at 60 fps on a modern desktop. Beyond that you need primitives (a lower-level API) instead of entities. The whole CelesTrak catalog (~10,000) renders fine; the Starlink constellation alone (~6,000) is also fine. Synthetic constellations of ~100,000 satellites need primitives.

The lab

You'll build a CesiumJS page that loads the current ISS TLE, uses satellite.js (the JavaScript SGP4 port) to propagate 24 hours, draws the orbital path as a polyline, and animates the ISS along it with the Cesium time control. By the end you'll have a working real-time satellite tracker — the foundation for Capstone 4 in Week 20.

Reflection question (closing): A 2D map is a choice (where do you center it, which projection?).
A 3D globe is also a choice (where is the camera, which way is north?). What does each choice say? Quiz: Q1. CesiumJS uses: A. WebGL * B. Canvas 2D C. SVG D. DOM Q2. Cesium's coordinate system is: A. WGS84 (matching GPS) * B. Web Mercator C. UTM D. Local tangent plane Q3. An `Entity` in Cesium has: A. A position (possibly time-dependent), graphics, and metadata * B. Only a position C. Only graphics D. Only metadata Q4. Cesium Ion is: A. A 3D tile hosting service from Cesium (commercial) * B. A free CDN C. A Cesium fork D. A QGIS plugin Q5. To animate over time, use Cesium's: A. Clock + SampledPositionProperty * B. setTimeout loop C. CSS animations D. WebSocket only --- ### Week 19: Real-time GIS: WebSockets and streaming TLEs Track: Mission GIS Engineer URL: https://launchdetect.com/academy/week/19/ Summary: Real-time isn't optional in serious space GIS. This week you build the WebSocket pipeline that streams live satellite positions to a browser, throttling and reconnecting. Objectives: - Build a WebSocket server in Python (FastAPI / Socket.IO) - Stream live satellite position updates to a browser - Throttle updates to keep the browser responsive - Handle disconnects and reconnects gracefully Opening question (place-based hook): When a tsunami warning gets issued in the Pacific, how does the alert reach your phone in seconds, not minutes? Real-time geospatial systems push events from where they're detected (a buoy, a seismometer) to where they're needed (your phone, the school siren). WebSockets are how. This week you'll build the same pipeline. Connecting to Hawaiʻi — PTWC and Pacific real-time alerts: The Pacific Tsunami Warning Center, based in ʻEwa Beach on Oʻahu, issues warnings for the entire Pacific basin. When their seismologists detect a M7+ earthquake, alerts have to reach ~50 Pacific Island nations within minutes. The technical stack uses real-time message-passing — the same WebSocket pattern you'll learn this week. 
WebSocket is also how LaunchDetect streams launch detections, how Pacific Disaster Center streams damage assessments, how RealEarth streams GOES imagery. Real-time GIS is critical for everyone living around the Pacific Ring of Fire. Hint: tsunami.gov shows PTWC's live alerts. Their system architecture is publicly documented — and it's structurally identical to what you're learning. Lab: Live 100-satellite stream — Build a FastAPI WebSocket endpoint that streams positions of 100 satellites at 1 Hz. Build a MapLibre client that receives the stream and updates marker positions. Handle disconnects. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-19/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-19/lab.ipynb Primer: Real-time is not optional in serious space GIS. A launch detection that takes 5 minutes to appear in a user's browser is much less valuable than one that appears in 5 seconds. This week you build the WebSocket pipeline that makes this possible. WebSocket vs polling vs SSE Three options for getting server data to a browser in near-real-time: HTTP polling — client fetches once per N seconds. Simple, works through any proxy, but inefficient for high-frequency updates and high latency (you're at most 1 polling interval behind reality). Server-Sent Events (SSE) — server pushes events over a long-lived HTTP connection. Works through proxies and CDNs (it's just HTTP), simpler than WebSockets, but one-way (server → client only). WebSocket — bidirectional persistent TCP connection. The right choice when you need server → client at high frequency AND occasional client → server messages. Most space-domain live trackers use WebSockets. FastAPI as a WebSocket server FastAPI (fastapi.tiangolo.com) has first-class WebSocket support — declarative, async, type-checked. 
The skeleton:

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
import asyncio, json
from skyfield.api import EarthSatellite, load, wgs84

app = FastAPI()
ts = load.timescale()
sats = [EarthSatellite(l1, l2, name, ts) for name, l1, l2 in tle_catalog]

@app.websocket("/ws/positions")
async def positions(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            t = ts.now()
            payload = []
            for sat in sats[:100]:
                sp = wgs84.subpoint(sat.at(t))
                payload.append({
                    "name": sat.name,
                    "lat": sp.latitude.degrees,
                    "lon": sp.longitude.degrees,
                    "alt_km": sp.elevation.km,
                })
            await ws.send_text(json.dumps(payload))
            await asyncio.sleep(1.0)  # 1 Hz throttle
    except WebSocketDisconnect:
        pass  # clean disconnect — nothing to do
```

The browser side

The browser's WebSocket constructor opens the connection. On each message, update marker positions. Throttle DOM updates if needed (1 Hz to the server is fine; rendering at 60 fps in the browser is overkill — animate-tween between samples for smoothness):

```javascript
const ws = new WebSocket('wss://launchdetect.com/ws/positions');
ws.onmessage = (ev) => {
  const positions = JSON.parse(ev.data);
  for (const p of positions) {
    const marker = markers.get(p.name);
    marker?.setLngLat([p.lon, p.lat]);
  }
};
// exponential backoff, capped at 30 s
ws.onclose = () => setTimeout(reconnect, 1000 * Math.min(2 ** retryCount++, 30));
```

Throttling and back-pressure

10,000 satellites × 60 Hz = 600,000 updates/second. No browser will keep up; no server should send that. Realistic limits:

- Server: 1–4 Hz aggregate update cadence is typical. SGP4 is fast (microseconds per satellite), so the server can keep up with the computation; the bottleneck is network bandwidth and client capacity.
- Client: roughly 1,000 marker updates per frame is the ceiling on modern hardware. For 10,000 satellites at 1 Hz, the client needs to spread updates across multiple frames or batch them via WebGL.

Reconnection

WebSockets disconnect for many reasons: WiFi flap, server restart, mobile network switch, load balancer hiccup.
A production client must reconnect with exponential backoff (1 s, 2 s, 4 s, 8 s, ..., capped at 30 s) and detect close vs error events. Socket.IO (socket.io) bundles reconnection, namespaces, rooms, and HTTP-polling fallback — at the cost of a heavier wire protocol. For pure server-to-client streaming, raw WebSockets are usually enough. The lab You'll build a FastAPI WebSocket endpoint that streams positions of 100 satellites at 1 Hz, and a MapLibre client that receives the stream and updates marker positions in real time. Handle disconnects gracefully with exponential backoff. This is the architecture LaunchDetect uses to stream live detection events to the browser — minus authentication and selective subscription, which are Track 5 topics. Reflection question (closing): Real-time data saves lives — and exposes them. Pacific peoples have always known the ocean is in motion; now we have a way to track every motion. What changes in our relationship to ocean when we can see all of it at once? Quiz: Q1. WebSocket is: A. Bidirectional persistent connection over a single TCP connection * B. Just HTTP polling C. UDP only D. Server-side only Q2. Why throttle update rate to the browser? A. High update rates degrade browser rendering and battery * B. WebSockets are slow C. Required by spec D. Servers can't handle it Q3. Socket.IO adds what over raw WebSockets? A. Reconnection, namespaces, rooms, fallbacks * B. Just JSON encoding C. Encryption only D. Compression only Q4. For 10,000 satellites at 1 Hz, what's the bottleneck likely? A. Client rendering (DOM/WebGL), not WebSocket throughput * B. WebSocket protocol C. Server CPU only D. DNS Q5. Handling disconnects requires: A. Detecting close events and re-establishing with backoff * B. Trying again every 10 ms C. Ignoring the issue D. 
Reloading the page only --- ### Week 20: Spatiotemporal change detection (Capstone 4 week) Track: Mission GIS Engineer URL: https://launchdetect.com/academy/week/20/ Summary: Track 4 culminates here: bringing time into the analysis. Multi-frame change detection lets you see plumes evolving, deforestation progressing, or coastlines shifting. The capstone is a real-time satellite tracker. Objectives: - Stack multiple raster frames in time - Compute pixel-wise differences between frames - Apply threshold + morphology to identify change regions - Combine optical and thermal change indicators Opening question (place-based hook): If you photograph the same coral reef every week for a year, how can a computer tell you what's changing? Spatiotemporal change detection is how. Stack the frames, subtract baselines, threshold the difference, label the changes. This week the math is universal — and the applications are everywhere. Connecting to Hawaiʻi — Change detection for Kīlauea + reefs: The Hawaiian Volcano Observatory has been doing change detection on Kīlauea for decades — every lava flow, every collapse, every uplift gets quantified by comparing satellite imagery before and after. The same technique watches coral bleaching across Kāneʻohe Bay, watches forest regrowth after fires across Hawaiʻi Island, watches coastline retreat from sea-level rise on every island. Change detection is how we measure what we care about, over time. Hint: Google Earth Engine has a free 'Time Slider' that shows change at any location 1984–today. Try it on Kīlauea, on Honolulu, on your own backyard. Lab: Real-Time Satellite Tracker (capstone start) — Build a Cesium-based web app that shows ISS + Starlink visible passes for a user-supplied lat/lon. Click-to-inspect orbital elements. 24h replay. This is the deliverable for Capstone 4. 
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-20/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-20/lab.ipynb

Primer: Static analysis answers "what is here." Temporal analysis answers "what changed." For space GIS, the second question is often the more interesting one: did a plume appear? Did the landscape recover from a launch event? Did a sea-launch platform move? All of these reduce to comparing imagery at multiple times.

Stacking

The starting point is a stack: multiple frames of raster imagery, all aligned to the same projected grid. Source frames are rarely already aligned — they come at different times, sometimes from different orbits, and need resampling to a common reference grid. The standard tool is rasterio.warp.reproject() or, more conveniently, satpy.Scene.resample() when working with satellite NetCDFs.

```python
import xarray as xr

# One dataset per frame, each carrying the same 'Rad' radiance variable
frames = [xr.open_dataset(f) for f in file_list]
stack = xr.concat([f['Rad'] for f in frames], dim='time')
# stack now has dims (time, y, x) — 3-D
```

Per-pixel difference

The simplest change detection: subtract one frame from another. Positive values are pixels that got hotter; negative values are pixels that got cooler. For consecutive Band 7 frames during a launch, you'll see a small positive cluster (the plume forming) and then a small negative cluster (the plume fading).

```python
diff = stack.isel(time=1) - stack.isel(time=0)  # element-wise raster math
hot = diff > 30  # Boolean mask of pixels that gained >30 K
```

Morphology to clean noisy masks

Raw difference masks are noisy — single-pixel hits from sensor noise, edge artifacts from imperfect georegistration. Morphological operations clean this up:

Erosion — remove single-pixel hits by requiring contiguous neighbors. scipy.ndimage.binary_erosion.
Dilation — grow valid regions to fill small gaps.
scipy.ndimage.binary_dilation.
Opening (erosion + dilation) — removes small features while preserving large ones. Good for noise removal.
Closing (dilation + erosion) — fills small holes in larger features. Good for plume mask consolidation.

Connected components

After cleaning, label connected regions. Each connected hotspot cluster is a distinct "blob" — likely a single plume (or single false-positive source). scipy.ndimage.label assigns each blob a unique integer ID. You can then compute per-blob properties (area, centroid, bounding box, mean brightness) and filter blobs that meet plume-like criteria.

```python
from scipy.ndimage import label, center_of_mass

mask = hot.values
labeled, n_blobs = label(mask)
centroids = center_of_mass(mask, labeled, range(1, n_blobs + 1))
areas = [(labeled == i).sum() for i in range(1, n_blobs + 1)]
```

Multi-sensor change detection

Combining sensors makes detection more robust. A plume that appears in BOTH Band 7 thermal AND visible Band 2 is more likely a real launch (or a wildfire) than a Band-7-only signal (which could be a calibration artifact). Common multi-sensor combinations:

Band 7 + Band 2 — both for confirmation that a hotspot is also bright in visible (real combustion).
Band 7 + Band 14 — split-window difference distinguishes plume (sub-pixel) from extended cloud.
Optical + SAR — for change detection of ground infrastructure (new buildings, pad construction) where clouds would block optical.

The capstone

Week 20's lab is the start of Capstone 4: Real-Time Satellite Tracker. Build a Cesium-based web app that shows ISS + Starlink visible passes for a user-supplied lat/lon, with click-to-inspect orbital elements and 24-hour replay. The full rubric is on the capstone page; finishing it earns the Certified Mission GIS Engineer credential. Track 5 (Space GIS Architect) goes deeper into production-grade and expert-tier topics from here.

Reflection question (closing): What is something near where you live that's changed in your lifetime?
How would you measure that change if you wanted to make a case for protecting it? Quiz: Q1. Stacking rasters means: A. Aligning multiple frames to a common grid for time-series analysis * B. Concatenating files C. Compression D. Format conversion Q2. Per-pixel difference between two frames yields: A. A change raster (positive = increase, negative = decrease) * B. Always zero C. RGB D. A vector Q3. Morphology (dilation/erosion) is used to: A. Clean up noisy detection masks * B. Compute NDVI C. Reproject D. Compress Q4. Why combine optical + thermal in change detection? A. They see different physics — together more robust to false positives * B. Required by spec C. It's cheaper D. Doesn't matter Q5. xarray's `Dataset.diff()` does what? A. Differences along a dimension (e.g. time) * B. Compresses C. Rasterizes D. Filters by extent --- ### Week 21: Multi-sensor fusion: GOES-East, GOES-West, Himawari-9 Track: Space GIS Architect URL: https://launchdetect.com/academy/week/21/ Summary: One satellite sees one hemisphere. Three satellites see almost the whole disk. Fusing them is non-trivial: different projections, different timestamps, different radiometric calibrations. Objectives: - Reproject and align imagery from three different GEO satellites - Handle the seam between GOES-West and Himawari-9 over the Pacific - Build a hemispheric mosaic with consistent radiometric scaling - Manage the time-sync challenge across asynchronous sensors Opening question (place-based hook): GOES-18 watches Hawaiʻi. Himawari-9 watches Japan. What can the two see together that neither sees alone? The Pacific. From the Aleutians to New Zealand. Combining the two satellites is how you get hemispheric coverage. This week you'll learn to fuse their imagery. Connecting to Hawaiʻi — Pacific basin coverage: GOES-18 reaches its useful viewing limit somewhere near 180° (the antimeridian). Himawari-9, parked at 140.7°E, picks up from there and extends west to Australia and the South China Sea. 
Together, the two satellites give continuous coverage of the entire Pacific basin — including every Polynesian island. The Polynesian Voyaging Society could use this fused imagery for weather routing on a long Hōkūleʻa voyage. NOAA uses it for Pacific tropical-cyclone tracking. You can use it for any Pacific-scale question. Hint: Multi-sensor fusion is what makes 'truly global' geospatial monitoring possible. The Pacific got it first because the Pacific is where both satellites overlap. Lab: Pacific seam mosaic — Download time-matched GOES-18 and Himawari-9 frames. Reproject both to a common Pacific-centric projection. Stitch across the dateline seam. Output a hemispheric mosaic GeoTIFF. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-21/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-21/lab.ipynb

Primer: One geostationary satellite sees one hemisphere. Three together — GOES-East at 75.2°W, GOES-West at 137.2°W, and JMA Himawari-9 at 140.7°E — give you nearly the entire disk of Earth, with overlap regions for cross-calibration. Fusing them is the foundation of LaunchDetect's global coverage: a plume from any spaceport on Earth, from Mahia (New Zealand) to Kourou (French Guiana), is in at least one satellite's field of view.

Why three satellites

Each geostationary satellite has a useful viewing range of roughly ±70° in longitude and ±70° in latitude from its sub-satellite point. Beyond that, limb effects make data marginal. Combining the three covers from roughly 70°E (the western edge of Himawari-9) eastward across the Pacific and the Americas to roughly 5°W (the eastern edge of GOES-East). The only major coverage gap is the Indian Ocean / East Africa / Middle East region, covered by Meteosat (which has different bands and is harder to integrate).
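The ±70° rule turns into a one-screen coverage check. A sketch, not course code; the satellite longitudes are the sub-satellite points named in the primer, and the ±70° half-width is the rough limb limit, not a hard spec:

```python
SATS = {"GOES-East": -75.2, "GOES-West": -137.2, "Himawari-9": 140.7}

def wrap(lon):
    """Normalize a longitude to the range [-180, 180)."""
    return (lon + 180.0) % 360.0 - 180.0

def in_view(site_lon, sat_lon, half_width=70.0):
    """True if the site is within +/-half_width degrees of longitude of
    the sub-satellite point, handling the antimeridian wrap."""
    return abs(wrap(site_lon - sat_lon)) <= half_width

def coverage(site_lon):
    """Which of the three GEO satellites can plausibly see this longitude."""
    return [name for name, lon in SATS.items() if in_view(site_lon, lon)]

print(coverage(-155.1))  # Hawai'i: ['GOES-West', 'Himawari-9']
print(coverage(177.9))   # Mahia, NZ: ['GOES-West', 'Himawari-9']
print(coverage(30.0))    # the Indian Ocean-side gap: []
```

The wrap() helper is the same shortest-difference trick you need any time longitude arithmetic crosses the antimeridian.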
The radiometric problem

The three satellites' Band 7 channels are nominally identical (3.9 µm) but in practice differ:

Different spectral response functions (the exact wavelength sensitivity curve).
Different fixed-grid geometries (ABI and AHI Band 7 are both nominally 2 km at nadir, but sampled on different grids).
Different vicarious calibration cycles.

The result: a brightness temperature of 320 K from GOES-18 may correspond to 322 K from Himawari-9 looking at the same physical hotspot at the same moment. For absolute comparisons (is this >320 K?) you need cross-calibration. NOAA + JMA publish cross-calibration coefficients quarterly; the alternative is empirical regression against ground truth (SST buoys, etc.).

The time problem

Each satellite scans on its own schedule. GOES Full Disk takes 10 minutes per scan; Himawari Full Disk also takes 10 minutes, but starts at a different moment. To create a synchronized mosaic at time T, you need to interpolate each sensor's coverage to T — which means knowing the per-pixel scan time, not just the file start time. The NetCDFs publish this; satpy handles the interpolation if you ask it to.

The dateline problem

The line where GOES-West and Himawari-9 overlap is near the antimeridian (180°). Naive operations on lat/lon coordinates split a geometry crossing 180° into two pieces (one near -180°, one near +180°).
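The wrap failure is easy to reproduce with plain longitude arithmetic; the 170°E to 170°W hop below is an illustrative example, not course data:

```python
# A 20-degree hop across the antimeridian: 170 E -> 170 W
lon_a, lon_b = 170.0, -170.0

# Naive midpoint: averages through 0 degrees, the wrong side of the planet
naive_midpoint = (lon_a + lon_b) / 2  # 0.0

# Wrap-aware midpoint: take the signed shortest longitude difference first
delta = (lon_b - lon_a + 180.0) % 360.0 - 180.0          # +20.0 degrees
midpoint = (lon_a + delta / 2 + 180.0) % 360.0 - 180.0   # -180.0, the antimeridian
```

The same failure hits centroids, buffers, and interpolation in any library that treats longitude as an ordinary number.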
Use a Pacific-centric projection like Mercator centered at 180° (PROJ string: +proj=merc +lon_0=180) to keep the seam continuous:

```python
from rasterio.warp import calculate_default_transform, reproject, Resampling

src_crs = 'EPSG:4326'
dst_crs = '+proj=merc +lon_0=180 +datum=WGS84'

# Reproject each sensor's frame to Pacific-centric Mercator
transform, width, height = calculate_default_transform(
    src_crs, dst_crs, src.width, src.height, *src.bounds)
```

Stitching strategies

Where two sensors overlap (e.g., GOES-West and Himawari-9 overlap from ~150°E to ~150°W), choose one as the "master" or blend them:

Hard cutover — a single longitude boundary; west of it use Himawari, east of it use GOES-West. Simple, but visible seam.
Linear blend — in the overlap region, take a weighted average that smoothly transitions. Eliminates the visible seam but smears the radiometric truth.
Quality-weighted — use whichever sensor has the higher view-angle quality (closer to sub-satellite point) at each pixel. Best fidelity but most complex.

For thermal hotspot detection, hard cutover is usually fine — the threshold logic dominates the seam artifact.

The lab

You'll download time-matched GOES-18 (Band 7) and Himawari-9 (AHI Band 7) Full Disk frames over the Pacific from each agency's AWS Open Data bucket, reproject both to a common Pacific-centric Mercator, stitch across the dateline with a hard cutover at 180°, and output a hemispheric mosaic GeoTIFF showing the entire Pacific hemisphere's thermal field. This is the architecture LaunchDetect uses for global launch coverage.

Reflection question (closing): The Pacific is one ocean, but Western mapmaking traditions usually cut it down the middle. Multi-satellite Pacific fusion shows it as it is — whole. What other things look different if you stop cutting them in half?

Quiz: Q1. GOES-East and GOES-West longitude difference is: A. ~62 degrees (75.2W - 137.2W) * B. ~10 degrees C. ~120 degrees D. ~180 degrees Q2. Himawari-9 is at: A.
140.7E * B. 0 degrees C. 75.2W D. 137.2W Q3. Radiometric scaling across sensors requires: A. Cross-calibration coefficients or empirical regression * B. Nothing C. Multiplication by a constant only D. Just thresholding Q4. Time-sync challenge across GEO sats means: A. Their scans complete at slightly different wall times — must interpolate or align * B. All scan instantly together C. They scan only at noon D. Their clocks differ by hours Q5. Dateline-aware projection is needed because: A. At 180°, longitude wraps from -180 to +180 and naive operations split geometries * B. Cartography rule C. Required by spec D. Aesthetic --- ### Week 22: ML for satellite imagery: CNNs and U-Net segmentation Track: Space GIS Architect URL: https://launchdetect.com/academy/week/22/ Summary: Deep learning has rewritten remote sensing. CNNs (object detection) and U-Nets (semantic segmentation) are now standard. This week you train one on real GOES data. Objectives: - Train a CNN for object detection in raster imagery - Build a U-Net for semantic segmentation of clouds / plumes / fires - Generate training data via thresholding + manual labels - Evaluate with IoU and confusion matrices Opening question (place-based hook): Could a machine learning model spot bleaching coral from satellite imagery before a human diver does? Yes — and several already do. This week you'll learn U-Net, the same architecture researchers use for reef-bleaching detection, lava-flow mapping, and (yes) rocket-plume segmentation. Connecting to Hawaiʻi — ML for reef health: Researchers at the Hawaiʻi Institute of Marine Biology, at NOAA Coral Reef Watch, and at University of Hawaiʻi have published work using U-Net and similar CNNs to detect coral bleaching from satellite imagery — automating what used to require human diver surveys. The same architecture (encoder-decoder with skip connections) you'll learn this week powers those reef-monitoring systems. 
Weak supervision (training on imperfect labels) was a key technique because hand-labeling bleaching is expensive. Hint: Coral Reef Watch's daily bleaching alerts use thermal-IR + ML. Every alert that goes out saves diver-survey time and lets reef managers respond faster. Lab: U-Net for plume segmentation — Generate training data from threshold-detected plumes in GOES Band 7. Train a small U-Net to segment plume pixels. Evaluate on held-out launches with IoU and confusion matrices. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-22/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-22/lab.ipynb Primer: Deep learning has rewritten the playbook for satellite imagery analysis over the past decade. Convolutional neural networks now do object detection, semantic segmentation, super-resolution, and change detection at production scale across every major Earth-observation platform. This week is the practical primer: when to use deep learning vs threshold rules, the U-Net architecture, and how to train one on real GOES data. When deep learning beats thresholding Threshold rules (Week 14's Band 7 > 320 K) work when the discriminator is a single scalar feature. They break down when: The discriminator is spatial-contextual (a plume looks different from a wildfire in shape and spatial neighborhood, not just brightness). You need probabilistic output (confidence scores) for downstream cost-of-error decisions. You have many labeled examples and want a single classifier that captures complex patterns. Threshold rules are great for fast, explainable, debuggable baseline detection. Deep learning shines for the next layer: scoring, classification, and segmentation refinement. The U-Net architecture U-Net (Ronneberger et al. 2015) is the workhorse for image segmentation in remote sensing. 
It's an encoder-decoder with skip connections:

Encoder — successive 3x3 convolutions + 2x2 max-pool, halving the spatial dimensions and doubling the channel count at each level. By the bottleneck, the feature map is small but channel-rich.
Decoder — successive 2x2 transpose convolutions + 3x3 convolutions, doubling the spatial dimensions and halving channels. Reconstructs the original resolution.
Skip connections — at each decoder level, concatenate the corresponding encoder feature map. This preserves fine-grained spatial detail that would otherwise be lost in the bottleneck.

The output is a same-size map of per-pixel class probabilities. For plume segmentation, the classes are {background, plume}; for multi-class fire/plume/cloud, expand accordingly.

```python
import torch.nn as nn

class UNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, n_features=32):
        super().__init__()
        # ... 4 down blocks + bottleneck + 4 up blocks ...
        # Each block: Conv3x3 → BatchNorm → ReLU → Conv3x3 → BatchNorm → ReLU
        # Down: Conv block + MaxPool2x2
        # Up: ConvTranspose2x2 + concat with skip + Conv block
```

Weak supervision

The training-data problem: who hand-labels rocket plumes in tens of thousands of GOES frames? Nobody. The trick is weak supervision — generate the training labels programmatically. For plumes: run Week 14's threshold detector + Week 20's morphology cleanup over a year of GOES frames around known launches. Cross-check against the published launch schedule. Use those pixel masks as training labels. The labels are noisy (some false positives, some false negatives), but with enough volume the U-Net learns to denoise — it picks up on spatial context the threshold rule can't see.

Evaluation: IoU and confusion matrices

For segmentation, accuracy is misleading (a network that predicts "no plume everywhere" gets 99.99% accuracy because most pixels really are no plume). Use:

IoU (Intersection over Union) — area of overlap / area of union. 1.0 is perfect, 0.0 is no overlap.
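The IoU definition is a few lines of array math. A sketch using numpy, with tiny made-up masks for illustration:

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0  # empty-vs-empty counts as perfect

def precision_recall(pred, truth):
    """Pixel-level precision and recall from confusion-matrix counts."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fp), tp / (tp + fn)

pred  = np.array([[0, 1, 1], [0, 1, 0]], dtype=bool)
truth = np.array([[0, 1, 0], [0, 1, 1]], dtype=bool)
print(iou(pred, truth))               # 2 overlap / 4 union = 0.5
print(precision_recall(pred, truth))  # (2/3, 2/3)
```

Note that the "all background" degenerate predictor scores IoU 0.0 on any frame that contains a plume, which is exactly why IoU is the honest metric here.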
Compute per-class IoU and report the mean (mIoU).

Confusion matrix — true positives, false positives, false negatives, true negatives at the pixel level. Derive precision (TP / (TP + FP)) and recall (TP / (TP + FN)). For LaunchDetect's production gate, the rule is that the false-positive rate must stay below 5%.

Small models, not big

For thermal plume segmentation in 200×200 pixel windows, a 32-feature U-Net (~1M parameters) is more than enough. Don't reach for big pretrained models — they need huge training sets, they're slow to deploy, and the feature distribution of satellite imagery is far enough from ImageNet that pretrained weights help less than you'd expect.

The lab

You'll generate weakly-supervised training data from threshold detections + morphology over a year of GOES Band 7 frames, train a small U-Net in PyTorch, evaluate on held-out launches with IoU + confusion matrices, and confirm the per-pixel false-positive rate is below 5%. This is the architecture for LaunchDetect's "Layer 3" classifier — the production model that scores threshold-detected hotspots for plume-vs-fire-vs-noise.

Reflection question (closing): ML automates pattern detection that used to require people. Sometimes that's freeing (humans stop doing repetitive work). Sometimes that's deskilling (humans lose embodied knowledge). Where is the line?

Quiz: Q1. U-Net architecture is: A. Encoder-decoder with skip connections, ideal for segmentation * B. Just a CNN C. Recurrent D. Transformer-only Q2. IoU (intersection over union) measures: A. Overlap between predicted and ground-truth mask * B. Loss only C. Reprojection error D. Compression ratio Q3. Generating training data via thresholding is called: A. Weak supervision (programmatic labels) * B. Manual labeling C. Synthetic data D. Augmentation Q4. CNNs work well on images because: A. Translation invariance and locality * B. They're newest C. Only choice D. Marketing Q5. Why use a small U-Net (not a giant model)? A.
Faster inference, less overfitting on small training sets, deployable to edge * B. Always smaller is worse C. Required by law D. No reason --- ### Week 23: SAR: Sentinel-1, polarimetry, InSAR Track: Space GIS Architect URL: https://launchdetect.com/academy/week/23/ Summary: SAR sees through clouds, day and night. It can measure ground deformation to the centimeter. This week you decode the magic. Objectives: - Understand synthetic aperture radar fundamentals - Distinguish single, dual, and quad polarization - Compute interferometric phase between two SAR acquisitions (InSAR) - Detect ground deformation at centimeter scale with InSAR Opening question (place-based hook): When the ground shifts beneath us — Kīlauea inflating before an eruption, or the Pacific plate sliding — can satellites measure it? Yes, to the millimeter. SAR interferometry (InSAR) does it from 700 km up using radar phase. This week you'll see the math. Connecting to Hawaiʻi — InSAR over Kīlauea: The USGS Hawaiian Volcano Observatory uses InSAR routinely to monitor Kīlauea's inflation and deflation. Before the 2018 eruption, InSAR showed millimeter-scale uplift weeks in advance. During the eruption, it tracked subsidence as magma drained. After the eruption, it continued to map ground motion as the volcano re-equilibrated. The technique works in clouds (radar sees through them), it works at night, and it works on every volcano in the world. Same technique applies to sea-level rise on coastal Hawaiʻi. Hint: Sentinel-1 InSAR for Kīlauea is published openly. ASF DAAC has the raw data; processed interferograms are at HVO's site. Lab: InSAR over a volcanic deformation event — Download two Sentinel-1 SLC acquisitions over a known deformation event (volcanic uplift). Coregister. Form the interferogram. Identify the deformation pattern. 
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-23/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-23/lab.ipynb

Primer: Synthetic Aperture Radar is the most underused superpower in civilian Earth observation. It sees through clouds, day or night. It measures ground deformation to the millimeter via interferometry. It distinguishes surface texture, vegetation density, and soil moisture in ways optical sensors cannot. This week is the SAR primer for space-GIS practitioners.

How SAR works

SAR is an active sensor — it transmits its own microwave illumination and measures the backscattered signal. The "synthetic aperture" trick: a single small antenna on a moving spacecraft synthesizes the effective resolution of a much larger antenna by combining returns from successive positions along the orbit. This is what enables 5–10 meter resolution from a satellite-borne radar — impossible with a "real aperture" antenna of feasible size.

Wavelengths and polarizations

SAR satellites operate in distinct microwave bands, each with different penetration and sensitivity:

X-band (~3 cm) — TerraSAR-X, COSMO-SkyMed. High resolution, surface scattering, sensitive to roughness.
C-band (~5.6 cm) — Sentinel-1, RADARSAT-2. The civilian workhorse. Balanced between penetration and resolution.
L-band (~24 cm) — ALOS-2, NISAR (launched 2025). Penetrates vegetation canopy. Good for biomass and below-canopy mapping.
P-band (~70 cm) — BIOMASS (ESA, launched 2025). Penetrates dense forest canopy entirely.

Polarization adds more information: a radar can transmit horizontally (H) or vertically (V) polarized waves and receive either. Single-pol is one combination (e.g. VV). Dual-pol is two (VV + VH). Quad-pol is all four (HH, HV, VH, VV). Quad-pol enables polarimetric decomposition, distinguishing scattering mechanisms: surface (smooth ground), volume (vegetation), and double-bounce (urban walls).
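The synthetic-aperture claim can be made concrete with the standard rule-of-thumb formulas (real-aperture azimuth resolution ≈ λR/D; stripmap SAR azimuth resolution ≈ D/2). A sketch with illustrative numbers, not any specific mission's specs:

```python
WAVELENGTH_M = 0.056    # C-band, ~5.6 cm
SLANT_RANGE_M = 700e3   # ~700 km slant range (illustrative)
ANTENNA_M = 12.0        # physical antenna length (illustrative)

# What a "real aperture" radar of this size could resolve in azimuth:
real_aperture_res = WAVELENGTH_M * SLANT_RANGE_M / ANTENNA_M  # ~3.3 km

# What aperture synthesis achieves in stripmap mode, independent of range:
sar_azimuth_res = ANTENNA_M / 2  # 6 m
```

Roughly a 500× improvement in azimuth resolution, from kilometers to meters, recovered purely by signal processing along the orbit.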
Sentinel-1: the open SAR workhorse

Sentinel-1 (ESA, two satellites A and B before B's failure in 2021; C launched 2024) provides free global C-band SAR with 6-day revisit at the equator for the two-satellite constellation (12 days with a single satellite). Three main acquisition modes:

IW (Interferometric Wide) — 250 km swath, 5×20 m resolution, the standard mode.
EW (Extra Wide) — 400 km swath, lower resolution, for sea ice.
SM (Strip Map) — 80 km swath, higher resolution, on request only.

Data is on the ESA Copernicus Hub and on AWS Registry of Open Data.

InSAR: phase-based deformation

The most powerful trick in SAR is interferometry. The radar return at each pixel has both amplitude (signal strength) and phase (the wave's position in its cycle, modulo 2π). The phase encodes the path length from satellite to ground and back. Take two SAR acquisitions of the same scene from very similar viewing geometries, weeks or months apart. Compute the phase difference at each pixel — the interferogram. If the ground hasn't moved between acquisitions, the phase difference is zero (modulo 2π) everywhere. If part of the ground has subsided 28 mm (half a Sentinel-1 wavelength), the two-way path lengthens by a full wavelength, so the phase difference is 2π — one complete fringe in the interferogram.

InSAR practical applications:

Volcanic deformation (uplift before eruption, subsidence after).
Earthquake co-seismic displacement (centimeters of horizontal slip).
Urban subsidence from groundwater extraction (millimeters/year, slow but visible over multi-year time series).
Landslide motion (slow creep visible over weeks).
Building / infrastructure settlement.

Coherence

InSAR only works where the surface is stable enough between acquisitions to preserve phase. Vegetation, snow, and water are usually decoherent — the phase signal is noise. Bare rock, concrete, urban surfaces, and stable agricultural ground are coherent. Coherence (0–1) is a per-pixel quality measure. High coherence regions give reliable deformation; low coherence regions are masked out.
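The fringe arithmetic, in code. A sketch; 5.55 cm is the approximate Sentinel-1 C-band wavelength, and the factor of 4π (not 2π) reflects the two-way radar path:

```python
import math

WAVELENGTH_M = 0.0555  # Sentinel-1 C-band, ~5.55 cm

def insar_phase(los_displacement_m, wavelength_m=WAVELENGTH_M):
    """Interferometric phase (radians) for a line-of-sight displacement.
    The path from satellite to ground and back changes by TWICE the
    displacement, hence 4*pi rather than 2*pi."""
    return 4.0 * math.pi * los_displacement_m / wavelength_m

def fringes(los_displacement_m):
    """Number of full 2*pi fringes produced by a displacement."""
    return insar_phase(los_displacement_m) / (2.0 * math.pi)

print(fringes(WAVELENGTH_M / 2))  # half a wavelength of subsidence -> 1.0 fringe
print(fringes(0.010))             # 1 cm of line-of-sight motion -> ~0.36 fringe
```

This is also why fringe counting alone is ambiguous (phase is modulo 2π): recovering absolute displacement requires the phase-unwrapping step in the lab.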
The lab

You'll download two Sentinel-1 SLC (Single Look Complex) acquisitions over a known volcanic deformation event from the past 5 years, use SNAP (ESA's Sentinel Application Platform) or a Python wrapper to coregister them, form the interferogram, unwrap the phase, and identify the deformation pattern. The output is a centimeter-scale displacement map of a real geophysical event. SAR / InSAR is a major specialization in itself; this week is the orientation. For space GIS, SAR's relevance is increasing — both for change detection of ground launch infrastructure (new pads, modified facilities) and for environmental monitoring (subsidence around launch facilities, coastal erosion at Starbase).

Reflection question (closing): InSAR can measure ground motion under buildings, under crops, under sacred sites — without anyone consenting to be measured. Where's the ethical line between scientific observation and surveillance?

Quiz: Q1. SAR sends what to the ground? A. Radio waves (active microwave) * B. Visible light C. Infrared D. Gamma rays Q2. Sentinel-1 wavelength is: A. C-band, ~5.6 cm * B. X-band, ~3 cm C. L-band, ~24 cm D. P-band, ~70 cm Q3. Dual-pol means: A. Two polarization channels transmitted and received (e.g. VV+VH) * B. Two satellites C. Two passes D. Two phases Q4. InSAR coherence is: A. A measure of phase stability between two SAR images, 0–1 * B. A radar gain C. A wavelength D. A signal-to-noise ratio in time Q5. InSAR can measure ground deformation at: A. Centimeter to millimeter scale (depending on processing) * B. Always meter scale only C. Never below a kilometer D. Only above ground --- ### Week 24: Geodesy: ellipsoid vs geoid, EGM2008 Track: Space GIS Architect URL: https://launchdetect.com/academy/week/24/ Summary: GPS gives you 'height above ellipsoid'. Maps want 'height above mean sea level' (i.e. geoid). The two differ by up to ~100 m. This week is the science of fixing that.
Objectives: - Distinguish reference ellipsoid (geometric) from geoid (physical) - Apply EGM2008 corrections to elevations - Understand why GPS altitudes need geoid correction for orthometric height - Use proj-egm for vertical datum transformation Opening question (place-based hook): When the tide gauge at Honolulu says sea level rose 4 mm last year, was that the ocean rising — or was the land sinking? Geodesy answers that question. Distinguishing ellipsoid from geoid from orthometric height is how scientists know what's actually changing. This week you'll learn the three heights. Connecting to Hawaiʻi — Sea-level rise and geodesy in Hawaiʻi: Hawaiʻi has continuous tide-gauge records back to the early 1900s for Honolulu, Hilo, Kahului, and Nawiliwili. The records show sea-level rise of roughly 1.5 mm/year for most of the 20th century, accelerating in recent decades. Distinguishing 'sea level rose' from 'land subsided' requires geodetic measurements (GPS at the gauge site) combined with geoid models like EGM2008. The University of Hawaiʻi Sea Level Center, headquartered on Oʻahu, is one of the world's main repositories of this data. Hint: uhslc.soest.hawaii.edu publishes the data. Sea-level rise is happening fast in Hawaiian coastal communities. Knowing the math is part of knowing what to do about it. Lab: GPS-to-orthometric conversion — Take a set of GPS-measured points along the Sierra Nevada. Compute their ellipsoidal heights. Apply EGM2008 geoid correction. Compare with USGS-published orthometric heights.
This week is the geodetic primer: ellipsoids, geoids, height systems, and the EGM2008 model that maps between them. The ellipsoid: a geometric approximation Earth is not a sphere. It bulges at the equator and flattens at the poles, by roughly 21 km (the equatorial radius is 6,378 km; the polar radius is 6,357 km). The mathematically tractable approximation is an ellipsoid of revolution, defined by two parameters: equatorial radius (a) and flattening (f). The WGS84 ellipsoid is the GPS-standard model: a = 6378137.0 m (equatorial radius) f = 1 / 298.257223563 (flattening) b = a × (1 - f) = 6356752.3 m (polar radius) Almost every modern coordinate system you'll touch uses the WGS84 ellipsoid (or its near-identical GRS80 cousin). Some legacy systems (Clarke 1866, Bessel 1841, Airy 1830) used different ellipsoids that differ by tens of meters — relevant when working with pre-1980s topographic maps. The geoid: a physical surface The ellipsoid is purely geometric. The geoid is physical: it's the equipotential surface of Earth's gravity field that, on average, coincides with mean sea level over the oceans. Imagine extending this gravity surface through the continents — that's the geoid. Water flows downhill relative to the geoid, not relative to the ellipsoid. Critically, the geoid is not smooth. Earth's gravity field is bumpy because the mass distribution under the surface is uneven (mountain roots, dense ocean basins, sedimentary basins). The geoid undulates by ±100 meters relative to the WGS84 ellipsoid: The geoid is up to ~85 m above the ellipsoid over parts of the North Atlantic (Iceland's mass excess). The geoid is up to ~106 m below the ellipsoid south of India (a poorly-understood mantle-density anomaly known as the Indian Ocean Geoid Low). Three heights, three meanings For any point on Earth, you have three "height" values: Ellipsoidal height (h) — height above the WGS84 ellipsoid. This is what GPS reports natively. Useless for "altitude" in everyday English.
Geoidal height (N) — the geoid's offset from the ellipsoid at that location. Comes from a geoid model (e.g. EGM2008). Orthometric height (H) — height above the geoid, ≈ height above mean sea level. This is what maps and elevation profiles use. H = h - N. A 5,000 m mountain on a topo map has H = 5000 m. Its GPS receiver, however, might report h = 5040 m or 4970 m depending on the local geoid undulation. That 40 m (or 30 m) discrepancy is pure geoid offset. EGM2008 EGM2008 (Earth Gravitational Model 2008, by NGA) is the standard global geoid model, derived from satellite gravity, ground gravity, and altimetry. It's a spherical harmonic expansion to degree and order 2,159, with a typical accuracy of ±15 cm. It's supported by pyproj and almost every GIS library.

from pyproj import Transformer

# Ellipsoid height (from GPS) → orthometric height (above EGM2008 geoid).
# EPSG:3855 is the EGM2008 height datum; PROJ needs the EGM2008 geoid grid
# available (install proj-data or enable pyproj network downloads).
to_orthometric = Transformer.from_crs('EPSG:4979', 'EPSG:4326+3855', always_xy=True)
lon, lat, h = -118.6, 36.5, 4391  # Mt. Whitney, GPS-measured ellipsoidal height
lon, lat, H = to_orthometric.transform(lon, lat, h)
print(f"Orthometric height: {H:.1f} m")  # H = h - N; N ≈ -30 m here, so ≈ 4421 m

Why this matters for space GIS For most space-domain applications, ellipsoidal height is fine. The exceptions: Comparing satellite-altitude-derived sub-satellite positions to terrain elevation (orthometric). Computing line-of-sight blocked by terrain: the terrain DEM is orthometric; the satellite altitude is ellipsoidal. You must convert one to match. Reporting "altitude" to non-technical users — they expect mean-sea-level. Use orthometric and label it. Precise positioning of ground stations (sub-meter), where the up-to-100 m geoid offset would otherwise be a meaningful error source. The lab You'll take a set of GPS-measured points along the Sierra Nevada (provided), compute their ellipsoidal heights, apply EGM2008 geoid correction to get orthometric heights, and compare with USGS-published orthometric heights at the same coordinates.
The agreement should be within ±2 m if the GPS measurements were of survey quality. By the end, you'll never confuse the three heights again. Reflection question (closing): When sea level rises 1 m by 2100 (a plausible scenario), which Hawaiian places that you love are at risk? How do you want them mapped — and by whom? Quiz: Q1. The reference ellipsoid is: A. A geometric approximation to Earth's shape (WGS84 ellipsoid is the default) * B. Earth's actual physical shape C. Mean sea level surface D. The geoid Q2. The geoid is: A. The equipotential surface of Earth's gravity that best matches mean sea level * B. A mathematical sphere C. A reference ellipsoid D. The Earth's center of mass Q3. Ellipsoidal and orthometric heights can differ by: A. Up to ~100 meters globally * B. Always 0 C. Never more than 1 m D. Always 30 m Q4. EGM2008 is: A. A global geoid model from gravity data * B. An ellipsoid C. A datum D. A projection Q5. Orthometric height is: A. Height above the geoid (close to mean sea level) * B. Height above the ellipsoid C. GPS-reported height directly D. Always zero --- ### Week 25: AR for satellites: sky-direction overlays and az-el math Track: Space GIS Architect URL: https://launchdetect.com/academy/week/25/ Summary: Augmented reality for the sky. LaunchDetect's mobile app puts ISS, Starlink, and Hubble into your viewfinder. This week is the math + code that powers it. Objectives: - Compute azimuth and elevation from observer to satellite - Project that direction into a smartphone camera frame - Use device orientation sensors to align the overlay - Build a working AR overlay for satellite spotting Opening question (place-based hook): Tonight at sunset, where in the sky should you look to see the ISS? Could your phone show you? Augmented reality answers exactly that. Point your phone at the sky; the ISS appears as a moving dot at the right pixel. This week you'll build the math — and connect it to traditions older than the ISS.
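The "where should you look" half of that question is the azimuth/elevation math from Week 9. As a refresher, here is a pure-Python sketch of it (no libraries; in real code the ECEF positions would come from an SGP4 ephemeris rather than being supplied by hand):

```python
import math

def ecef_to_az_el(obs_lat_deg, obs_lon_deg, obs_ecef, sat_ecef):
    """Azimuth (deg from north, clockwise) and elevation (deg) of a
    satellite as seen from an observer, both given in ECEF meters."""
    lat, lon = math.radians(obs_lat_deg), math.radians(obs_lon_deg)
    dx = [s - o for s, o in zip(sat_ecef, obs_ecef)]
    # Rotate the ECEF difference vector into local East-North-Up axes
    east = -math.sin(lon) * dx[0] + math.cos(lon) * dx[1]
    north = (-math.sin(lat) * math.cos(lon) * dx[0]
             - math.sin(lat) * math.sin(lon) * dx[1]
             + math.cos(lat) * dx[2])
    up = (math.cos(lat) * math.cos(lon) * dx[0]
          + math.cos(lat) * math.sin(lon) * dx[1]
          + math.sin(lat) * dx[2])
    az = math.degrees(math.atan2(east, north)) % 360
    el = math.degrees(math.atan2(up, math.hypot(east, north)))
    return az, el
```

A satellite directly overhead comes out at elevation 90°; one on the horizon due east comes out at azimuth 90°, elevation 0°.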
Connecting to Hawaiʻi — Wayfinding and the sky overhead: Polynesian wayfinders read the sky by knowing dozens of stars and their rising/setting positions on the horizon. AR satellite-spotting apps work on the same principle: convert a celestial object's altitude and azimuth into a pixel position on your screen. The math is more recent; the spatial-thinking habit is ancient. Many young Hawaiian wayfinders today carry both the traditional knowledge and the modern tools. Hint: On Mauna Kea or Haleakalā at sunset, the ISS pass and the visible stars are both spectacular. Both navigable. Both connected. Lab: AR satellite spotter (browser) — Build a browser-based AR demo using the device orientation API. Show a moving dot at the correct azimuth/elevation for the ISS overlaid on the camera view. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-25/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-25/lab.ipynb Primer: Augmented reality for satellite spotting: point your phone at the sky and see exactly where the ISS is, where Starlink trains are about to streak, where a recent SpaceX upper stage is decaying. LaunchDetect's mobile app ships this as a core feature; this week you build the math and the code that powers it. The geometry From any observer on Earth, a celestial object's position is described by two angles: azimuth (compass direction, 0° = north, increasing clockwise) and elevation (degrees above the horizontal). Week 9 covered how to compute these from an observer's lat/lon and a satellite's ephemeris. AR is the inverse problem: given the satellite's azimuth and elevation right now, where should the satellite's icon appear in the camera's viewport? Three coordinate systems involved: Earth frame — the satellite's position relative to true north and the local horizontal. Phone frame — the phone's current orientation in space (which way is it pointing? how tilted?). 
Camera frame — the camera's pixel grid, where (0, 0) is one corner and (W, H) is the opposite. DeviceOrientation API The browser's DeviceOrientation API tells you the phone's orientation: alpha — rotation around the vertical (Z) axis, 0–360°. Approximately the compass direction the phone's back is pointing (subject to calibration). beta — rotation around the X axis (front-back tilt), -180° to 180°. Phone flat on table = 0°; phone vertical, screen toward user = 90°. gamma — rotation around the Y axis (left-right tilt), -90° to 90°.

window.addEventListener('deviceorientation', (event) => {
  const alpha = event.alpha; // compass, 0–360
  const beta = event.beta;   // tilt forward/back
  const gamma = event.gamma; // tilt left/right
  updateOverlay(alpha, beta, gamma);
});

iOS quirk: Apple requires user permission via DeviceOrientationEvent.requestPermission(). Without that gesture, no events fire. Build the UX accordingly. Magnetic vs true north The DeviceOrientation alpha is referenced to magnetic north, not true north. The two differ by the local magnetic declination, which varies from ~0° in much of South America to ~20°+ at high latitudes and can flip sign over a few hundred kilometers. To overlay satellite positions accurately, convert from true north (what skyfield gives you) to magnetic north (what alpha gives you) using a magnetic declination model — the World Magnetic Model (WMM) or the International Geomagnetic Reference Field (IGRF). JavaScript libraries: geomagnetism on npm wraps WMM and computes the declination for a (lat, lon, date) in milliseconds. Projecting to the viewport Given the phone's orientation and the satellite's (azimuth_true, elevation), compute the angular offset between the camera's center direction and the satellite's direction. If that offset is within the camera's field of view (typically ~67° horizontal, ~52° vertical for a modern smartphone), place the icon at the corresponding pixel.
// Pseudocode. W, H are the viewport's pixel dimensions; placeIcon draws the icon.
const dAz = ((satAzimuth - phoneAzimuth + 540) % 360) - 180; // wrapped to [-180, 180]
const dEl = satElevation - phoneElevation;
const fovH = 67, fovV = 52;
if (Math.abs(dAz) < fovH / 2 && Math.abs(dEl) < fovV / 2) {
  const x = (W / 2) + (dAz / (fovH / 2)) * (W / 2);
  const y = (H / 2) - (dEl / (fovV / 2)) * (H / 2);
  placeIcon(x, y);
}

WebXR vs CSS-3D vs DIY Three implementation approaches: DIY (recommended for satellites) — use DeviceOrientation + raw camera <video> stream + CSS-positioned icons. ~100 lines of code, works on every modern browser. WebXR — the standardized AR API. More capable but limited browser support (Chrome Android, Safari iOS still rolling out). CSS 3D transforms — for simple overlays, CSS transform: translate3d(...) on icon elements is enough and works universally. The lab You'll build a minimal browser-based AR satellite spotter: request DeviceOrientation permission, get the user's GPS lat/lon, compute the ISS's current azimuth and elevation with satellite.js, convert to magnetic-north reference, project onto the camera viewport, and overlay a moving dot. By the end you'll have the same core experience as LaunchDetect's mobile AR feature, in 200 lines of vanilla JavaScript. Reflection question (closing): AR overlays modern data onto a place. Traditional wayfinding overlays cultural knowledge onto the same place. Can the two coexist on the same screen — or in the same mind? Quiz: Q1. DeviceOrientation API gives you: A. alpha (compass), beta (front-back tilt), gamma (left-right tilt) * B. GPS only C. Time only D. Camera frame only Q2. Azimuth in AR overlay must be referenced to: A. True north (or convert from magnetic via declination) * B. Always magnetic north C. User's facing direction D. GPS heading Q3. Why elevation matters in AR sky overlay: A. Vertical position of the satellite icon in the viewport * B. It's the only useful angle C. It's irrelevant D. Just for naming Q4. Field-of-view for typical smartphone camera is: A. ~60-70 degrees horizontal * B. ~10 degrees C.
~120 degrees always D. ~180 degrees Q5. WebXR vs CSS-3D for AR overlays: A. WebXR is more capable but limited browser support; CSS-3D is universal and good for simple overlays * B. WebXR is universal C. CSS-3D doesn't exist D. Same thing --- ### Week 26: Cloud-native: COG, Zarr, STAC catalogs Track: Space GIS Architect URL: https://launchdetect.com/academy/week/26/ Summary: The old way: download a 10 GB GeoTIFF. The new way: range-request just the bytes you need from S3. COG, Zarr, and STAC make this possible. Objectives: - Understand COG (Cloud-Optimized GeoTIFF) structure - Use Zarr for multi-dimensional gridded data - Build and query a STAC catalog - Range-request a tile out of a COG without downloading the file Opening question (place-based hook): If you have to download an entire Sentinel-2 scene to use it, fetching takes minutes. What if you could just grab the bytes you need? Cloud-optimized formats (COG, Zarr) and catalogs (STAC) make that possible. This week, you'll fetch just the pixels you want — for any place on Earth — in seconds. Connecting to Hawaiʻi — STAC catalogs and Hawaiian data: Microsoft Planetary Computer's STAC catalog includes Sentinel-2 over Hawaiʻi (every scene back to 2015), Landsat (back to the 1970s), VIIRS night-lights, and many others — all queryable with one line of code, all free, all served as COGs you can range-request. The Hawaiʻi Statewide GIS Program has begun publishing some of its own datasets in STAC-compatible formats. This is the future: open standards, open access, partial downloads. Hint: Try Planetary Computer's STAC search box at planetarycomputer.microsoft.com — type 'Hawaiʻi' and see what's available. Lab: Pull a single tile from a COG without downloading the file — Identify a COG-formatted GOES product on AWS Open Data. Use rio-tiler to fetch just a single tile via HTTP range request. Time it vs downloading the whole file. 
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-26/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-26/lab.ipynb Primer: The traditional way to use satellite imagery: download a 5 GB scene, unzip it, load it into desktop GIS. The cloud-native way: range-request just the bytes you need from a file living on S3, never download the whole thing. This week is the three formats and one spec that make that possible. Cloud-Optimized GeoTIFF (COG) COG isn't a new file format. It's a particular way of writing a regular GeoTIFF so that HTTP-range-request access is efficient. Three requirements: Internal tiling — the image is divided into ~256×256 or 512×512 pixel tiles, stored in row-major order. The file header has an index of where each tile begins. Internal overviews — the file also contains downsampled versions of the image (typically at 2×, 4×, 8×, ... resolution) for fast low-zoom rendering. Header at the beginning — TIFF allows the IFD (image file directory) to live anywhere; COG mandates the beginning so a small range request can read the structure first. With those properties, a client can: (1) range-request the first ~64 KB to read the header and tile index, (2) compute which tiles cover the area of interest at the right zoom level, (3) range-request only those tiles. Total bytes transferred: kilobytes, not gigabytes.

from rio_tiler.io import COGReader

# Read just a small window from a COG on S3 — no full download
with COGReader('https://noaa-goes18.s3.amazonaws.com/.../foo.tif') as cog:
    img = cog.part(bbox=(-100, 30, -80, 40), max_size=512)

Zarr Zarr is a format for chunked, compressed, multi-dimensional arrays. Where COG is for 2D rasters, Zarr is for the (time × lat × lon × band × ...) hypercubes that modern Earth-observation analysis often needs.
The data is stored as a directory tree on S3, with each chunk a separate object — so parallel reads of different chunks can fan out across many concurrent workers.

import xarray as xr

ds = xr.open_zarr('s3://my-bucket/era5-temperature.zarr', storage_options={'anon': True})
# Now ds is a lazy xarray Dataset; reading a subset triggers parallel chunk fetches.
# (Named `subset`, not `slice`, to avoid shadowing Python's built-in slice.)
subset = ds.air_temperature.sel(time='2024-01-15', lat=slice(30, 40), lon=slice(-100, -80))
subset.load()  # actually fetches the chunks

Zarr is the standard for cloud-native climate, reanalysis, and time-series gridded data. The Pangeo community runs a free public collection of huge Zarr datasets at catalog.pangeo.io. STAC: SpatioTemporal Asset Catalog You have a COG or a Zarr. How do you tell users about it? How do they discover that you have a frame over Florida on January 15? Enter STAC, the SpatioTemporal Asset Catalog spec. STAC defines a small set of JSON schemas: Item — one asset (e.g. one Landsat scene). Has geometry, time range, properties, and asset URLs (the actual COG / Zarr / etc.). Collection — a homogeneous group of items (e.g. "Landsat 9 Level-2 surface reflectance"). Catalog — a hierarchy of collections. STAC API — a standardized REST interface for searching across catalogs. Endpoints: /search, /collections, /items. Major STAC catalogs (all free to query): Microsoft Planetary Computer — Landsat, Sentinel-1, Sentinel-2, NAIP, ESA WorldCover, and dozens more. AWS Earth Search — Sentinel-2, Landsat, NAIP via Element 84. Radiant Earth MLHub — labeled training datasets for ML.
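Concretely, a STAC Item is plain JSON. Here is a minimal sketch as a Python dict; the id, geometry, and asset href are illustrative, not a real catalog entry:

```python
# Minimal STAC Item (illustrative values, not a real catalog entry):
# type, stac_version, id, geometry, bbox, properties.datetime, links,
# and assets are the core required fields.
minimal_item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-scene-20240115",
    "bbox": [-81.0, 28.0, -80.0, 29.0],
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-81.0, 28.0], [-80.0, 28.0], [-80.0, 29.0],
                         [-81.0, 29.0], [-81.0, 28.0]]],
    },
    "properties": {"datetime": "2024-01-15T16:00:00Z"},
    "links": [],
    "assets": {
        "visual": {
            "href": "https://example.com/example-scene-20240115.tif",
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
        }
    },
}
```

The asset's href is exactly the kind of COG URL you can hand to rio-tiler for a range-requested read.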
from pystac_client import Client

cat = Client.open('https://planetarycomputer.microsoft.com/api/stac/v1/')
search = cat.search(collections=['sentinel-2-l2a'],
                    bbox=[-81, 28, -80, 29],
                    datetime='2024-01-01/2024-02-01')
items = list(search.items())
print(f"{len(items)} matching scenes")

The lab You'll identify a COG-formatted GOES product on AWS Open Data, use rio-tiler to fetch just a single map tile from it via HTTP range request, and time it against downloading the whole file. The speedup is typically 50–500×. Then you'll query the Microsoft Planetary Computer STAC API for Sentinel-2 scenes over Cape Canaveral in 2024 — a one-line search that returns dozens of cloud-free items, each with COG asset URLs you can immediately range-request. This is the architecture every modern production geospatial pipeline uses, including LaunchDetect's. You no longer download data; you query catalogs and range-request the bytes you need. Reflection question (closing): If anyone can fetch any byte of Earth observation data they want, what shifts? Who benefits from democratized data? Who loses gatekeeping power? Whose responsibility is it to use the access wisely? Quiz: Q1. COG is: A. A GeoTIFF with internal tiling + overviews + correct byte ordering for HTTP range reads * B. A new format separate from GeoTIFF C. A vector format D. A compression scheme only Q2. Zarr is best for: A. Multi-dimensional gridded data (e.g. time × lat × lon × band), chunkable, parallelizable * B. Vector data C. 1D time series only D. Single static rasters Q3. STAC is: A. SpatioTemporal Asset Catalog — a spec for cataloging geospatial assets * B. A file format C. A query language D. A database Q4. HTTP range request lets you: A. Fetch a byte range of a file rather than the whole file * B. Run faster C. Authenticate D. Compress Q5. STAC API standard endpoints include: A. /search, /collections, /items * B. /users, /posts only C. /login, /logout D.
GraphQL only --- ### Week 27: Production pipelines: S3 → Lambda → EventBridge → DDB Track: Space GIS Architect URL: https://launchdetect.com/academy/week/27/ Summary: Production geospatial isn't a notebook — it's a pipeline. This week is the real AWS architecture LaunchDetect runs in production: S3 ingest, Lambda compute, EventBridge schedule, DynamoDB state. Objectives: - Wire S3 PutObject events to Lambda triggers - Use EventBridge for scheduled and event-driven orchestration - Persist detection records to DynamoDB - Reason about cost and latency in serverless geospatial pipelines Opening question (place-based hook): When something important happens — a launch, an eruption, a hurricane — how does the alert get from the satellite to the people who need it? Through a pipeline of cloud services. This week you'll wire one up: S3 ingests data, Lambda processes it, DynamoDB stores it, EventBridge routes the alert. Same architecture LaunchDetect uses; same architecture you could use to build community-grade alerts. Connecting to Hawaiʻi — Community alerting infrastructure: The State of Hawaiʻi Emergency Management Agency's HI-EMA Alert system runs on cloud-native infrastructure not unlike what you'll learn this week. Pacific Disaster Center, based on Maui, runs the DisasterAWARE platform serving 90+ countries on similar AWS architecture. Knowing how this works means you can build the same kind of system for your own community: a flood-alert pipeline for a single watershed, a beach-closure alert for a stretch of coast, a coral-bleaching alert for a specific reef. Hint: Pacific Disaster Center has internships. So does HI-EMA. Knowing the cloud-native geospatial stack opens those doors. Lab: Mini detection pipeline — Build a Lambda triggered by S3 PutObject. The Lambda reads a small GOES NetCDF, threshold-detects hotspots, writes records to a DynamoDB table. Deploy with AWS CDK. 
Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-27/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-27/lab.ipynb Primer: Production geospatial isn't a notebook. It's a pipeline: data lands somewhere, code runs, results land somewhere else, the pipeline is monitored, alerts fire when something breaks. This week is the AWS architecture that LaunchDetect actually runs in production — minus a few enterprise-specific layers. The core stack: S3 + Lambda + EventBridge + DynamoDB Four services, each doing one thing well: S3 — object storage. Every GOES NetCDF, every detection JSON, every static page lives in S3. Cheap (~$0.023/GB/month), durable (11 9s), event-emitting. Lambda — serverless compute. Function as a service. You pay for invocations and execution duration; no servers to manage. Cold start is the main pitfall. EventBridge — event bus. Routes events between AWS services and your own consumers. Replaces ad-hoc SNS/SQS/CloudWatch Events combinations. DynamoDB — NoSQL key-value/document store. Single-digit-millisecond reads. Pay for storage + read/write units. The detection pipeline LaunchDetect's flow: NOAA writes a new GOES Band 7 mesoscale NetCDF to s3://noaa-goes18/.... NOAA's bucket emits an S3 event; an EventBridge rule fans it out to LaunchDetect's scorer Lambda. The scorer Lambda fetches the NetCDF (range-requested for just the geographic window of interest), converts radiance to brightness temperature, threshold-detects hotspots, applies parallax correction, runs the Layer 3 ML classifier, and writes detection candidates to DynamoDB. A DynamoDB stream triggers a publisher Lambda that decides whether the candidate is a real launch (vs fire / glint / industrial source), writes the public detection JSON to S3, and emits a "launch detected" event to EventBridge. Subscribers (web dashboard, push-notification service, blog generator) receive the event and update their own state. 
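The scorer stage of that flow can be sketched as a minimal S3-triggered handler. Everything below is illustrative, not LaunchDetect's production code: the threshold, the stubbed NetCDF read, and the record shape are assumptions, and real code would range-read the NetCDF and batch-write to DynamoDB with boto3.

```python
import json

def fetch_brightness_temps(bucket, key):
    # Stand-in for the real step: range-read the GOES NetCDF and convert
    # Band 7 radiance to brightness temperature (Weeks 13-14).
    return [395.0, 412.5, 401.0]

def detect_hotspots(bt_kelvin, threshold_k=400.0):
    """Indices of pixels whose brightness temperature exceeds the threshold."""
    return [i for i, bt in enumerate(bt_kelvin) if bt > threshold_k]

def handler(event, context):
    """S3-event-shaped Lambda entry point: one detection record per hot file."""
    detections = []
    for rec in event.get("Records", []):
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        hot = detect_hotspots(fetch_brightness_temps(bucket, key))
        if hot:
            # Production code would put this item into DynamoDB via boto3,
            # keyed by a time-ordered id rather than the object key.
            detections.append({"pk": f"DETECTION#{key}", "hot_pixels": hot})
    return {"statusCode": 200, "body": json.dumps({"detections": len(detections)})}
```

Feeding it a synthetic S3 event (the same JSON shape the real trigger delivers) exercises the whole path without any AWS resources.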
Total latency from NOAA file landing to a push notification on a user's phone: typically 30–90 seconds. DynamoDB partition key design DynamoDB's #1 footgun is hot partitions. Every item has a partition key (PK) and optionally a sort key (SK). DynamoDB hashes PK and routes the item to a physical partition. If 90% of your writes go to a single PK, you bottleneck on that one partition's WCU/RCU limit (3,000 reads / 1,000 writes per second). Good PK choices spread writes evenly across partitions. For launch detections, a natural PK is DETECTION#{ulid} — ULIDs are time-ordered but have enough entropy that they distribute evenly. Bad PK: DATE#{yyyy-mm-dd} — all today's writes go to one partition. Lambda cold starts When Lambda receives a request and has no warm container available, it cold-starts: provision a sandbox, download the function code, initialize the runtime, run the handler. Cold start can be 200 ms (Python 3.13 lightweight) to 3+ seconds (heavy Java / large dependency tree). For latency-sensitive request paths (API endpoints), cold start matters and you mitigate with: provisioned concurrency, smaller deployment packages, lighter runtimes, lazy imports. For event-driven batch (which is most space-GIS pipelines), cold start is fine — a launch detection that takes 90 seconds doesn't care about 500 ms cold start. AWS CDK AWS CDK (Cloud Development Kit) is infrastructure-as-code in real programming languages — TypeScript, Python, Java, Go. You write classes that instantiate AWS resources; CDK synthesizes them to CloudFormation templates; CloudFormation deploys them. 
import * as cdk from 'aws-cdk-lib';
import { Bucket, EventType } from 'aws-cdk-lib/aws-s3';
import { Function, Runtime, Code } from 'aws-cdk-lib/aws-lambda';
import { S3EventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
import { Table, AttributeType } from 'aws-cdk-lib/aws-dynamodb';

export class DetectionStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);
    const bucket = new Bucket(this, 'IngestBucket');
    const table = new Table(this, 'Detections', {
      partitionKey: { name: 'pk', type: AttributeType.STRING },
      sortKey: { name: 'sk', type: AttributeType.STRING }
    });
    const scorer = new Function(this, 'Scorer', {
      runtime: Runtime.PYTHON_3_13,
      handler: 'handler.handler',
      code: Code.fromAsset('lambda/scorer')
    });
    scorer.addEventSource(new S3EventSource(bucket, {
      events: [EventType.OBJECT_CREATED] // EventType imported from aws-s3 above
    }));
    table.grantWriteData(scorer);
  }
}

The lab You'll build a mini detection pipeline: a Lambda triggered by S3 PutObject, that reads a small GOES NetCDF, threshold-detects hotspots, and writes detection records to a DynamoDB table. Deploy with AWS CDK. This is the architecture of LaunchDetect's artgis-cluster-scorer Lambda in production, minus the ML scoring layer and parallax correction. Reflection question (closing): Cloud infrastructure makes powerful systems cheap to build. It also concentrates control in three big cloud providers. What does that tradeoff mean for community-scale tools? Quiz: Q1. S3 → Lambda trigger is configured via: A. S3 event notification to Lambda function ARN * B. Polling C. SNS only D. EventBridge only Q2. EventBridge is best for: A. Decoupled event routing, scheduled rules, cross-service orchestration * B. Database C. Just cron D. File storage Q3. DynamoDB partition key choice impacts: A. Distribution and hot-partition behavior * B. Cost only C. Nothing D. Display order Q4. Lambda cold start matters for: A. Latency-sensitive endpoints; less for event-driven batch * B. Always C. Never D. Only TypeScript Q5.
AWS CDK is: A. Infrastructure-as-code in TypeScript / Python / Java / Go * B. Just a CLI C. A database D. A managed service --- ### Week 28: Privacy + ethics: MGRS, sub-meter, ITAR Track: Space GIS Architect URL: https://launchdetect.com/academy/week/28/ Summary: Geospatial intelligence isn't neutral. Sub-meter imagery, plume signatures, ground tracks — all carry real-world implications for privacy, security, and law. This week is the ethical frontier. Objectives: - Understand MGRS (Military Grid Reference System) - Reason about export-controlled (ITAR) imagery - Apply privacy considerations to sub-meter-resolution data - Identify dual-use risk in geospatial systems Opening question (place-based hook): Hawaiian elders carry knowledge about places — sacred sites, fishing grounds, burial sites — that they sometimes choose not to share. Should that data be in a public GIS database? Geospatial work involves ethical decisions every day. This week is about MGRS, ITAR, privacy — and about the deeper question: whose data is it? Connecting to Hawaiʻi — Sovereignty, ITAR, and the right to be unmapped: Indigenous data sovereignty is a growing field — the principle that data ABOUT a people (or about their sacred places) belongs to that people, not to whoever collected it. In Hawaiʻi, multiple organizations practice this: the Office of Hawaiian Affairs has data-sharing protocols, some traditional fishing grounds are deliberately kept out of public databases, certain heiau are marked on public maps with deliberately vague coordinates. ITAR is the US law side of this conversation — the formal rules. Indigenous data sovereignty is the deeper question: who decides what gets seen? Hint: OHA's resources on data sovereignty are worth reading. So is the CARE Principles for Indigenous Data Governance. Both reshape how you think about 'open data.' Lab: ITAR compliance self-audit — Given a hypothetical product (e.g. 
an orbital-tracking SaaS), produce a written ITAR compliance assessment: what's covered, what's exempt, where the lines are. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-28/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-28/lab.ipynb Primer: Space GIS is not ethically neutral. Sub-meter satellite imagery, real-time launch detection, ground-track prediction — all carry legitimate dual-use concerns: every capability that helps an emergency responder also helps an adversary. This week is the responsible-practitioner primer, written for engineers who will at some point have to make a judgment call. MGRS and military coordinate systems The Military Grid Reference System (MGRS) is a global coordinate system used by NATO militaries. It's a friendlier-to-humans version of UTM: instead of "zone 33T, 583456 m E, 5221324 m N" in UTM, an MGRS coordinate looks like "33TWN8345621324" — a compact alphanumeric string with built-in zone, 100-km square identifier, and easting/northing. MGRS precision is variable: more trailing digits = finer precision. 33T alone is a 6° × 8° grid zone. 33TWN is a 100 km × 100 km square. 33TWN8362 is 1 km. 33TWN83456213 is 10 m. 33TWN8345621324 is 1 m. Why this matters for civilian work: you may receive coordinates in MGRS from operational partners or open-source intelligence (OSINT) sources. Know how to convert. mgrs on PyPI handles it cleanly. ITAR and export control The International Traffic in Arms Regulations (ITAR) is a set of US regulations restricting the export of defense-related articles and services. Some satellite imagery is on the US Munitions List, and some isn't. The lines: Imagery resolution: Per the Kyl-Bingaman Amendment, US-licensed commercial imaging companies were long limited to 0.5 m resolution over Israel (relaxed to 0.4 m in 2020). Other US restrictions are largely lifted; non-US providers have their own rules.
Specific intelligence products: Military reconnaissance imagery and derived products remain controlled. Software: Image-processing software with specific defense capabilities (e.g. automated target recognition for weapons systems) is ITAR-controlled. Information: "Technical data" related to defense articles — including software documentation in some cases — is controlled even when the underlying data is public. For a civilian space-GIS product (LaunchDetect-style), the practical guidance is: Process only publicly available raw imagery (NOAA GOES, ESA Sentinel-2, etc.) — these are pre-cleared. Don't derive products specifically designed for military targeting (predicting individual rocket-stage debris landing locations, for example). Consult export-control counsel before any government / defense partnership. Privacy and sub-meter Sub-meter commercial satellite imagery (WorldView-3 at 0.3 m, BlackSky at 1 m, Planet SkySat at 0.5 m) raises privacy concerns that ~10 m imagery (Sentinel-2) does not: Individuals and vehicles are identifiable. Activities (gatherings, construction, military maneuvers) become observable. Patterns of life — when does this farm work? when does this base have shift change? — emerge from time-series. Practitioner responsibilities: Don't publish sub-meter imagery of identifiable private individuals or properties without consent. EU GDPR and US state-level privacy laws (CCPA, BIPA) reach into this space. Beware aggregation: each sub-meter image is fine; a time-series of sub-meter images over one address is surveillance. Implement appropriate access controls when the imagery is sensitive (humanitarian work in active conflict zones, for example). Dual-use awareness Every capability in this course is dual-use. SGP4 propagation (Week 8) helps amateur astronomers spot the ISS and helps an adversary plan a satellite-blinding attack. Plume detection (Week 14) helps journalists confirm launches and gives competitors timing intelligence on rivals.
Real-time SAR change detection helps disaster response and helps military targeting. The practical position: build openly, but think clearly. When a capability has clear malicious applications and limited civilian value, reconsider building it. When it has broad civilian value and constrained malicious uplift, build it openly. When in doubt, talk to legal — they don't bite, and they will tell you what you can ship. The lab You'll produce a written ITAR compliance assessment for a hypothetical orbital-tracking SaaS product. Cite the controlling statutes. Identify the in-scope and exempt features. Document the controls (data minimization, access restrictions, geographic restrictions) you'd implement. This is the kind of memo a real product team produces before launch — and the kind of thinking the responsible space-GIS engineer practices proactively. Reflection question (closing): You can build a thing. Should you build it? Who is the thing for? Who's it not for? When 'science would benefit' isn't the only criterion that matters, what's a better question to ask? Quiz: Q1. MGRS is: A. A military coordinate system using a global grid with kilometer to meter precision * B. A civilian-only system C. A datum D. An ellipsoid Q2. ITAR covers: A. US-export-controlled defense-related articles and services, including some satellite imagery * B. All satellite imagery C. Nothing relevant to GIS D. Only nuclear material Q3. Sub-meter imagery raises privacy concerns because: A. Individuals, vehicles, and activities become identifiable * B. It's expensive C. It's slow D. Resolution is irrelevant Q4. Dual-use technology means: A. Can be used for civilian and military purposes * B. Used by two people C. Two regions only D. Marketing term Q5. A good GIS engineer should: A. Surface ethical/legal concerns to product/legal teams proactively * B. Ignore them C. Hide them D. 
Defer entirely --- ### Week 29: Geospatial APIs: PostGIS + FastAPI + spatial REST Track: Space GIS Architect URL: https://launchdetect.com/academy/week/29/ Summary: Most geospatial work ends with: someone wants a JSON API. This week you build it. Objectives: - Build a FastAPI app that wraps a PostGIS database - Design REST endpoints for spatial queries - Return GeoJSON FeatureCollections - Add OpenAPI documentation Opening question (place-based hook): If you built a tool that detects red-tide blooms along Hawaiian coastlines, how would you let other people use your tool? Through an API. This week you'll learn the patterns — same patterns the State Climate Data Portal uses, same patterns NOAA uses. By the end, your work can serve other people's work. Connecting to Hawaiʻi — Hawaiian data APIs and reciprocity: The Pacific Islands Ocean Observing System (PacIOOS) at the University of Hawaiʻi publishes real-time Pacific oceanographic data via APIs — wave height, sea surface temperature, currents, ocean acidification — all free, all open, all available to any developer. The Department of Land and Natural Resources has been steadily moving its datasets into accessible APIs. When you build an API for your own geospatial work, you're joining this network. Knowledge flows both ways: you use their data, your tools can give back. Hint: PacIOOS at pacioos.hawaii.edu has dozens of free APIs. Try one. Then build something that uses it. Lab: Spatial REST API for launch detections — Build a FastAPI app with bbox / radius / id endpoints over a PostGIS detections table. Returns GeoJSON. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-29/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-29/lab.ipynb Primer: The endpoint where space GIS meets the rest of the world is almost always a REST API. 
Someone downstream — another team, a paying customer, a frontend app — wants geospatial data over HTTP, with predictable URLs, ergonomic parameters, and JSON responses. This week you build that endpoint properly.

The shape of a spatial REST endpoint

The four query patterns that show up everywhere:
- By ID — single feature by its identifier.
- By bounding box — features inside a lon_min, lat_min, lon_max, lat_max box. Useful for map tiles.
- By point and radius — features within radius_km of a lat, lon. Useful for near-me queries.
- By polygon — features inside an arbitrary GeoJSON polygon. POST it as the request body.

Common parameters across all of them: time range, pagination, attribute filtering, sorting.

FastAPI plus PostGIS

The minimal stack:

```python
import json

from fastapi import FastAPI, HTTPException
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://...")
app = FastAPI(title="LaunchDetect API")

@app.get("/detections")
def detections(bbox: str | None = None, limit: int = 50):
    params = {"lim": limit}
    where = ""
    if bbox:
        try:
            lon_min, lat_min, lon_max, lat_max = map(float, bbox.split(","))
        except ValueError:
            raise HTTPException(422, "bbox must be 'lon_min,lat_min,lon_max,lat_max'")
        where = "WHERE position && ST_MakeEnvelope(:l1, :l2, :l3, :l4, 4326) "
        params.update(l1=lon_min, l2=lat_min, l3=lon_max, l4=lat_max)
    sql = text(
        "SELECT id, ST_AsGeoJSON(position) AS geom FROM detections "
        + where
        + "ORDER BY detected_at DESC LIMIT :lim"
    )
    with engine.begin() as conn:
        rows = conn.execute(sql, params).all()
    # Wrap the rows as a GeoJSON FeatureCollection
    return {
        "type": "FeatureCollection",
        "features": [
            {"type": "Feature", "id": row.id,
             "geometry": json.loads(row.geom), "properties": {}}
            for row in rows
        ],
    }
```

OpenAPI for free

FastAPI auto-generates an OpenAPI specification from your function signatures and Pydantic models. Visit /docs for interactive Swagger UI, /redoc for ReDoc. Other teams can generate client SDKs in any language from the OpenAPI spec.

Pagination

Spatial endpoints often return many features. Two approaches:
- Offset pagination — easy to implement but degrades with large offsets and breaks if features are inserted or deleted between pages.
- Cursor pagination — a cursor parameter encodes the last item seen. Stable under writes, scales to any depth. Use the Link header per RFC 8288 (the successor to RFC 5988).
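A cursor can be a tiny opaque token. Here is a minimal sketch; the `id` and `detected_at` payload fields are assumptions matching a detections table sorted by time, and the helper names are illustrative:

```python
import base64
import json

def encode_cursor(last_id: int, last_detected_at: str) -> str:
    # Opaque, URL-safe token identifying the last row the client saw.
    # The next page's SQL would add:
    #   WHERE (detected_at, id) < (:t, :id) ORDER BY detected_at DESC, id DESC
    # which stays stable under concurrent inserts.
    payload = {"id": last_id, "detected_at": last_detected_at}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def decode_cursor(cursor: str) -> dict:
    # Inverse of encode_cursor; raises on malformed input.
    return json.loads(base64.urlsafe_b64decode(cursor.encode()))
```

Clients treat the cursor as opaque and echo it back via a `cursor` query parameter or the `Link: <...>; rel="next"` response header.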
For high-traffic endpoints, cursor is the right default. Caching Spatial responses cache well. Set Cache-Control: public, max-age=60 on bbox queries. Use a CDN in front. For Lambda-fronted APIs, this can drop your origin load by 95 percent or more. The lab You will build a FastAPI app that wraps a PostGIS detections table and exposes the four endpoint patterns above. Returns GeoJSON. Documented via auto-generated OpenAPI at /docs. This is the architecture of launchdetect.com/space-data-api/ in production. Reflection question (closing): An API turns your work into a building block other people can use. What's something you'd want to expose as an API — and what's something you'd hold back? Quiz: Q1. FastAPI generates what for free? A. OpenAPI spec and interactive Swagger UI at /docs * B. Database migration C. Frontend D. Hosting Q2. A bbox query in PostGIS uses: A. ST_MakeEnvelope to build the bbox geometry, then ST_Intersects or && operator * B. Just SQL LIKE C. JSON parsing D. Random sampling Q3. GeoJSON FeatureCollection structure is: A. A FeatureCollection containing an array of Feature objects * B. Just a list of coordinates C. A binary blob D. XML Q4. Pagination for spatial endpoints typically uses: A. Cursor-based with cursor parameter or HTTP Link header * B. Offset only C. No pagination D. Random Q5. Returning EWKB vs GeoJSON tradeoff: A. GeoJSON is human-readable and web-friendly; EWKB is more compact * B. EWKB is always better C. Same thing D. Neither matters --- ### Week 30: Capstone defense and synthesis (Capstone 5 week) Track: Space GIS Architect URL: https://launchdetect.com/academy/week/30/ Summary: Track 5 culminates here: a complete end-to-end pipeline from raw GOES NetCDF to served REST endpoint, visualized on a Cesium globe. This is the bar for the highest LaunchDetect Academy credential. 
Objectives: - Synthesize learnings across all 30 weeks - Present a working production-grade pipeline - Defend design choices - Identify the next problem in space GIS Opening question (place-based hook): You've come 29 weeks. You can build a satellite-imagery pipeline that detects events on Earth in real-time. What will you point it at? Capstone 5 is the synthesis. Everything you've learned, in one pipeline. The technical work is teachable. The harder question is the one you've been carrying: what would you build, and who would it be for? Connecting to Hawaiʻi — What you carry forward: You started Week 1 with a place-based question: how do your kupuna give directions? You've finished with the tools to detect rocket plumes from geostationary thermal imagery. Both are forms of knowing where you are. Both came from people who knew the sky. The course was built by LaunchDetect — a production space-GIS company — but the curriculum was always for you. Whatever you build next, build it with the kuleana (responsibility) the place you come from has given you. That's not extra; that's the work. Hint: If you want to keep going: PacIOOS, Pacific Disaster Center, OHA's GIS program, NOAA Coral Reef Watch — all hire interns and entry-level GIS staff. Your capstone is your portfolio. Show them. Lab: End-to-End Detection Pipeline (capstone start) — Ingest 10 frames of GOES-18 Band 7 from S3. Georeference. Threshold-detect hotspots. Cluster across frames. Score. Persist to PostGIS. Expose /detections REST endpoint. Render on Cesium globe. Public GitHub repo + 5-min video. This is the deliverable for Capstone 5. Lab notebook: https://github.com/ops-sketch/academy-labs/blob/main/week-30/lab.ipynb Run in Colab: https://colab.research.google.com/github/ops-sketch/academy-labs/blob/main/week-30/lab.ipynb Primer: Thirty weeks. Five tracks. 
You've moved from "what is a coordinate system" to building a production AWS pipeline that ingests real geostationary thermal imagery and serves geocoded detections over a REST API. This week is the synthesis: stitch everything together into one end-to-end deliverable that demonstrates expert-level competence in space GIS. What you've learned Track 1 (Weeks 1–4): GIS foundations — coordinate systems, vector vs raster, QGIS, mapping global launch sites. Spatial literacy. Track 2 (Weeks 5–10): Spatial analysis + orbital mechanics — PostGIS, SGP4, ground tracks, ground-station visibility, spaceport-to-orbit matching. Geometric reasoning. Track 3 (Weeks 11–15): Remote sensing — EM spectrum, Landsat / Sentinel-2 / GOES-R, thermal IR, plume detection, parallax correction. Sensor physics. Track 4 (Weeks 16–20): Web + real-time — Leaflet / MapLibre / OpenLayers, vector tiles, CesiumJS, WebSockets, change detection. Delivery. Track 5 (Weeks 21–29): Production + expert — multi-sensor fusion, ML for raster, SAR, geodesy, AR, cloud-native formats, AWS pipelines, ethics, geospatial APIs. Production scale + responsibility. The capstone deliverable Capstone 5: End-to-End Detection Pipeline. A complete production-style pipeline running every layer of the course: Ingest — 10 frames of real GOES-18 ABI Band 7 NetCDF from the NOAA AWS Open Data bucket, spanning a known launch event. Georeference — convert fixed-grid scan angles to lat/lon (Week 15) with parallax correction applied. Detect — convert radiance to brightness temperature, threshold-detect hotspots (Week 14), apply morphological cleaning (Week 20). Cluster — group hotspot pixels across consecutive frames into plume tracks. A real plume appears in 3–5 consecutive frames; isolated single-frame hotspots are noise. Score — apply a simple confidence heuristic: spatial coincidence with a known spaceport (within 50 km), temporal pattern matching (the track rises then falls), brightness profile. 
Persist — write final detections to PostGIS with proper GIST indexes (Week 6). Serve — expose /detections REST endpoints via FastAPI (Week 29) with OpenAPI docs. Visualize — render detections on a Cesium globe (Week 18) loaded directly from the REST API. The deliverables Public GitHub repo — clear README, setup instructions, license, working code. Anyone with Docker should be able to docker compose up and see it run. 5-minute video — walk through the architecture: what each component does, why you made the design choices you did, what would change at 100× scale. Detection log on a real launch — sample output JSON showing your pipeline correctly identified one known launch event. Why 5 minutes The video constraint is deliberate. Five minutes is enough to explain the architecture and the key design decisions; it's not enough to dwell on every detail. The skill is communication under constraint — a skill every senior engineer needs. What comes next You've completed the LaunchDetect Academy. What's the next problem in space GIS? The honest answer: many. A non-exhaustive list: Orbital traffic management — as the LEO catalog grows past 100,000 objects, conjunction analysis at scale is unsolved. Climate monitoring from GEO — using GOES-R for sub-daily climate variables, not just weather. Autonomous Earth observation — onboard ML deciding what to image, when, with what bands. Open data infrastructure — keeping STAC catalogs and COG/Zarr archives sustainable as data volumes 10×. SAR for climate — InSAR-derived deformation as a climate-change indicator (subsidence, sea-level rise impacts). Mars and lunar GIS — coordinate systems, datums, basemap layers for off-Earth bodies. The instinct you've built — "this is a spatial problem; I know how to set it up rigorously, run it, serve it, verify it" — is the most portable thing in the curriculum. Apply it everywhere. The capstone Week 30 is the start of Capstone 5: End-to-End Detection Pipeline, the final credential. 
The full rubric is on the capstone page; finishing it earns the Certified Space GIS Architect credential — the highest LaunchDetect Academy designation, and a real signal to peers and employers that you can build production-grade space-domain geospatial systems end-to-end. Ship it. Tag us when you do. Reflection question (closing): A Space GIS Architect can build production systems. What system, if built, would your community truly benefit from? Who needs it? Who has been overlooked? What's one thing you'd build first? Quiz: Q1. An end-to-end production pipeline includes: A. Ingest, processing, persistence, serving, visualization, monitoring * B. Just processing C. Just visualization D. Just storage Q2. Why a 5-minute video deliverable? A. Forces concise explanation of design decisions and architecture * B. Required by spec C. Hard to make D. Easier than writing Q3. Monitoring a geospatial pipeline includes: A. Throughput, error rates, latency, data quality (false positive rate, etc.) * B. Just uptime C. Just disk space D. Nothing Q4. Public GitHub repo enables: A. Independent verification + portfolio + reproducibility * B. Marketing only C. Required by law D. Nothing Q5. The next problem in space GIS is likely: A. Open-ended — emerging areas include orbit congestion, climate monitoring, autonomous decisions * B. All solved C. Only debris D. Only Mars --- ## Capstone Projects ### Capstone 1: Global Launch Site Atlas Track: Ground Station Operator (earns Certified Ground Station Operator) URL: https://launchdetect.com/academy/capstone/1/ Tagline: Map every active orbital launch pad on Earth. Build a GeoJSON FeatureCollection of all currently-active orbital launch pads worldwide (~20 sites). Each feature has properties: name, country (ISO 3166), operator, status (active/proposed/retired), vehicles flown, first orbital launch year, latest orbital launch year, latitude, longitude. Style the layer in QGIS by operator and country. 
Compose a publication-quality print layout: title, legend, scale bar, north arrow, attribution. Export the final map as a PDF at A2 size. Rubric: - GeoJSON contains ≥18 active orbital launch pads with all required properties populated - Property values are correct and citable (UN Office for Outer Space Affairs / launch-provider primary sources) - Coordinate precision is at least 4 decimal places for latitude/longitude - QGIS layout includes title, legend, scale bar, north arrow, data-source attribution - PDF is publication-quality (300 dpi, no overlapping labels) Deliverable: launch-sites.geojson + atlas.pdf, both committed to your personal GitHub fork of academy-labs Dataset: https://github.com/ops-sketch/academy-labs/tree/main/capstones/01-launch-site-atlas --- ### Capstone 2: Ground-Track Coverage Tool Track: Orbital Analyst (earns Certified Orbital Analyst) URL: https://launchdetect.com/academy/capstone/2/ Tagline: Given any TLE, produce ground track + coverage + country-overflight table. Build a Python tool that, given any TLE as input, outputs (1) the 24-hour ground track as a GeoJSON LineString with timestamped vertices, (2) a coverage polygon assuming a 1000-km swath sensor, (3) a country-overflight table listing each country overflown with total dwell time in seconds. The country mapping must use the Natural Earth admin-0 boundary dataset (provided). Visualize the ground track on a Folium map, color-coded by altitude. 
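One part of this capstone that routinely trips people up is the antimeridian: naïvely connecting consecutive ground-track points draws a straight line across the whole map. Below is a minimal sketch of the split, assuming you already have timestamped (lat, lon) samples from your SGP4 propagation; the `ground_track_geojson` helper and the sample format are illustrative, not part of the capstone spec:

```python
def ground_track_geojson(samples):
    # samples: [(iso_timestamp, lat_deg, lon_deg), ...] in time order,
    # e.g. from a skyfield/SGP4 propagation. The track is split wherever
    # consecutive longitudes jump by more than 180 degrees (an antimeridian
    # crossing), so renderers don't draw a shortcut across the map.
    parts, current, times = [], [], []
    prev_lon = None
    for t, lat, lon in samples:
        if prev_lon is not None and abs(lon - prev_lon) > 180:
            parts.append(current)
            current = []
        current.append([lon, lat])  # GeoJSON order is [lon, lat]
        times.append(t)
        prev_lon = lon
    if current:
        parts.append(current)
    return {
        "type": "Feature",
        "geometry": {"type": "MultiLineString", "coordinates": parts},
        "properties": {"times": times},
    }
```

A split track is a MultiLineString rather than a single LineString; keep the timestamps in `properties` so the per-vertex timing survives the split.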
Rubric: - Tool runs from CLI: `python coverage_tool.py --tle path/to/tle.txt` - Ground track GeoJSON is correctly geodesic (no straight-line shortcuts across the pole) - Country dwell table is consistent with the ground track (per-country dwell ≤ total propagation time) - Coverage polygon area is consistent with 1000-km swath × ground-track length - Folium HTML map renders correctly with altitude-colored ground track Deliverable: Python project (CLI + library) + sample outputs for ISS, Hubble, and a Starlink satellite Dataset: https://github.com/ops-sketch/academy-labs/tree/main/capstones/02-coverage-tool --- ### Capstone 3: Thermal Plume Detector Track: Remote Sensing Specialist (earns Certified Remote Sensing Specialist) URL: https://launchdetect.com/academy/capstone/3/ Tagline: Detect a real rocket plume from a real NOAA GOES NetCDF. Build a Python tool that, given a GOES-18 ABI Band 7 NetCDF file and a known launch event (date, location, vehicle), outputs detection records `(timestamp_UTC, lat, lon, brightness_temp_K, area_km²)` for each detected plume pixel cluster. Apply parallax correction. Apply false-positive filtering (mask known wildfires from the FIRMS dataset). Produce a Folium heatmap visualization showing detected hotspots overlaid on a basemap, with hover-popups showing the (t, T_b) for each detection. 
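The radiance-to-temperature step at the heart of this capstone can be sketched with the inverse-Planck form used for ABI L1b data, T = (fk2 / ln(fk1/L + 1) − bc1) / bc2, where the four Planck coefficients ship inside each NetCDF file. A minimal sketch follows; the coefficient values in the test are illustrative placeholders, not real Band 7 constants:

```python
import math

def brightness_temp_K(radiance: float, fk1: float, fk2: float,
                      bc1: float, bc2: float) -> float:
    # GOES-R L1b radiance -> brightness temperature (inverse-Planck form).
    # fk1, fk2, bc1, bc2 come from the NetCDF variables planck_fk1,
    # planck_fk2, planck_bc1, planck_bc2 shipped with each file.
    return (fk2 / math.log(fk1 / radiance + 1.0) - bc1) / bc2

def hot_pixels(radiances, coeffs, threshold_K: float = 320.0):
    # Indices of pixels whose brightness temperature exceeds the plume
    # threshold (the rubric's "> 320 K for plume" criterion).
    return [i for i, L in enumerate(radiances)
            if brightness_temp_K(L, *coeffs) > threshold_K]
```

In a real run you would vectorize this with numpy over the full radiance array and read the coefficients from the file rather than hard-coding them.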
Rubric: - Tool runs on the provided sample NetCDF and produces ≥1 valid plume detection record for the known launch - Detection timestamps are within 30 seconds of ignition time published by the launch operator - Coordinates are within 5 km of the known launch pad (after parallax correction) - Brightness temperatures are physically plausible (> 320 K for plume, > 290 K for background) - False-positive filtering rejects ≥90% of FIRMS-overlapping pixels in the same scene - Folium heatmap is interactive and correctly rendered Deliverable: Python tool + Jupyter notebook walkthrough + Folium HTML output for the test event Dataset: https://github.com/ops-sketch/academy-labs/tree/main/capstones/03-plume-detector --- ### Capstone 4: Real-Time Satellite Tracker Track: Mission GIS Engineer (earns Certified Mission GIS Engineer) URL: https://launchdetect.com/academy/capstone/4/ Tagline: Cesium globe + live ISS/Starlink + visible passes + 24h replay. Build a Cesium-based web application that shows real-time positions of the ISS and a configurable Starlink shell on a 3D globe. The user supplies a latitude/longitude; the app shows the next 5 visible passes (elevation > 30°) at that location. Click any satellite to inspect orbital elements (TLE epoch, inclination, mean motion, period). A time slider lets the user replay the last 24 hours of positions. Rubric: - Cesium globe loads and renders without console errors - ISS position updates at ≥1 Hz with no visible stuttering - Visible-pass calculation matches a reference (e.g. 
n2yo.com) within 30 seconds - Click-to-inspect popup shows all required orbital elements correctly - Time-slider replay covers 24 hours and animates smoothly - Code is deployable as a static site (no backend required) Deliverable: Static web app (HTML+JS+CSS) deployable to GitHub Pages or S3, plus a 2-minute demo video Dataset: https://github.com/ops-sketch/academy-labs/tree/main/capstones/04-realtime-tracker --- ### Capstone 5: End-to-End Detection Pipeline Track: Space GIS Architect (earns Certified Space GIS Architect) URL: https://launchdetect.com/academy/capstone/5/ Tagline: Raw NetCDF → georeferenced detections → PostGIS → REST → 3D globe. Build a complete production-grade space-GIS pipeline: ingest 10 frames of GOES-18 ABI Band 7 NetCDF from the NOAA AWS Open Data bucket; georeference each frame (parallax-corrected); threshold-detect hotspots; cluster hotspot pixels across consecutive frames into plume tracks; score each track for confidence (geometric coherence, brightness consistency, motion); persist results to PostGIS with proper indexes; expose a FastAPI /detections REST endpoint with bbox, time-range, and confidence filters; render results live on a Cesium globe served from the same FastAPI app. Deliverable is a public GitHub repository + a 5-minute video walking through the pipeline. 
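The persistence and spaceport-proximity parts of the confidence scoring fit in a few lines of plain Python. This is a minimal sketch: the 50/50 weighting and the `score_track` helper are illustrative assumptions, not LaunchDetect's production heuristic:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def score_track(track, spaceports, min_frames=3, max_pad_km=50.0):
    # track: [(frame_index, lat, lon), ...] hotspot centroids in frame order.
    # Two criteria from the pipeline description: persistence across
    # consecutive frames, and spatial coincidence with a known spaceport.
    frames = [f for f, _, _ in track]
    persistent = (len(frames) >= min_frames
                  and all(b - a == 1 for a, b in zip(frames, frames[1:])))
    lat, lon = track[0][1], track[0][2]
    near_pad = any(haversine_km(lat, lon, p_lat, p_lon) <= max_pad_km
                   for p_lat, p_lon in spaceports)
    return 0.5 * persistent + 0.5 * near_pad  # crude 0..1 confidence
```

A production scorer would also weigh the brightness profile (rise then fall) and track motion, per the scoring step above.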
Rubric: - Pipeline runs end-to-end on the provided 10-frame test dataset - Detections match the known launch ground truth within 5 km and 30 seconds - False-positive rate < 10% on the test dataset - PostGIS schema includes appropriate GIST indexes for spatial and time queries - REST API is documented via OpenAPI, all endpoints respond within 200 ms - Cesium globe renders detections with hover-popups - GitHub repo is public with clear README, setup instructions, license - 5-minute video clearly explains architecture decisions Deliverable: Public GitHub repo (Python + JavaScript) + 5-minute video on YouTube/Vimeo + signed-off README Dataset: https://github.com/ops-sketch/academy-labs/tree/main/capstones/05-end-to-end-pipeline --- ## Glossary **WGS84**: World Geodetic System 1984. The geodetic datum used by GPS and most satellite-derived coordinates. EPSG:4326 is its geographic (lat/lon) form. **EPSG**: European Petroleum Survey Group code — a registry of coordinate reference system identifiers. EPSG:4326 = WGS84 lat/lon. EPSG:3857 = Web Mercator. **Ellipsoid**: A geometric approximation of Earth's shape: bulged at the equator, flattened at the poles. WGS84 ellipsoid: equatorial radius 6378137 m, flattening 1/298.257223563. **Geoid**: The equipotential surface of Earth's gravity field that best matches mean sea level. Differs from the ellipsoid by up to plus or minus 100 m globally. **EGM2008**: Earth Gravitational Model 2008. The standard global geoid model published by NGA, accurate to about 15 cm. **Mercator (Web Mercator)**: EPSG:3857. The conformal projection used by Google Maps, Mapbox, and most slippy web maps. Distorts area near the poles. **UTM**: Universal Transverse Mercator. Earth divided into 60 6-degree-wide zones. Conformal and nearly equidistant within a zone. Cape Canaveral is UTM Zone 17N (EPSG:32617). **Equirectangular**: Plate carree projection where latitude and longitude map directly to y and x. Cheap, common for global imagery. 
Distorts near the poles. **MGRS**: Military Grid Reference System. NATO-standard coordinate notation using a global grid with variable precision (km to m). **Vector data**: GIS data representing the world as discrete geometric objects (points, lines, polygons) with attributes. GeoJSON is the most common format. **Raster data**: GIS data as a grid of cells, each holding a value. Satellite imagery is raster. GeoTIFF is the standard format. **GeoJSON**: An open standard format for encoding geographic features in JSON. Used universally on the web. **GeoTIFF**: A TIFF image with embedded georeferencing metadata. The standard raster format in GIS. **COG**: Cloud-Optimized GeoTIFF. A regular GeoTIFF organized so HTTP-range-request access fetches just the needed bytes without downloading the whole file. **Zarr**: A format for chunked, compressed, multi-dimensional arrays. Standard for cloud-native gridded data (climate reanalysis, time series). **STAC**: SpatioTemporal Asset Catalog. A spec for cataloging geospatial assets with searchable APIs. Major STAC catalogs: Microsoft Planetary Computer, AWS Earth Search. **PostGIS**: The spatial extension to PostgreSQL. Adds geometry and geography types, hundreds of ST_ functions, GIST spatial indexes. **GIST index**: A PostgreSQL index type using an R-tree for spatial queries. Essential for performance on PostGIS geometry columns. **Spatial join**: An operation that attaches attributes from one layer to another based on a spatial relationship (within, intersects, etc.) rather than a key match. **TLE**: Two-Line Element set. NORAD's plain-text format encoding a satellite's Keplerian orbital elements plus drag and perturbation terms in 69 characters per line. Distributed by Space-Track.org and CelesTrak. **SGP4**: Simplified General Perturbations 4. The standard orbital propagation algorithm for use with TLEs. Accurate to about 1 km for a fresh TLE.
**Keplerian elements**: The six orbital parameters that uniquely describe an orbit shape and orientation: semi-major axis, eccentricity, inclination, RAAN, argument of periapsis, true anomaly. **Ground track**: The path on Earth's surface traced by the sub-satellite point of a satellite over time. **Sub-satellite point**: The point on Earth's surface directly below a satellite. **LEO**: Low Earth Orbit, under 2000 km. ISS, Starlink, Hubble, Landsat, Sentinel-2 all operate here. **MEO**: Medium Earth Orbit, 2000 to 35786 km. GPS, GLONASS, Galileo operate here. **GEO**: Geostationary Orbit, 35786 km altitude over the equator. GOES, Himawari, most communications satellites. **Sun-synchronous orbit**: A nearly polar orbit at about 98-degree inclination where the satellite passes the equator at the same local solar time each orbit. Used for Earth observation. **ABI**: Advanced Baseline Imager. The primary instrument on GOES-R series satellites. 16 spectral bands from visible to longwave IR. **GOES-R**: NOAA's geostationary weather satellite series. GOES-18 (West, 137.2W) and GOES-19 (East, 75.2W) are the operational satellites. **Himawari-9**: JMA's geostationary weather satellite at 140.7E. Covers East Asia and the western Pacific. **Band 7**: GOES-R ABI Band 7 at 3.9 micrometers — the mid-wave infrared band used for thermal hotspot detection, including rocket plumes and wildfires. **Brightness temperature**: Temperature of a perfect black body that would emit the observed radiance. Computed via the inverse Planck function. Standard unit for thermal IR analysis. **NDVI**: Normalized Difference Vegetation Index, computed as (NIR - Red) / (NIR + Red). High for healthy vegetation, low for bare soil. **SAR**: Synthetic Aperture Radar. Active microwave imaging that sees through clouds and works day or night. Sentinel-1 is the workhorse civilian C-band SAR. **InSAR**: Interferometric SAR. 
Phase-difference analysis between two SAR acquisitions, capable of measuring ground deformation to the millimeter. **Parallax in remote sensing**: Apparent shift of a high-altitude feature (e.g. rocket plume at 50 km) as seen from a satellite, compared to its true ground position. Must be corrected for accurate geolocation. **CesiumJS**: The open-source 3D globe library. Industry standard for serious web-based 3D GIS. **MapLibre GL JS**: The community fork of Mapbox GL JS. WebGL-based vector tile renderer for web maps. **Vector tiles**: Pre-indexed pyramid of small geographic data tiles served as Protocol Buffers. Smaller, smoother, and more flexible than raster tiles. **PMTiles**: A single-file alternative to MBTiles that can be served directly from S3 via HTTP range requests — no tile server needed. **ITAR**: International Traffic in Arms Regulations. US law controlling export of defense-related articles and services, including some satellite imagery. **NOTAM**: Notice to Air Missions (FAA). Pre-launch safety advisory defining airspace exclusion for launches. **FIRMS**: Fire Information for Resource Management System. NASA's near-real-time wildfire hotspot data, used to filter out fire false-positives in plume detection. **AIS**: Automatic Identification System. Maritime vessel tracking transmitted on VHF and aggregated globally. **ADS-B**: Automatic Dependent Surveillance-Broadcast. Aircraft transponder data broadcasting position and identity. **ʻĀina**: Hawaiian for land — more specifically, that which feeds and sustains. In Hawaiian thought, place is not a passive backdrop but a living relative. ʻĀina-based mapping (used in Hawaiʻi by community organizations) puts the well-being of the land at the center of the question, not just at the legend. **Ahupuaʻa**: Traditional Hawaiian land division running from mauka (mountain) to makai (sea), encompassing a complete watershed and the social/ecological unit organized around it. 
Used as a planning unit in modern Hawaiʻi for restoration, fisheries, and water management. **Kuleana**: Hawaiian for responsibility — particularly the responsibility one has by virtue of relationship to a place, a people, or a tradition. Often invoked in conversations about how to use powerful tools (satellite imagery, GIS, data) ethically. **Wayfinding**: The Pacific tradition of long-distance ocean navigation using stars, swells, bird flight, and other natural signs — preserved and revived through the Polynesian Voyaging Society and the canoe Hōkūleʻa. A complete coordinate system that operated for millennia without instruments. **Hōkūleʻa**: The Polynesian Voyaging Society's double-hulled voyaging canoe, sailed since 1976 to reawaken traditional Pacific navigation. Has completed multi-year worldwide voyages using traditional wayfinding alongside modern safety systems. **Mauka / Makai**: Hawaiian directional pair meaning 'toward the mountain' and 'toward the sea.' A coordinate system rooted in place rather than abstract compass directions — Hawaiian addresses and conversations often use this pair instead of N/S/E/W. **Mauna Kea**: The 4,205-meter summit on the island of Hawaiʻi sacred to Native Hawaiian tradition; also home to several major astronomical observatories. The site of important ongoing conversations about science, sacredness, and stewardship. **Kīlauea**: Active shield volcano on the island of Hawaiʻi, monitored intensively by the USGS Hawaiian Volcano Observatory. The 2018 lower East Rift Zone eruption was tracked extensively via satellite thermal IR (the same Band 7 LaunchDetect uses) and InSAR. **Indigenous data sovereignty**: The principle that data about Indigenous peoples and places belongs to those peoples — not to whoever collected it. A growing field shaping geospatial work in Hawaiʻi, Aotearoa, and across Native communities; informs decisions about what to publish openly and what to keep within community. 
--- ## FAQ ### What is LaunchDetect Academy? LaunchDetect Academy is a free 30-week online course in space-domain geographic information systems (GIS). The curriculum takes a learner from no GIS background to expert-level production-grade space GIS over 5 certification tracks, each anchored by a hands-on capstone using real geostationary thermal satellite imagery, real two-line element sets (TLEs), and real spaceport data. ### How much does the course cost? The entire 30-week curriculum is free and publicly available. Each week's primer, quiz, and hands-on lab notebook is open and downloadable. Certificate issuance (verifiable credential URLs at launchdetect.com/academy/verify/{certId}/) is gated to the LaunchDetect Gold tier at $9.99/month. ### What background do I need to start? No GIS background. The first track (Ground Station Operator, weeks 1–4) assumes only that you are comfortable with basic computing tasks (opening files, installing software). Some Python helps but is not required for Track 1. By Track 2 you should be comfortable with basic Python, and by Track 4 with basic web development (HTML, JavaScript). Each track lists its prerequisites on the cert track page. ### How are the labs delivered? Each week has a downloadable Jupyter notebook on GitHub (github.com/ops-sketch/academy-labs) plus a one-click 'Open in Colab' button that runs the notebook in Google Colab with the data pre-loaded. You can also clone the repo and run locally with Python 3.11+. ### How long does the full course take? Self-paced. The course is structured as 30 weeks, with each week's content (primer + quiz + lab) taking roughly 3–6 hours. The capstones at weeks 4, 10, 15, 20, and 30 take longer (8–20 hours each depending on the track). ### What are the 5 certifications? Ground Station Operator (after week 4 capstone), Orbital Analyst (after week 10), Remote Sensing Specialist (after week 15), Mission GIS Engineer (after week 20), and Space GIS Architect (after week 30).
Each is a separate verifiable certificate. ### Why is this course space-themed? LaunchDetect is a real production space-GIS platform that detects rocket launches from geostationary thermal imagery (NOAA GOES-18, GOES-19, JMA Himawari-9). The course is the educational counterpart: every concept is grounded in a real space-domain application, every dataset is real satellite data, and every capstone produces something deployable. This makes the GIS skills directly applicable to one of the fastest-growing applications of geospatial work. ### Do I get a certificate that employers recognize? The certificate has a public verifiable URL at launchdetect.com/academy/verify/{certId}/, so anyone (recruiters, employers) can independently verify the credential. The course is developed by LaunchDetect (a real production space-GIS company) and aligns with industry-standard tools (QGIS, PostGIS, CesiumJS, Python, AWS). ### What if I'm already a GIS professional? Skip ahead. Each track page lists prerequisites and outcomes — start at the track where the outcomes are new to you. Track 3 (Remote Sensing Specialist) and Track 5 (Space GIS Architect) cover specialized domains (satellite imagery, multi-sensor fusion, ML for raster, SAR, cloud-native formats) that few traditional GIS programs cover. ### Can I contribute or correct content? Yes. The academy-labs repository (github.com/ops-sketch/academy-labs) accepts pull requests for lab corrections, additional exercises, and translations. Curriculum-level changes can be requested via GitHub issues. --- All content (c) 2026 LaunchDetect. Curriculum is freely usable for self-study. Verifiable certificates are issued via https://launchdetect.com/academy/verify/ to subscribers of LaunchDetect Gold ($9.99/month). For corrections, pull requests welcome at https://github.com/ops-sketch/academy-labs.