Methodology
How the engine computes and renders an analemma overlay
1. Solar Position Computation
The engine uses Astropy's get_sun() function, backed by JPL's DE440 development ephemeris, to query the sun's geocentric ICRS coordinates on each day of the year at a given clock time. It then transforms those into local horizon coordinates for the observer.
The two quantities that matter most are solar declination (which oscillates roughly +/-23.44 degrees over the year due to axial tilt, setting the analemma's vertical extent) and the Equation of Time, the offset between mean and apparent solar time. The EoT can swing from about -16.4 to +14.3 minutes across the year, pushing the sun east or west of its expected position and creating the figure-eight's horizontal width.
To compute the EoT, we take the difference between a mean solar longitude and the true right ascension from the ephemeris:
Here n is days since J2000.0, L0 is the mean solar longitude in degrees, and RA_sun is the apparent right ascension in hours. The EoT in minutes is (L0/15 - RA_sun) * 60, normalized to +/-720 minutes to handle the RA wraparound.
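The normalization step is easy to get wrong, so here is a minimal sketch of the arithmetic. The function name and argument names are hypothetical; in the engine, RA_sun comes from the ephemeris rather than being passed in directly.

```python
def eot_minutes(mean_long_deg, ra_hours):
    """Equation of Time: mean minus apparent solar time, in minutes."""
    # 15 degrees of mean longitude correspond to one hour of time.
    raw = (mean_long_deg / 15.0 - ra_hours) * 60.0
    # Normalize to [-720, +720) minutes so the 0h/24h RA wraparound
    # never produces a spurious ~24-hour offset.
    return (raw + 720.0) % 1440.0 - 720.0
```

Near the wraparound, a mean longitude just past 0 degrees against an RA just under 24 hours still yields a small EoT rather than something near -1440 minutes.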
2. Horizon Coordinate Transform
Converting from declination and EoT to local altitude and azimuth is standard spherical trig. The hour angle tells you how far the sun is from the observer's meridian, measured westward:
Here t is the observation time in decimal hours, and EoT/4 converts minutes of time to degrees (the sun moves 15 degrees per hour, so one minute of time is a quarter degree). The longitude term corrects for the observer's offset from the timezone's central meridian.
From there, altitude and azimuth follow directly:
Azimuth is normalized to [0, 360) clockwise from North.
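The standard spherical-trig relations look like this in code (a sketch; the function name is hypothetical, and atan2 handles quadrant selection so azimuth comes out clockwise from North):

```python
import math

def alt_az(dec_deg, lat_deg, ha_deg):
    dec, lat, ha = (math.radians(v) for v in (dec_deg, lat_deg, ha_deg))
    # Altitude from the standard spherical-trig relation.
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(ha))
    alt = math.asin(sin_alt)
    # Azimuth clockwise from North; atan2 picks the correct quadrant.
    az = math.atan2(-math.cos(dec) * math.sin(ha),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(alt), math.degrees(az) % 360.0
```

For example, the sun at zero declination seen from 45 degrees north at zero hour angle sits due South at 45 degrees altitude, and a positive (afternoon) hour angle moves it into the western half of the sky.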
Getting the timezone right matters more than you'd expect. The engine tries three
approaches in order: an explicit UTC offset if the user provides one, then IANA
auto-detection via timezonefinder (which handles
DST correctly), and finally a round(longitude/15) fallback. The IANA path exists
because the naive formula fails in places like Hawaii (UTC-10, but
round(-157.8/15) gives -11) or China, which uses a single timezone across a 60-degree
span of longitude.
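The three-step fallback chain can be sketched as follows. The function name and exact wiring are assumptions; timezonefinder and zoneinfo are the libraries the text names, and the import guard here just keeps the sketch self-contained.

```python
from datetime import datetime

def resolve_utc_offset(lat, lon, when, explicit=None):
    # 1. An explicit user-provided UTC offset always wins.
    if explicit is not None:
        return explicit
    # 2. IANA zone lookup (DST-aware) via timezonefinder + zoneinfo.
    try:
        from timezonefinder import TimezoneFinder
        from zoneinfo import ZoneInfo
        name = TimezoneFinder().timezone_at(lat=lat, lng=lon)
        if name:
            return ZoneInfo(name).utcoffset(when).total_seconds() / 3600.0
    except ImportError:
        pass
    # 3. Naive fallback -- wrong in places like Hawaii:
    #    round(-157.8 / 15) == -11, but Hawaii is UTC-10.
    return round(lon / 15.0)
```

The ordering matters: the naive formula is only ever reached when both the user and the IANA database have nothing to say.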
3. Camera Model and Projection
We use a tangent-plane (pinhole) camera model to go from sky coordinates to pixel coordinates. The field of view comes from the focal length and physical sensor dimensions, both of which the app extracts from EXIF data when available:
FOV_h = 2 * arctan(sensor_width / (2 * focal_length))
FOV_v = 2 * arctan(sensor_height / (2 * focal_length))
Dividing the image's pixel dimensions by the FOV gives pixels-per-degree in each axis. The projection from sky separation to pixel offset then looks like this:
dx = d_az * cos(mean_alt) * px_per_deg_az
dy = -d_alt * px_per_deg_alt
The cos(mean_alt) factor is doing real work here. At high altitudes, lines of constant azimuth converge (the same way longitude lines converge near the poles), so one degree of azimuth covers fewer linear degrees of sky. Without this correction the overlay stretches horizontally whenever the sun is high. The negative sign on dy just accounts for image coordinates having y increase downward.
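The whole chain from sensor geometry to pixel offset fits in a few lines (a sketch; function and parameter names are illustrative, and sensor dimensions are in millimeters):

```python
import math

def fov_deg(sensor_mm, focal_mm):
    # Pinhole model: field of view from sensor size and focal length.
    return math.degrees(2.0 * math.atan(sensor_mm / (2.0 * focal_mm)))

def project_offset(d_az, d_alt, mean_alt, px_per_deg_az, px_per_deg_alt):
    # Constant-azimuth lines converge with altitude, so one degree of
    # azimuth spans cos(alt) degrees of sky on the tangent plane.
    dx = d_az * math.cos(math.radians(mean_alt)) * px_per_deg_az
    # Image y increases downward; sky altitude increases upward.
    dy = -d_alt * px_per_deg_alt
    return dx, dy
```

As a sanity check, a 50 mm lens on a 36 mm-wide full-frame sensor gives a horizontal FOV of about 39.6 degrees, matching the commonly quoted value.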
Everything is projected relative to the anchor point, which is the detected (or manually selected) sun position. Because we know both its pixel location and its sky coordinates, we can compute pixel offsets for every other point without ever needing to know where the camera was actually pointing in absolute terms.
4. Sun Detection
Finding the sun in a photograph turns out to be trickier than "find the brightest pixel." Lens flare, specular reflections, and JPEG compression artifacts all produce isolated bright spots that aren't the sun. The detection pipeline handles this with progressive thresholding:
- First, apply EXIF orientation tags so the image matches what the user actually sees (rotation, mirroring).
- Convert to grayscale by taking max(R, G, B) per pixel. Luminance-weighted averages would undercount the sun, which saturates all three channels.
- Starting at 99.9% of the image's peak brightness, threshold the image and look for connected blobs of at least 20 pixels. If nothing qualifies, lower the threshold to 99.5%, then 99.0%, and so on down to 96%. The minimum-size requirement filters out single-pixel glare artifacts that pass high thresholds but are too small to be the actual sun disc.
- Once a qualifying blob is found, scipy.ndimage.label() identifies connected components and the largest one is selected.
- The sun center is the brightness-weighted centroid of that blob, which gives sub-pixel accuracy.
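The steps above can be sketched as a single function. This is an illustration, not the app's actual code: the function name is hypothetical and the exact threshold ladder below 99% is an assumption.

```python
import numpy as np
from scipy import ndimage

def detect_sun(rgb):
    """rgb: (H, W, 3) uint8 array -> (x, y) sun center, or None."""
    # max(R, G, B) keeps a saturated sun disc at full brightness where
    # a luminance-weighted average would undercount it.
    gray = rgb.max(axis=2).astype(float)
    peak = gray.max()
    for frac in (0.999, 0.995, 0.99, 0.98, 0.97, 0.96):
        labels, n = ndimage.label(gray >= peak * frac)
        if n == 0:
            continue
        # Blob sizes per label; pick the largest.
        sizes = ndimage.sum(labels > 0, labels, index=range(1, n + 1))
        best = int(np.argmax(sizes)) + 1
        if sizes[best - 1] < 20:      # reject tiny glare artifacts
            continue
        ys, xs = np.nonzero(labels == best)
        w = gray[ys, xs]
        # Brightness-weighted centroid -> sub-pixel sun center.
        return (float((xs * w).sum() / w.sum()),
                float((ys * w).sum() / w.sum()))
    return None
```

On a synthetic frame with a single saturated 5x5 square, the centroid lands exactly at the square's center, which is the sub-pixel behavior the real detector relies on.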
When none of that works (overcast sky, sun behind a cloud, unusual exposure), the app falls back to the manual picker.
5. Overlay Rendering
The overlay is an SVG layer composited over the original photograph. All 365 analemma points get projected to pixel coordinates through the camera model, then rendered as connected line segments with dots at each sun position.
One subtlety: the analemma sometimes exits the frame and re-enters at a distant point. A naive single polyline would draw a diagonal across the image connecting those re-entry points. The renderer detects these gaps (any jump larger than 4x the median point spacing) and breaks the path into separate segments.
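The gap-splitting rule is compact enough to show in full (a sketch with a hypothetical function name, assuming projected points as (x, y) pairs):

```python
import numpy as np

def split_at_gaps(points, factor=4.0):
    """Break a polyline where consecutive points jump too far apart."""
    pts = np.asarray(points, dtype=float)          # (N, 2) pixel coords
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    # A gap is any step larger than `factor` times the median spacing.
    cuts = np.nonzero(steps > factor * np.median(steps))[0] + 1
    return np.split(pts, cuts)
```

Using the median rather than the mean keeps one huge off-frame jump from inflating the baseline spacing and masking the very gap being detected.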
A date-based color gradient runs from January through December, so you can visually trace the sun's seasonal progression along the curve.
6. Limitations
The tangent-plane projection works well for normal and telephoto lenses but starts to break down with ultra-wide-angle glass or fisheye lenses, where barrel distortion becomes significant. We don't currently model lens distortion profiles.
Atmospheric refraction isn't modeled either. Near the horizon, refraction lifts the apparent sun position by roughly half a degree, which can noticeably shift points at very low altitudes.
The overlay scale depends on getting the sensor dimensions right. If you're shooting with a crop-sensor body but enter full-frame sensor values, or if the image has been cropped after capture, the projection won't match. The app tries to pull these from EXIF, but not all cameras write sensor dimensions into metadata.
Sun detection works best with a clearly visible sun disc against sky. Overcast conditions, the sun partly behind clouds, or strong reflections off water/glass can all confuse the detector.
7. Technology Stack
- Computation: Astropy + JPL DE440, NumPy, SciPy
- Image Processing: Pillow, EXIF transpose, brightness-weighted centroid
- Backend: FastAPI, ThreadPoolExecutor, slowapi rate limiting
- Frontend: SvelteKit 5, Svelte 5 runes, TailwindCSS v4