Cross-Disciplinary Data Handoffs Between Geologists and Map Operators


The Problem

During a PANGAEA analog field campaign at Lanzarote, a field geologist identified a structurally interesting basalt facies transition inside a lava tube and verbally relayed its significance to a teleoperation console operator supervising a rover-mounted EchoQuilt capture. The operator captured the requested patch but missed the geologist's concurrent note about a vesicularity change that would have affected patch interpretation. Two sols later, a JPL-based mission planner reviewing the quilt flagged an anomaly in the same patch and spent half a day re-deriving the significance the geologist had already verbally communicated. By the time the loop closed, the campaign had already moved on to a different target.

This is the handoff gap that BASALT's mission elements paper explicitly ranked as one of the highest-risk components of analog operations. The paper documented daily science-and-operations handoffs as the failure-prone link where information decays between roles, particularly across time-zone splits and shift changes. ESA's PANGAEA astronaut training directly targets the geologist-operator-astronaut interface because the training community has recognized that verbal handoffs fail at the scales and delays of planetary missions.

The software precedents exist. NTRS's Distributed Operations for MER documented how the Science Activity Planner formalized distributed science-ops handoffs, giving scientists and operators a shared representation of planned activities. Frontiers' planetary analog field operations analysis surveyed multiple analog campaigns and found that coordination failure modes are well-characterized enough to be engineered against. The gap is that most analog campaigns still run handoffs through mixed channels (voice, chat, paper notebooks) that do not map cleanly onto the scientific products the campaign is producing.

The annotation problem compounds in multi-sol science work: handoffs accumulate sol over sol, and the growing gap between field time and review time erodes clarity. A multi-sol campaign generates more annotations per sol than any single operator can process, so structured-annotation discipline becomes the only way to keep the annotation backlog from becoming the campaign's bottleneck.

The decay rate of unstructured handoffs is well-documented. BASALT's analysis showed that verbal annotations made in the field had a half-life of roughly 36 hours before they began to be misremembered or lost, and even short-form chat messages decayed within 4-7 sols when nobody was actively curating them. The decay is not because operators forget specific facts; it is because context that was implicit at the time of the original communication becomes ambiguous as the campaign moves on, and that ambiguity compounds with every handoff. By the time a JPL-based mission planner reviews a quilt patch from sol 4 during a sol 12 supratactical session, the original geologist's verbal context is effectively lost, and the planner is reconstructing intent from the patch metadata alone.

PANGAEA training explicitly addresses this decay by drilling structured-annotation habits into astronaut-geologist teams during cave training, but the training does not transfer if the campaign infrastructure does not support structured capture.

The Solution: Geology-Annotation as a Quilt Layer

EchoQuilt treats geology annotation as a first-class layer on the quilt. Field scientists, teleoperation operators, and mission planners all read from and write to the same annotation layer, which is stitched against the quilt patches so each annotation is anchored to a specific geometry tile. When a field geologist at Lanzarote marks a facies transition, the annotation attaches to the quilt patches that span the transition, and any downstream viewer sees the annotation together with the geometry.

The layer supports annotation types aligned with the science lifecycle. Observations are raw descriptions anchored to patches. Interpretations are derived explanations that reference observations and cite external priors. Action items are specific requests for follow-up captures or reviews. Decisions are commits that close out an interpretation. Each type carries required metadata so partial handoffs (observation without interpretation, interpretation without decision) are visible in the layer rather than hidden in voice chat.
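The four-type lifecycle above can be sketched as a small schema check. This is a minimal illustration, not EchoQuilt's actual data model; the type names follow the text, while field names like `patch_ids` and `observation_refs` are hypothetical.

```python
from enum import Enum

class AnnotationType(Enum):
    OBSERVATION = "observation"        # raw description anchored to patches
    INTERPRETATION = "interpretation"  # derived explanation citing observations
    ACTION_ITEM = "action_item"        # request for a follow-up capture or review
    DECISION = "decision"              # commit that closes out an interpretation

# Required metadata per type (illustrative field names). A handoff missing
# any of these is "partial" and stays visible in the layer rather than
# hidden in voice chat.
REQUIRED_FIELDS = {
    AnnotationType.OBSERVATION: {"author", "patch_ids", "sol"},
    AnnotationType.INTERPRETATION: {"author", "patch_ids", "sol", "observation_refs"},
    AnnotationType.ACTION_ITEM: {"author", "patch_ids", "sol", "assignee"},
    AnnotationType.DECISION: {"author", "patch_ids", "sol", "interpretation_ref"},
}

def missing_fields(ann_type, metadata):
    """Return the required fields absent (or empty) in this annotation."""
    return REQUIRED_FIELDS[ann_type] - {k for k, v in metadata.items() if v}

# Example: an interpretation handed off without its observation references
meta = {"author": "geo-1", "patch_ids": ["p-204"], "sol": 4}
gap = missing_fields(AnnotationType.INTERPRETATION, meta)
# gap == {"observation_refs"}: the partial handoff is visible in the layer
```

The point of the check is that incompleteness is surfaced as data, not buried in a voice channel.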

[Figure: EchoQuilt handoff interface showing the geology-annotation layer shared between field scientists and JPL map operators]

Acceptability scoring matters too. BASALT's assessment of science ops concepts for human Mars EVA rated handoff approaches by how acceptable crews and operators found them in practice. EchoQuilt's annotation layer inherits that rating discipline: each annotation carries an acceptability score from its recipient, so if a JPL operator marks an incoming geology annotation as low-clarity, the originating field scientist sees that feedback during the next supratactical review rather than discovering it weeks later. This closes the learning loop that voice-channel handoffs leave open.
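The feedback loop might look like the following sketch, which assumes a hypothetical 1-5 acceptability scale and illustrative field names; low-clarity annotations are queued back to their authors for the next supratactical review.

```python
CLARITY_THRESHOLD = 3  # assumed 1-5 scale, 5 = fully clear

def queue_for_review(annotations):
    """Return (annotation_id, author) pairs whose recipient-side
    acceptability score fell below the clarity threshold."""
    return [
        (a["id"], a["author"])
        for a in annotations
        if a.get("acceptability") is not None
        and a["acceptability"] < CLARITY_THRESHOLD
    ]

anns = [
    {"id": "a1", "author": "geo-1", "acceptability": 2},     # low clarity
    {"id": "a2", "author": "geo-2", "acceptability": 5},     # clear
    {"id": "a3", "author": "geo-1", "acceptability": None},  # not yet rated
]
flagged = queue_for_review(anns)
# flagged == [("a1", "geo-1")]: geo-1 sees this at the next review window
```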

The layer is structured around cave-specific science questions. PMC's framework for planetary cave exploration laid out fundamental science and engineering questions that shape cave mission priorities, and EchoQuilt's annotation schemas map to those questions directly. A geologist annotating a tube segment can tag the annotation as relevant to "thermal regime characterization", "volatile preservation", or "structural stability", and downstream consumers can filter accordingly. This prevents the common failure where geology annotations accumulate faster than they can be consumed, because operators and planners filter to annotations matching their current task.
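Tag-based filtering of this kind is straightforward to sketch. The science-question tags below are the ones the text names; the annotation structure is illustrative.

```python
def filter_by_question(annotations, question):
    """Return only the annotations tagged as relevant to one science question."""
    return [a for a in annotations if question in a.get("tags", [])]

anns = [
    {"id": "a1", "tags": ["thermal regime characterization"]},
    {"id": "a2", "tags": ["volatile preservation", "structural stability"]},
    {"id": "a3", "tags": []},  # untagged annotations fall out of every filter
]

# An operator planning a structural survey filters to their current task
structural = filter_by_question(anns, "structural stability")
# structural contains only a2
```

The design choice worth noting: filtering happens on the consumer side, so the geologist's capture workflow is never blocked by downstream triage.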

Cross-disciplinary handoffs feed directly into agency federation once handoffs cross agency boundaries and data formats diverge. The same annotation schema that supports intra-team handoffs also supports cross-agency handoffs once the federation layer translates between agency-specific archive conventions, which means analog teams that adopt structured annotations from the start get cross-agency benefits without additional work later.

Advanced Tactics

Separate observation capture from interpretation capture. A field geologist working inside a Lanzarote tube should be able to drop raw observations quickly without being forced to interpret them on the spot; interpretation is a calmer-context task that benefits from later review of the full patch context. EchoQuilt's annotation UI supports observation-only mode for field use, with interpretation prompts queued for desk-review time.

Run weekly annotation audits during multi-sol campaigns. An audit pass reviews recent annotations for missing metadata, unresolved action items, and decisions that never closed. This catches the slow-decay failure mode where annotations accumulate faster than they close, the same mode the PANGAEA campaign described above was sliding into. The audit cadence fits naturally into BASALT-style supratactical review windows, so it adds process discipline without inventing a new meeting.
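An audit pass of this shape can be sketched in a few lines. The field names and the seven-sol staleness window are assumptions for illustration, not EchoQuilt defaults.

```python
def audit(annotations, current_sol, stale_after_sols=7):
    """One audit pass: flag missing metadata, action items that went stale
    without resolution, and decisions that closed nothing."""
    findings = []
    for a in annotations:
        if not a.get("author") or not a.get("patch_ids"):
            findings.append((a["id"], "missing metadata"))
        if a.get("type") == "action_item" and not a.get("resolved"):
            if current_sol - a.get("sol", current_sol) > stale_after_sols:
                findings.append((a["id"], "stale action item"))
        if a.get("type") == "decision" and not a.get("interpretation_ref"):
            findings.append((a["id"], "decision closed nothing"))
    return findings

anns = [
    {"id": "a1", "type": "observation", "author": "geo-1",
     "patch_ids": ["p1"], "sol": 2},
    {"id": "a2", "type": "action_item", "author": "op-1",
     "patch_ids": ["p2"], "sol": 1, "resolved": False},
    {"id": "a3", "type": "decision", "author": "lead",
     "patch_ids": ["p3"], "sol": 9},
]
findings = audit(anns, current_sol=12)
# a2 has been open for 11 sols; a3 never referenced an interpretation
```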

For multi-agency analog campaigns, agree on annotation taxonomies up front. ESA, JPL, and JAXA partner campaigns sometimes use different geological vocabularies, and reconciling them mid-campaign destroys handoff throughput. EchoQuilt ships with a cross-agency reference taxonomy aligned to the planetary cave science framework, but campaigns can customize with agency-specific extensions without forking the core schema. Agreed taxonomies at campaign start are the cheapest way to keep cross-agency handoffs interpretable for the life of the campaign.
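One way to keep agency extensions from forking the core schema is to merge them over a shared reference taxonomy while refusing redefinitions. This is an illustrative sketch; the term names and agency extension are hypothetical, not EchoQuilt's shipped taxonomy.

```python
REFERENCE_TAXONOMY = {
    "facies_transition": "Change in rock facies across a mapped boundary",
    "vesicularity_change": "Change in vesicle abundance or size",
}

def build_campaign_taxonomy(reference, extensions):
    """Merge agency-specific extensions over the reference taxonomy.
    Extensions may add terms but may not redefine reference terms."""
    clashes = reference.keys() & extensions.keys()
    if clashes:
        raise ValueError(f"extensions redefine core terms: {sorted(clashes)}")
    merged = dict(reference)
    merged.update(extensions)
    return merged

# Hypothetical agency extension agreed at campaign start
esa_ext = {"lava_tube_skylight": "Roof-collapse opening into a tube"}
campaign = build_campaign_taxonomy(REFERENCE_TAXONOMY, esa_ext)
# campaign now carries three terms; redefining a core term raises ValueError
```

Rejecting redefinitions at merge time is what keeps mid-campaign vocabulary drift from silently breaking cross-agency handoffs.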

A cross-domain analogue from terrestrial conservation reinforces the annotation problem. Our team coordination work shows how terrestrial multi-disciplinary teams face analogous annotation-handoff challenges when biologists, surveyors, and conservation officers all need to work from the same map without synchronous communication. The conservation community has developed structured-annotation patterns that EchoQuilt's annotation layer borrows for the planetary context.

Build annotation onboarding into the first sol of any new campaign. New crew members joining a multi-week campaign need to learn the annotation conventions in use, and the cheapest moment to teach them is during initial training rather than mid-campaign when they are also absorbing the science context. EchoQuilt's annotation interface ships with a guided tutorial mode that walks new operators through observation, interpretation, action, and decision annotations using sample quilt patches from prior campaigns. Crews that complete the tutorial during sol 0 produce annotations that pass acceptability review at roughly twice the rate of crews that learn the system as they go, which has a measurable downstream effect on campaign throughput.

Treat annotation density as a campaign health metric. A multi-sol campaign that produces fewer than 0.5 annotations per quilt patch is probably losing context that downstream consumers will need. A campaign producing more than 5 annotations per patch is probably accumulating noise that filters cannot fully suppress. EchoQuilt's annotation dashboard surfaces the per-sol annotation density alongside acceptability scores, and science leads use the metric to flag campaigns that are drifting toward either failure mode before the next supratactical review. The metric is especially useful for cross-campaign comparisons, where a science lead wants to know whether the current campaign's annotation discipline is consistent with prior campaigns at the same site or with parallel campaigns at other analog sites.

Ready to Formalize Your Campaign Handoffs?

Planetary analog researchers, JPL operators, and ESA mission planners running cross-disciplinary campaigns gain measurable throughput when geology annotations sit in a shared quilt layer rather than scattered across voice channels and notebooks. EchoQuilt's annotation layer and acceptability scoring are built for exactly that coordination problem. Each pilot ships with the cross-agency reference taxonomy aligned to the planetary cave science framework; the sol-zero guided tutorial mode that walks new operators through observation, interpretation, action, and decision annotations using sample patches from prior campaigns; the annotation density dashboard that flags campaigns drifting outside the 0.5-5 annotations-per-patch healthy range; and the observation-only field-capture mode optimized for in-tube use during PANGAEA-style deployments. Pilot teams shape the acceptability rating categories and the cross-agency taxonomy extensions that the 2027 reference release will adopt for NASA, ESA, and JAXA federation.

Priority goes to JPL operators running BASALT follow-on analog campaigns, ESA PANGAEA instructors coordinating astronaut geological field training cycles, NIAC PIs preparing multi-agency concept proposals in the 2026 cycle, JAXA-collaborated lunar analog campaign leads, and CHILL-ICE alumni planning Surtshellir return campaigns. Join the Waitlist for Planetary Analog Researchers to pilot the layer on your next analog campaign and help us refine the taxonomy against your field vocabularies.
