Building a Grant-Worthy Evidence Base With Flow Data
The Evidence Gap in Museum Grant Applications
A children's museum submits a renewal application for its NSF Advancing Informal STEM Learning grant. The original award covered the $180K Water Cycle puzzle installation. The renewal requires documentation of impact: how many students engaged with the station, for how long, and with what measurable outcome. The education team has post-visit surveys, docent observation notes, and a rough count from the front-desk clicker. None of these constitute a robust evidence base by NSF evaluator standards.
This is the evidence gap that museum exhibit designers running active grant programs face: the intervention (the exhibit) exists, the capital was spent, but the behavioral evidence that the exhibit is doing what the grant promised is anecdotal or thin. Post-visit surveys capture self-reported intent. Docent notes are inconsistent across staff. Clicker counts at the door don't distinguish between a student who engaged with the Water Cycle puzzle for four minutes and a student who walked past it without stopping.
Outcome Based Evaluation Basics (IMLS) is clear on what funders require: outcomes are measurable changes in knowledge, skills, attitudes, or behavior. Flow data translates directly into that framework. Station engagement rate is a behavioral measure. Dwell time at an interactive station is a behavioral measure. A reduction in bypass rate over time is a behavioral-change measure. All three are outcomes that IMLS evaluators recognize.
The NSF AISL program requires applicants to demonstrate both broadening-participation impact and strategic impact for the informal STEM learning field. Station engagement data from a PressurePath deployment addresses both: it shows which student populations (by grade, school type, and group size) are reaching and engaging with funded exhibits, and it provides a methodology that other institutions can replicate.
Structuring Flow Data as Grant Evidence
Before describing the three data streams, it's worth addressing the question of timing: when in the grant cycle does flow data become most valuable? For initial applications, pre-deployment flow data establishes the need—your baseline bypass rates at the stations the grant will fund. For mid-grant progress reports, monthly or semester session data documents engagement trajectory. For renewals, full-cycle longitudinal data (the 200-field-trip-day data set described elsewhere in this series) provides the before/after evidence structure that renewal reviewers require. The data serves different functions at different grant stages, but the collection infrastructure—once in place—generates evidence continuously without additional cost per report.
PressurePath's pressurized-water-in-pipes model generates data that's inherently structured for grant documentation. When a 30-kid school wave enters your floor as a high-pressure burst and moves through the pipe network of corridors, each station either captures flow or allows it to pass. Capture events are engagement data. Pass-through events are bypass data. The pipe network's pressure differential—which branches receive wave pressure and which don't—is the structural context that explains why specific stations show specific engagement rates. That causal structure is exactly what grant evaluators need to see: not just "engagement went up" but "this is why it went up and this is why it will stay up."
PressurePath produces three data streams that map to standard grant evaluation frameworks. Engagement rate data—the percentage of school-group visitors who stop and interact with a specific station—maps to the IMLS outcome indicator for behavioral engagement. Dwell time distribution—the duration histogram of engagement events at a station during school-group visits—maps to the depth-of-engagement measure that NSF AISL evaluators use to distinguish genuine learning interactions from cursory contacts. Wave-bypass frequency—the rate at which school groups pass the station without engaging—is the inverse of capture rate and maps to the accessibility and reach indicators in both IMLS and NSF frameworks.
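As a concrete sketch of how those three streams might be computed from raw session data: the record shape, field names, and the 20-second engagement threshold below are illustrative assumptions, not PressurePath's actual schema.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical record shape: one row per visitor per station pass.
@dataclass
class StationPass:
    session_id: str       # one school-group visit
    station: str          # e.g. "water_cycle_puzzle"
    dwell_seconds: float  # 0.0 for a straight walk-by

# Assumed cutoff separating a genuine stop from a walk-by.
ENGAGE_THRESHOLD_S = 20.0

def station_metrics(passes: list[StationPass], station: str) -> dict:
    at_station = [p for p in passes if p.station == station]
    engaged = [p for p in at_station if p.dwell_seconds >= ENGAGE_THRESHOLD_S]
    n = len(at_station)
    rate = len(engaged) / n if n else 0.0
    return {
        # IMLS behavioral-engagement indicator
        "engagement_rate": rate,
        # depth-of-engagement measure (NSF AISL): durations of genuine stops
        "median_dwell_s": median(p.dwell_seconds for p in engaged) if engaged else 0.0,
        # accessibility/reach indicator; a per-visitor simplification of
        # the group-level wave-bypass frequency
        "bypass_rate": 1.0 - rate,
    }
```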
The Framework for Evaluating ISE Projects (informalscience) establishes NSF's six-category ISE impact framework: awareness, knowledge, skills, engagement, attitudes, and behavior. Flow-based station data directly addresses the engagement and behavior categories, and it can be correlated with survey data to support the knowledge and attitude categories. Automatic, session-level engagement data for every school-group visit eliminates the inconsistency problem of docent observation notes.
Evaluation methodology research in PMC documents sensor-based and visitor-trajectory-linked data collection methods that grant reviewers recognize as reference-standard. PressurePath's data collection methodology—sensor-based, session-level, visitor-trajectory-linked—fits within the evaluator-accepted methods category, which means you're not asking a grant reviewer to accept a novel data collection approach. You're presenting standard methodology with automated collection replacing manual observation.
The station-level evidence structure for a grant submission looks like this: baseline engagement rate at the Water Cycle puzzle before the pacing intervention (from historical sensor data), engagement rate at the same station after the intervention (from current session data), and the delta documented across a specific number of school-group visits. For a renewal application, that delta is your impact evidence. For an initial application, the baseline establishes the need.
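A minimal sketch of that structure, assuming one engagement rate per school-group visit from historical and current sensor sessions (the helper and its output fields are hypothetical):

```python
def engagement_delta(baseline_rates: list[float],
                     post_rates: list[float]) -> dict:
    """Before/after evidence structure for a single station.

    Each input holds one engagement rate per school-group visit,
    e.g. 0.22 means 22% of that group engaged with the station.
    """
    baseline = sum(baseline_rates) / len(baseline_rates)
    post = sum(post_rates) / len(post_rates)
    return {
        "baseline_rate": baseline,
        "post_rate": post,
        "delta_pp": (post - baseline) * 100,  # percentage points
        # reviewers look for the sample sizes behind the delta
        "n_baseline_visits": len(baseline_rates),
        "n_post_visits": len(post_rates),
    }
```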
A key documentation discipline: record not just the overall engagement rate but the engagement rate disaggregated by group type. IMLS and NSF reviewers scrutinize whether funded exhibits are reaching underserved populations—specifically, low-income students and students from schools with limited access to informal learning environments. If your flow data shows that third-grade groups from Title I schools bypass the Water Cycle puzzle at 81% versus 47% for groups from more affluent schools, that disparity is both a problem statement for a new grant application and the specific target for an NSF broadening-participation argument.
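Disaggregation is a straightforward aggregation step. The sketch below assumes each visit record carries a group-type label and per-group bypass counts; all three field names are illustrative.

```python
from collections import defaultdict

def bypass_by_group_type(visits: list[dict]) -> dict[str, float]:
    """Bypass rate disaggregated by group type."""
    bypassed = defaultdict(int)  # students who passed without engaging
    total = defaultdict(int)     # all students in groups of that type
    for v in visits:
        bypassed[v["group_type"]] += v["bypassed"]
        total[v["group_type"]] += v["group_size"]
    return {g: bypassed[g] / total[g] for g in total}

# A result like {'title_i': 0.81, 'non_title_i': 0.47} states the
# disparity described above in the disaggregated form that
# broadening-participation reviewers expect.
```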
The IMLS-funded Children's Museum Research Network built a cross-institutional evidence base documenting children's museum learning value across multiple institutions. PressurePath data structured according to that network's methodology positions your museum as a contributor to field-wide evidence rather than a single-site anomaly, a distinction that National Leadership Grant applications specifically reward.
ACM Trends: Data-Driven Outcomes (Knology) reports that 70% of caregivers observed their children learning during museum visits, but observational data from caregivers isn't the same as sensor-based behavioral data. ACM Trends shows how observational approaches are being formalized; PressurePath's flow data is the next step in that formalization, producing consistent measures across every visit rather than sampling from a subset of sessions.
The 10-room case study from escape room franchise operations provides a model for systematically structuring operational flow data to demonstrate intervention ROI. The same documentation discipline that saves staff hours in that context produces the before/after evidence structure that grant programs require in the children's museum context.

From Operational Data to Grant Narrative
The evidence base is most powerful when it connects quantitative flow data to the grant's stated theory of change. For an NSF AISL grant centered on improving STEM engagement for Title I school groups, the grant narrative structure is: (1) baseline flow data shows that 78% of Title I school-group students were bypassing the Earthquake Simulator during morning school-wave arrivals; (2) PressurePath identified the structural bypass route and the pacing interventions required to address it; (3) post-intervention flow data shows bypass dropped to 31% for the same demographic over the following semester; (4) dwell time data at the station confirms engagement depth improved from a median of 47 seconds to 3.8 minutes.
That narrative is strengthened when it includes the intervention cost alongside the outcome: $4,200 in staff time and partition repositioning produced a 47-percentage-point bypass reduction for the specific population the grant was designed to serve. Grant reviewers understand capital efficiency arguments—a demonstrably cost-effective intervention has higher replication potential than an expensive one, which directly serves the "advancing the field" criterion that NSF AISL and IMLS NLG-M both require.
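The arithmetic behind that capital-efficiency claim is simple enough to show directly. This sketch plugs in the figures from the narrative above; the helper is hypothetical, not a PressurePath API.

```python
def intervention_summary(bypass_before: float, bypass_after: float,
                         dwell_before_s: float, dwell_after_s: float,
                         cost_usd: float) -> dict:
    """Cost-effectiveness figures from before/after flow data."""
    reduction_pp = (bypass_before - bypass_after) * 100
    return {
        "bypass_reduction_pp": reduction_pp,
        "dwell_improvement_x": dwell_after_s / dwell_before_s,
        "cost_per_pp_usd": cost_usd / reduction_pp,
    }

# Narrative figures: 78% -> 31% bypass, 47 s -> 3.8 min dwell, $4,200 cost.
print(intervention_summary(0.78, 0.31, 47, 3.8 * 60, 4200))
# roughly: bypass_reduction_pp 47.0, dwell_improvement_x 4.85,
# cost_per_pp_usd 89.36 (about $89 per percentage point recovered)
```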
Research on scalable and replicable informal learning models (ACM Digital Library) demonstrates that documented, reproducible deployment protocols are what National Leadership Grants specifically reward—and a documented PressurePath deployment protocol is a replicable model. When your grant application can describe the data collection methodology, the intervention logic, and the outcome measures in terms that another institution could implement, you're presenting exactly what NLG-M reviewers are looking for.
The connection to retrofit decisions is direct: grant-funded evidence of bypass at specific stations also creates the documented justification for capital retrofit requests. If your flow data shows a structurally bypassed $180K station, that data supports both a grant renewal case (we need continued funding to address a demonstrated engagement gap) and a board capital request (we have behavioral data showing that a specific intervention will recover the intended impact of an existing capital investment).
The 200 field trip days data set is the longitudinal version of this evidence structure: once you have flow data across a full school visit season, you have not just point-in-time bypass rates but trend data showing how bypass patterns evolve across different group types, seasons, and exhibit configurations. That longitudinal evidence base is what transforms a single grant application into a multi-year funded research program.
Turn Your Ops Data Into Your Next Grant Application
There's one more evidence structure that PressurePath's longitudinal data supports: comparative benchmarking against peer institutions. IMLS National Leadership Grants and NSF AISL program officers frequently ask applicants whether their outcomes are above or below norms for comparable informal learning settings. If your flow data shows that your school-group engagement rate at the Water Cycle puzzle is 41%—and comparable children's science museums running sensor deployments report a median engagement rate of 28% for similar stations—that comparison is concrete evidence of above-average performance. Conversely, if your data shows below-median engagement, it's a documented gap that justifies the proposed intervention.
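When peer rates become available through such a network, positioning your station against the distribution is a one-function computation. A sketch under that assumption follows; the peer data set is hypothetical, and only the 41% and 28% figures come from the example above.

```python
def benchmark_position(our_rate: float, peer_rates: list[float]) -> str:
    """Place one station's engagement rate within a peer distribution."""
    peers = sorted(peer_rates)
    below = sum(r < our_rate for r in peers)
    pct = 100 * below / len(peers)
    return (f"{our_rate:.0%} engagement sits at the {pct:.0f}th percentile "
            f"of {len(peers)} peer institutions")

# e.g. benchmark_position(0.41, [0.22, 0.25, 0.28, 0.31, 0.44])
# -> '41% engagement sits at the 80th percentile of 5 peer institutions'
```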
The cross-institutional comparison is only possible when multiple institutions are collecting flow data using compatible methodology—which is why the shared data collection framework from cross-institutional museum research (ScienceDirect) is directly relevant. PressurePath's data structure was built for compatibility with cross-institutional comparison, positioning early adopters to participate in field-wide evidence networks as those networks develop.
If PressurePath is running on your floor, every school-group visit is generating grant-worthy evidence. The question is whether that data is being structured to speak the language of IMLS and NSF reviewers. Join the waitlist to see how children's museum exhibit designers are using PressurePath flow data to build evidence bases that support both operations improvement and grant documentation simultaneously.