Post-Event Surveys and Flow Data: Measuring What Actually Worked
Two Types of Measurement
Post-event measurement serves two purposes:
Client satisfaction (did they like it?). Measured through surveys, direct feedback, and repeat business. This tells you whether to celebrate.
Flow performance (did it work?). Measured through timing data, observation logs, and operational metrics. This tells you what to improve.
Most event companies collect satisfaction data and ignore flow data. This means they know if events are well-received but don't know why — or how to make good events great.
Collecting Flow Data
Flow data should be collected during the event by staff, not reconstructed afterward from memory.
Data Point 1: Actual Timing
What to record:
- Actual start time of each rotation (vs. planned)
- Actual end time of each rotation (vs. planned)
- Actual transition time between each rotation
- Total event start and end time
How: The event coordinator uses a simple spreadsheet or timing app. At each rotation signal, they record the timestamp. After the event, they compare actual times to the planned schedule.
What it reveals:
- Which rotations ran over (and by how much)
- Which transitions took longer than planned
- Whether the event ended on time
- Cumulative schedule drift (the gap between planned and actual grows larger each rotation if individual overruns aren't recovered)
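The planned-vs-actual comparison is easy to automate. A minimal sketch, assuming a hypothetical timing log of planned and actual start times; because unrecovered overruns accumulate, the lateness of each start is also the cumulative drift at that point:

```python
from datetime import datetime

def minutes_late(planned, actual, fmt="%H:%M"):
    """Minutes between planned and actual clock times (positive = running late)."""
    p = datetime.strptime(planned, fmt)
    a = datetime.strptime(actual, fmt)
    return (a - p).total_seconds() / 60

# Hypothetical timing log: (rotation, planned start, actual start)
log = [
    ("Rotation 1", "09:00", "09:00"),
    ("Rotation 2", "09:25", "09:28"),
    ("Rotation 3", "09:50", "09:55"),
]

# Each entry is the cumulative schedule drift at that rotation.
drift = [(name, minutes_late(planned, actual)) for name, planned, actual in log]
for name, late in drift:
    print(f"{name}: {late:+.0f} min vs plan")
```

If the drift column grows rotation after rotation, individual overruns are not being recovered in transitions.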
Data Point 2: Activity Completion Rates
What to record for each team at each station:
- Did they complete the core task? (Yes/No)
- Did they reach bonus challenges? (Yes/No)
- How much time remained when they finished? (Minutes)
How: Station facilitators record this on a simple form after each team's rotation.
What it reveals:
- Activities where most teams don't finish (time box is too short or activity is too hard)
- Activities where most teams finish early (time box is too long or activity is too easy)
- Team performance variation (are certain teams consistently fast or slow?)
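Tallying the facilitator forms takes only a few lines. The records below are hypothetical; the calculation is simply completions divided by attempts per station:

```python
from collections import defaultdict

# Hypothetical facilitator records: (team, station, completed, reached_bonus, minutes_left)
records = [
    ("Team A", "Module A", True,  True,  4),
    ("Team B", "Module A", True,  False, 1),
    ("Team A", "Module B", False, False, 0),
    ("Team B", "Module B", True,  False, 0),
]

totals = defaultdict(lambda: [0, 0])  # station -> [completions, attempts]
for _, station, completed, _, _ in records:
    totals[station][1] += 1
    totals[station][0] += completed  # True counts as 1

rates = {station: done / attempts for station, (done, attempts) in totals.items()}
```

A station with a low rate points to a time box that is too short or a task that is too hard; consistently high minutes-left values point to the opposite.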
Data Point 3: Energy Observations
What to record: At the start of each rotation, the facilitator rates the arriving team's energy on a 1-5 scale:
- Depleted (yawning, disengaged, phone-checking)
- Low (quiet, slow to start, minimal interaction)
- Moderate (engaged but not energetic, steady work pace)
- High (active discussion, enthusiastic, competitive)
- Peak (cheering, laughing, physical energy, fully engaged)
How: Facilitators note the rating on their form. Takes 2 seconds per team.
What it reveals:
- Energy patterns across the event (do teams arrive at Station 4 consistently low?)
- The post-lunch dip depth (energy ratings before and after lunch)
- Whether the activity sequence maintains or drains energy
- Which station positions in the rotation receive the lowest-energy teams
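Averaging the 1-5 ratings by rotation position surfaces these patterns. A minimal sketch with hypothetical ratings:

```python
from statistics import mean

# Hypothetical energy scores (1-5) by rotation position, across teams
ratings = {
    1: [4, 5, 4],
    2: [4, 4, 3],
    3: [3, 3, 3],
    4: [2, 3, 2],  # a consistent dip here flags the Position 4 slot
}

avg_by_position = {pos: round(mean(scores), 1) for pos, scores in ratings.items()}
dip = min(avg_by_position, key=avg_by_position.get)  # lowest-energy position
```

The position with the lowest average is the candidate for an activity swap or a preceding energizer.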
Data Point 4: Transition Observations
What to record: During each transition, a flow observer notes:
- Were teams moving promptly or lingering?
- Were there bottlenecks at doorways, stairs, or corridors?
- Did any team go to the wrong station?
- Was the next station ready when the team arrived?
How: One dedicated flow observer (can be the event coordinator) watches transitions and makes quick notes.
What it reveals:
- Transition problems that are fixable (wrong turn signage, slow elevator, unready station)
- Whether the transition time allocation is sufficient
- Whether teams know where to go (wayfinding effectiveness)
Designing the Participant Survey
Send the survey within 24 hours of the event. Response rates drop dramatically after 48 hours.
Essential flow-relevant questions:
- "How would you rate the event pacing?" (1-5 scale)
  - Too slow / About right / Too fast
  - This directly measures participant perception of flow
- "Was there too much, too little, or the right amount of time between activities?"
  - Too much waiting / Just right / Too rushed
  - Measures perceived dead time
- "Which activity was your favorite?" (Select from list)
  - Identifies high-performing modules
- "Which activity was your least favorite?" (Select from list)
  - Identifies modules that need redesign or replacement
- "Was the event the right length?"
  - Too short / Just right / Too long
  - Measures overall format duration fit
- "How engaged did you feel throughout the event?" (1-5 scale)
  - Measures sustained engagement (a flow quality indicator)
- "Any other comments?" (Open text)
  - Captures specific issues: "The walk between Station 2 and 5 was too long" or "We finished the puzzle early and had nothing to do"
Keep the survey under 10 questions. Longer surveys get lower response rates and less thoughtful answers.
Analyzing the Data
Cross-reference timing data with satisfaction data:
- If the activity that ran 5 minutes over is also the highest-rated activity, the time box may be too short (teams want more time because the activity is great)
- If the activity that ran over is the lowest-rated, the activity is both too long and not engaging (redesign needed)
Cross-reference energy data with activity sequence:
- Plot energy ratings across rotation positions. If energy drops consistently at Position 4, the activity at Position 4 may need to change — or a physical energizer should precede it
- If energy drops after lunch regardless of the activity, the post-lunch slot needs a consistently high-energy module
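The cross-referencing rules above can be encoded as a simple triage. The activity names, overrun threshold (2 minutes), and rating cutoff (4.0) below are illustrative assumptions, not fixed values:

```python
# Hypothetical per-activity summary joining timing data and survey ratings
activities = [
    {"name": "Module A", "overrun_min": 5, "rating": 4.6},
    {"name": "Module B", "overrun_min": 5, "rating": 2.9},
    {"name": "Module C", "overrun_min": 0, "rating": 4.1},
]

for a in activities:
    if a["overrun_min"] > 2 and a["rating"] >= 4.0:
        a["action"] = "extend time box"  # great activity, teams want more time
    elif a["overrun_min"] > 2:
        a["action"] = "redesign"         # too long and not engaging
    else:
        a["action"] = "keep as is"
```

The same join works for energy data: attach the average arrival-energy rating to each rotation position and flag positions below a chosen threshold.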
Identify flow patterns across multiple events:
After 5+ events with the same format:
- Are the same transitions consistently problematic? (Venue-independent flow issue)
- Are energy dips at the same rotation position? (Sequence issue)
- Are the same activities consistently rated lowest? (Module quality issue)
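Checking whether a dip recurs at the same rotation position is a transpose-and-average across events. A sketch with hypothetical per-event position averages:

```python
from statistics import mean

# Hypothetical average energy by rotation position, one row per event
events = [
    [4.2, 3.9, 3.4, 3.0, 3.6],
    [4.1, 3.8, 3.6, 3.1, 3.5],
    [4.3, 3.7, 3.5, 2.9, 3.7],
]

positions = list(zip(*events))  # group the same position across events
avg = [round(mean(p), 2) for p in positions]
consistent_dip = avg.index(min(avg)) + 1  # 1-based rotation position
```

A dip that persists across venues is a sequence issue, not a venue issue, and points at the activity in that slot.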
The Improvement Cycle
- Collect data at every event (timing, completion, energy, transitions)
- Survey participants within 24 hours
- Analyze cross-referenced data within 1 week
- Identify top 3 improvement opportunities
- Implement changes in the next event
- Measure whether the changes improved the identified metrics
- Repeat
What Good Flow Data Looks Like
After 10 events with the same format, you should know:
- Average total event duration: 178 minutes (planned: 180)
- Average transition time: 3.2 minutes (planned: 3.0)
- Average schedule drift by closing: +4 minutes
- Average energy rating at each rotation position: 4.2, 3.8, 3.5, 3.1, 3.6, 3.9
- Activity completion rates: Module A: 95%, Module B: 82%, Module C: 71%
- Participant pacing rating: 4.3/5.0
This data tells you exactly where to invest improvement effort. Module C's 71% completion rate needs investigation. The energy dip at Position 4 needs a sequence adjustment. The +4 minute drift is acceptable but could be reduced.
Client Reporting
Include flow metrics in your post-event report to clients:
What to share:
- Event timeline (planned vs. actual) — shows professionalism
- Activity completion rates — shows engagement
- Participant satisfaction scores — shows value delivery
- Improvement actions for next time — shows continuous improvement
What not to share:
- Individual facilitator performance ratings
- Detailed operational problems that were resolved during the event
- Internal flow failure analysis (save this for your internal improvement process)
Simulating Improvements From Data
When your data identifies a flow problem (transition too long, activity too short, energy dip at Position 4), simulation lets you test proposed solutions before implementing them at the next client event. Change the time box, swap the activity sequence, or modify the transition plan in simulation and verify the improvement.
Want to turn event data into measurable improvements? Join the FlowSim waitlist and simulate your proposed changes against real performance data.