Use case: Research teams
For research teams coding interviews together
If you've ever tried to code 30 interviews with two researchers using shared NVivo licences and a Dropbox folder, you know why this exists. Real-time collaboration, audit trails, and code-agreement reports built in.
The problem this solves
Multi-coder qualitative research has always been a coordination problem. Whose codes are canonical? When two researchers disagree on a passage, how do you resolve it? When the methods reviewer asks "show me the audit trail", do you actually have one?
QualCanvas was built specifically for this: every coding decision is timestamped, attributed to a researcher, and recoverable. Inter-rater reliability is automatic, not a manual calculation.
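For readers curious what "automatic inter-rater reliability" involves under the hood, a common measure is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is illustrative only (the function name, example coders, and code labels are hypothetical, not QualCanvas's actual implementation):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same passages.

    coder_a, coder_b: equal-length lists of code labels, one per passage.
    Returns a value in [-1, 1]; 1 = perfect agreement, 0 = chance-level.
    """
    assert len(coder_a) == len(coder_b), "coders must label the same passages"
    n = len(coder_a)
    # Observed agreement: fraction of passages where both coders chose the same code.
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's label frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    pe = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical example: two coders, five passages, two codes.
alice = ["trust", "trust", "cost", "cost", "trust"]
bob   = ["trust", "cost",  "cost", "cost", "trust"]
print(round(cohens_kappa(alice, bob), 3))  # ≈ 0.615
```

Values above roughly 0.6 are conventionally read as substantial agreement, though thresholds vary by field and coding scheme.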
How research teams use it
- Open coding phase: Two researchers code the same transcript independently. QualCanvas then shows them each other's codes side by side, with a one-click "merge or keep both" decision per passage.
- Theme refinement: The team gets together (in person or remotely) and works on the canvas live. Themes are re-arranged spatially as the analytical structure emerges.
- Member checking: Read-only canvases can be shared with participants. They review their own quotes and themes, and their comments flow back into the canvas.
- Final reporting: Theme maps export directly into the manuscript. Quote tables export to .docx for the appendix. The audit trail is one click away for the methods section.
Compliance
- IRB-compliant audit trail: Every decision is logged and exportable to PDF for ethics review.
- GDPR / data residency: EU-hosted, with a Data Processing Agreement available for institutional contracts.
- Participant data controls: Pseudonymisation built in, with auto-redaction of names from transcripts on import.
More from JMS Dev Lab
Repair Desk — customer repair tracking
PitchSide — grassroots football coaching
StaffHub — team scheduling