Overview
If you’re planning field data collection, the fastest way to improve both data quality and turnaround is to adopt a computer‑assisted interview workflow. A computer‑assisted interview (CAI) uses software on a phone, tablet, computer, or telephony system to guide interviews. It enforces skip logic and data validation and syncs responses securely to a server. CAI matters now because modern mobile devices support offline collection, GPS verification, audio notes, and strong encryption, and regulations like GDPR demand better consent and data governance.
This guide clarifies the CAI family (CAPI, CASI/ACASI, CATI, CAWI), weighs benefits and drawbacks, and shows how to choose the right mode for your study. You’ll also get a pragmatic CAPI runbook, a vendor‑neutral checklist for selecting survey software, and budgeting and compliance essentials. The goal is to help you move from planning to deployment with confidence.
What is a computer‑assisted interview?
A computer‑assisted interview is any survey or interview in which a computer or mobile device administers questions and captures responses using programmed routing and validation. The CAI family includes interviewer‑administered methods such as computer‑assisted personal interviewing (CAPI) and computer‑assisted telephone interviewing (CATI). It also includes self‑administered methods such as computer‑assisted web interviewing (CAWI) and computer‑assisted self‑interviewing (CASI), including audio (ACASI) and video (Video‑CASI) variants. Compared with paper‑assisted personal interviewing (PAPI), CAI reduces manual errors, enables complex skip logic, and supports faster data delivery. See the American Association for Public Opinion Research (AAPOR) for definitions and standards (https://www.aapor.org/).
In practice, a CAPI household visit uses a tablet to route only eligible questions. A CATI study dials respondents and records answers in a call‑center system. A CAWI link lets participants complete the survey online on their own. ACASI plays questions through headphones so respondents tap or speak answers privately. Be mindful that different modes can produce different answers—so‑called “mode effects”—which require planning and mitigation.
CAI methods and when to use each
Choosing between CAI modes depends on sensitivity, complexity, literacy, connectivity, timeline, and budget. The following subsections define each method and show where it fits best. Map mode to mission rather than defaulting to habit or tooling constraints.
Computer-assisted personal interviewing (CAPI)
CAPI is an interviewer‑administered, face‑to‑face method. Questions are scripted into a mobile app that handles routing, validations, media capture, and GPS. It excels when you need complex instruments, visual aids, identity verification, or reach into low‑connectivity areas using offline sync. Large public surveys routinely use in‑person computerized instruments. For example, the U.S. Census Bureau’s American Community Survey conducts personal interviews with computerized tools as part of its data collection operations (https://www.census.gov/programs-surveys/acs/methodology/data-collection.html).
Device considerations include ruggedness, battery life, and secure storage. A mobile device management (MDM) policy should lock down installations and enforce encryption. Well‑designed CAPI reduces callbacks via on‑device checks and speeds delivery with near‑real‑time dashboards once synchronized.
Computer-assisted self-interviewing (CASI, incl. Audio- and Video-CASI)
CASI lets respondents self‑complete on a device. This increases privacy and reduces social desirability bias on sensitive topics. Audio computer‑assisted self‑interviewing (ACASI) adds recorded audio and headphone use so respondents can hear questions and answer independently. It is ideal for low‑literacy contexts or when privacy is paramount. Evidence shows ACASI yields higher reporting of sensitive behaviors than interviewer‑administered methods, improving completeness in areas like sexual health and substance use (https://pubmed.ncbi.nlm.nih.gov/9703491/).
Video‑CASI can add sign language or visual prompts for accessibility. Plan for device preparation, quiet spaces, and stronger consent scripts. Respondents control the pace and disclosure, so design accordingly.
Computer-assisted telephone interviewing (CATI)
CATI uses trained interviewers over the phone, guided by a scripted instrument on screen. It’s fast to deploy, supports complex routing, and reaches dispersed geographies without travel. This is helpful for business‑to‑business (B2B) or rare‑population screening. Trade‑offs include limited visual stimuli, potential coverage bias due to phone ownership or call screening, and shorter attention spans. Keep instruments tight and schedule callbacks thoughtfully.
Computer-assisted web interviewing (CAWI)
CAWI delivers self‑administered online surveys via a link. It is cost‑effective at scale, especially for panels or customer lists. It supports multimedia and complex logic but is sensitive to device differences, screen sizes, and internet coverage. Extensive testing and responsive design are essential. When mixing CAWI with CAPI, monitor for mode effects and consider harmonization steps. Use identical question wording, consistent visual design, and post‑collection weighting.
Paper-assisted personal interviewing (PAPI) and hybrid workflows
PAPI is the traditional interviewer‑administered paper questionnaire, sometimes scanned or keyed later. It can be a fallback where devices are unavailable or for very short intercepts. It adds data entry costs and error risk.
Many organizations run hybrid transitions: paper for a pilot or in limited regions while ramping up CAPI, then digitizing the remaining paper with strict double‑entry and reconciliation before merging datasets.
Benefits and drawbacks you should weigh
Selecting a mode is a trade‑off across data quality, privacy and accessibility, logistics, and budget. Modern CAI offers stronger validation, faster timelines, and better governance. You must still plan for device security, training, and connectivity realities. The subsections below unpack these factors so you can balance rigor with practicality.
Data quality, validation, and mode effects
CAI improves completeness through required fields, range checks, and real‑time validations. These checks prevent impossible or contradictory answers. Skip logic and dynamic routing shorten interviews and reduce respondent fatigue by showing only pertinent questions. Paradata (timestamps, GPS, durations) supports audit and supervision.
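To make these checks concrete, here is a minimal sketch of required‑field, range, and consistency validations in Python; the field names and limits are illustrative, not tied to any particular platform.

```python
# Minimal sketch of CAI-style validations: a required field ("hard check"),
# a range check, and a cross-field consistency check. Field names and
# limits are illustrative.

def validate_response(r: dict) -> list[str]:
    errors = []
    # Required field: refuse to save without it.
    if not r.get("household_size"):
        errors.append("household_size is required")
    # Range check: flag impossible values.
    age = r.get("respondent_age")
    if age is not None and not (10 <= age <= 110):
        errors.append(f"respondent_age {age} outside plausible range 10-110")
    # Consistency check: children cannot outnumber household members.
    if r.get("num_children", 0) > r.get("household_size", 0):
        errors.append("num_children exceeds household_size")
    return errors

print(validate_response({"household_size": 3, "respondent_age": 7, "num_children": 5}))
```

On paper, errors like these surface weeks later during data entry; here they block the interviewer before the case is ever saved.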
However, responses can differ by mode. For example, self‑administered online surveys sometimes yield different levels of reported sensitive behaviors versus interviewer modes. This is a documented “mode effects” phenomenon (https://www.pewresearch.org/methods/2015/05/13/mode-effects-in-public-opinion-surveys/). Mitigate by standardizing wording, randomizing within sections consistently across modes, and using mixed‑mode weighting or calibration where appropriate.
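For the weighting step, a minimal post‑stratification sketch shows the core idea: reweight each mode’s sample so its demographic mix matches known population shares. The cells and shares below are hypothetical.

```python
import pandas as pd

# Post-stratification sketch: within each mode, weight = population share
# of a cell divided by that cell's share of the mode's sample.
sample = pd.DataFrame({
    "mode":      ["CAWI", "CAWI", "CAPI", "CAPI"],
    "age_group": ["18-39", "40+", "18-39", "40+"],
    "n":         [300, 100, 150, 250],
})
population_share = {"18-39": 0.45, "40+": 0.55}  # hypothetical benchmarks

sample["sample_share"] = sample.groupby("mode")["n"].transform(lambda s: s / s.sum())
sample["weight"] = sample["age_group"].map(population_share) / sample["sample_share"]
print(sample)
```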
Respondent privacy, sensitivity, and accessibility
For sensitive topics, self‑administered options like CASI or ACASI often outperform in honesty. Removing the interviewer from the disclosure moment reduces social desirability bias. ACASI, which reads questions via headphones, also reduces literacy barriers. It has been shown to increase reporting of sensitive behaviors relative to interviewer‑administered surveys (https://pubmed.ncbi.nlm.nih.gov/9703491/).
For disability support, consider video prompts, larger fonts, screen readers, and sign‑language videos. Schedule extra time so accessibility aids don’t rush respondents.
Cost, logistics, and timeline trade-offs
CAI reduces downstream costs by eliminating separate data entry. It cuts back‑and‑forth callbacks through on‑device checks. Fieldwork in low‑connectivity settings works best with an offline‑first CAPI setup. Queue cases locally and sync securely when a signal is available.
Budget for devices, software licenses, training, supervision, and data plans. Recoup value through faster delivery, fewer errors, and the ability to monitor progress and intervene early.
Selection framework: how to choose the right CAI mode for your study
A sound choice starts with your research problem, not your tool. Map sensitivity, instrument complexity, literacy and language needs, sample coverage, timeline, and budget to the strengths of CAPI, CASI/ACASI, CATI, or CAWI. Be ready to mix modes with safeguards when a single mode can’t deliver coverage and quality together.
Decision criteria checklist
Start by answering these questions to narrow options before piloting.
- How sensitive are the key questions, and would privacy (CASI/ACASI) reduce social desirability bias?
- How complex is the instrument (rosters, loops, grids) and does it require interviewer assistance (CAPI/CATI)?
- What are literacy and language needs, and will audio or visual aids materially improve comprehension (ACASI/Video‑CASI)?
- What is connectivity like in target areas, and do you need offline data collection and delayed sync (CAPI)?
- What supervision and quality‑control capacity do you have for GPS verification, audio audits, and recontacts (CAPI/CATI)?
- What are your budget and timeline constraints, and where can you trade device spend for scale (CAWI) or speed (CATI)?
Use the answers to select a primary mode. Then test a small hybrid (e.g., CAPI with an ACASI module for sensitive blocks) to validate assumptions before scaling.
Common scenarios and recommended modes
Large household or agricultural surveys in mixed‑connectivity regions typically favor CAPI with offline sync. Add an ACASI module for sensitive health or income questions.
Health monitoring and evaluation (M&E) projects often blend CAPI for clinic interviews with short CAWI follow‑ups sent by SMS. This reduces return visits. Adjust for mode effects in analysis.
B2B studies can start with CATI for screening and appointments. Then email a CAWI link for self‑completion of longer technical sections to respect respondent schedules.
Retail or transport intercepts work well with CAPI on tablets for quick routing and barcode or photo capture. Plan battery charging and device security.
For youth sexual health or gender‑based violence (GBV) research, prefer ACASI. Privacy and literacy challenges can undermine interviewer‑administered candor.
CAPI in practice: step-by-step field workflow
Moving from plan to field requires a well‑rehearsed CAPI workflow. Cover instrument prep, training, offline operations, and QA. These steps mirror how high‑stakes projects structure fieldwork, including national statistical programs that conduct computerized personal interviews.
Preparing instruments and devices
Script the questionnaire with clear skip logic, validations, soft and hard checks, and paradata capture. Translate and back‑translate with on‑device testing to verify layout and right‑to‑left or special character rendering.
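To illustrate the routing idea, here is a small, vendor‑neutral sketch of declarative skip logic; the question IDs and conditions are hypothetical and not any platform’s actual syntax.

```python
# Each question carries an optional condition evaluated against the
# answers collected so far; ineligible questions are never shown.

QUESTIONS = [
    {"id": "owns_livestock", "text": "Does the household own livestock?"},
    {"id": "livestock_count", "text": "How many animals?",
     "ask_if": lambda a: a.get("owns_livestock") == "yes"},
    {"id": "income_source", "text": "Main source of income?"},
]

def next_question(answers: dict):
    """Return the first unanswered question whose condition holds."""
    for q in QUESTIONS:
        if q["id"] in answers:
            continue
        cond = q.get("ask_if")
        if cond is None or cond(answers):
            return q
    return None  # interview complete

print(next_question({"owns_livestock": "no"})["id"])  # -> "income_source"
```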
Run cognitive interviews and a small technical pilot to catch routing gaps or performance issues. Then lock a versioned release for field use.
Decide between provisioned devices and bring your own device (BYOD). If BYOD, enforce MDM policies, full‑disk encryption, screen lock, remote wipe, and app whitelisting.
Preload maps, media, and language packs to support offline operation. Verify batteries, chargers, and protective cases.
Training and supervising interviewers
Train interviewers on research ethics, informed consent, probing and neutral tone, device handling, and troubleshooting. Include hands‑on practice and mock interviews.
A typical first deployment includes 1.5–2.5 days of training plus a pilot day with observed interviews and debriefs. Plan ongoing spot checks and refreshers.
Establish supervision rhythms. Hold daily progress check‑ins, review flagged cases, and set recontact protocols for 10–15% of households or respondents to validate key fields. Use supervisor apps or dashboards to allocate workloads, monitor durations, and resolve quality alerts swiftly.
Running fieldwork offline and syncing securely
Adopt an offline‑first plan. Queue assignments on devices and collect data without a signal. Schedule sync windows at safe, connected locations.
Configure automatic encryption on device and TLS for transmission. Set conflict‑handling rules so reassignments and case edits don’t overwrite confirmed records. Offline‑first mobile CAPI is a recommended approach in low‑connectivity areas and is widely used in large field surveys (https://mysurvey.solutions/).
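As a rough illustration of the sync step, the sketch below posts a queued case over TLS and applies a simple conflict rule. The endpoint, payload shape, and “confirmed” status are hypothetical assumptions, not any platform’s API; the requests library verifies TLS certificates by default, so the transport is encrypted in transit.

```python
import json
import requests

SYNC_URL = "https://fieldwork.example.org/api/cases/sync"  # hypothetical endpoint

def sync_case(case: dict, server_status: dict, token: str) -> str:
    # Conflict rule: never overwrite a record a supervisor has confirmed.
    # server_status maps case_id -> status, fetched beforehand.
    if server_status.get(case["case_id"]) == "confirmed":
        return "skipped: server copy is confirmed"
    resp = requests.post(
        SYNC_URL,
        data=json.dumps(case),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()  # surface failed syncs for retry
    return "synced"
```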
After each sync, run integrity checks for duplicates, missing geotags where required, and outlier durations.
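A post‑sync integrity pass might look like the following sketch; the column names and the outlier rule (median absolute deviation) are illustrative.

```python
import pandas as pd

df = pd.read_csv("synced_cases.csv")  # hypothetical export

# Duplicate case IDs.
dupes = df[df.duplicated(subset="case_id", keep=False)]

# Missing geotags where the protocol requires one.
missing_gps = df[df["gps_required"] & df["latitude"].isna()]

# Durations more than 3 median absolute deviations from the median.
med = df["duration_min"].median()
mad = (df["duration_min"] - med).abs().median()
outliers = df[(df["duration_min"] - med).abs() > 3 * mad]

for name, frame in [("duplicates", dupes), ("missing GPS", missing_gps),
                    ("duration outliers", outliers)]:
    print(f"{name}: {len(frame)} case(s)")
```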
Quality assurance: GPS, audio notes, back-checks, and dashboards
Set GPS verification expectations. For interviews requiring location validation, aim for a fix within ~50–100 meters in urban areas and ~150–250 meters in rural areas. Document a waiver process when satellite lock is impossible.
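A simple way to automate the distance check is the haversine formula, comparing the recorded fix against the expected dwelling location; the coordinates and the 100 m urban threshold below are illustrative.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical recorded fix vs. expected location, urban threshold ~100 m.
dist = haversine_m(-1.28650, 36.81720, -1.28710, 36.81650)
print(f"{dist:.0f} m:", "OK" if dist <= 100 else "needs review")
```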
Use brief, consented audio notes or 5–10% targeted audio audits. Review introductions, consent, and complex sections without recording full interviews. Store and access these securely.
Plan 10–15% back‑checks via phone or revisit to confirm identity and selected answers. Add automated alerts for straight‑lining, improbable speeds between cases, or unusually short or long durations.
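The sketch below shows two such alerts, assuming a synced export with illustrative column names, including a precomputed distance between an interviewer’s consecutive cases (e.g., from the haversine function above).

```python
import pandas as pd

df = pd.read_csv("synced_cases.csv", parse_dates=["start_time"])
grid_cols = ["q5_a", "q5_b", "q5_c", "q5_d"]  # hypothetical Likert grid

# Straight-lining: identical answers across the whole grid.
df["straightlined"] = df[grid_cols].nunique(axis=1) == 1

# Improbable speed between an interviewer's consecutive cases.
df = df.sort_values(["interviewer_id", "start_time"])
df["hours_gap"] = (df.groupby("interviewer_id")["start_time"]
                     .diff().dt.total_seconds() / 3600)
df["speed_kmh"] = df["km_from_prev_case"] / df["hours_gap"]  # precomputed distance
df["speed_alert"] = df["speed_kmh"] > 80  # illustrative threshold

print(df.loc[df["straightlined"] | df["speed_alert"], "case_id"].tolist())
```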
Use real‑time dashboards to track completions, refusal rates, average durations, QC status, and map coverage. Guide daily supervision with these insights.
Tools and features to look for in CAI software
Your platform should keep interviewers productive, protect respondents, and give supervisors control without locking you into proprietary formats. Evaluate tools by piloting your instrument and verifying that offline, validation, QA, and export workflows match your reality, not just a demo.
Must-have features (skip logic, validations, multilingual, offline)
Confirm that your short‑listed tools support these essentials and that they work reliably on your target devices.
- Complex skip logic and routing (rosters, loops, fills)
- Robust data validation (range, consistency, regex, soft/hard checks)
- Multilingual instruments with on‑device language switch
- Offline data collection with secure, resumable sync
- Paradata capture (timestamps, GPS, device ID) and audit trails
- Supervisor dashboards for assignments, QC flags, and reassignments
- Flexible exports and integrations (CSV, Stata/SPSS/R, API, webhooks)
After verifying core features, test performance on long rosters and media capture (photos/audio). Check failure modes like interrupted syncs or device loss.
Security and compliance essentials (GDPR, encryption, MDM)
Security and privacy should be baked in. Encrypt data at rest on the device and server. Use TLS in transit and enforce role‑based access with least privilege.
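As a minimal illustration of encryption at rest, the sketch below uses the Python cryptography package’s Fernet recipe (AES‑128‑CBC plus HMAC). In a real deployment the key would live in a hardware‑backed keystore managed by your platform, not be generated inline.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: provisioned once, stored in a keystore
f = Fernet(key)

record = b'{"case_id": "HH-0042", "answers": {"q1": "yes"}}'
token = f.encrypt(record)            # ciphertext written to device storage
assert f.decrypt(token) == record    # decrypted only at sync/processing time
```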
Minimize personally identifiable information (PII) in instruments. Separate keys from responses where possible, and implement granular consent management—especially for GPS and audio. Pair software controls with MDM policies that restrict installations, enforce screen locks and remote wipe, and manage OS patching. NIST’s mobile device guidance is a practical baseline (https://csrc.nist.gov/publications/detail/sp/800-124/rev-2/final).
Align governance with GDPR principles on lawful basis, purpose limitation, data minimization, and data subject rights (https://gdpr.eu/). Consult industry guidance for research‑specific data protection norms.
Budgeting and ROI
A clear cost model helps you defend choices and avoid surprises. While CAPI introduces device and licensing costs, it removes data entry, reduces callbacks and field revisits via validation, and accelerates delivery. This usually yields net savings on medium‑to‑large studies.
Cost components (software, devices, connectivity, staffing)
List these line items to build a total cost of ownership and spot savings opportunities.
- Software: licenses, hosting, premium modules, support
- Devices: tablets/phones, cases, power banks, spares, MDM
- Connectivity: SIMs/data plans, Wi‑Fi for sync sites, VPN
- Staffing: interviewers, supervisors, trainers, QA staff
- Travel and logistics: per diems, transport, secure storage
- Security and compliance: data protection officer (DPO) time, consent materials, audits
Expect one‑time setup (scripting, translations, training) plus recurring per‑month or per‑user fees. BYOD can reduce device spend but increases security variability and support load.
To close your ROI plan, quantify eliminated data entry, reduced error rates and rework, shorter field timelines, and fewer return visits due to on‑device checks.
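A back‑of‑envelope break‑even calculation can anchor that plan; all figures below are purely hypothetical placeholders.

```python
# Compare one-time CAPI setup costs against recurring per-wave savings.
capi_setup = 18_000         # devices, licenses, scripting, training (USD)
saved_data_entry = 9_000    # keying and double-entry staff no longer needed
saved_revisits = 4_500      # fewer callbacks thanks to on-device checks
saved_error_rework = 2_500  # less cleaning and re-fielding

savings_per_wave = saved_data_entry + saved_revisits + saved_error_rework
print(f"Break-even after {capi_setup / savings_per_wave:.1f} survey waves")
```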
Ways to reduce cost without sacrificing quality
Pilot with a focused sample to optimize routing and checks before you scale devices and licenses. Use targeted QA sampling—e.g., 10% back‑checks overall with higher rates for new interviewers—to control supervision hours while maintaining detection power.
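A tiered sampling rule like that is easy to automate; the sketch below assumes a case export with illustrative column names and rates.

```python
import pandas as pd

cases = pd.read_csv("completed_cases.csv")  # hypothetical export

def backcheck_sample(group: pd.DataFrame) -> pd.DataFrame:
    # 25% for interviewers in their first wave, 10% baseline otherwise.
    rate = 0.25 if group["is_new_interviewer"].iat[0] else 0.10
    return group.sample(frac=rate, random_state=42)

selected = (cases.groupby("interviewer_id", group_keys=False)
                 .apply(backcheck_sample))
print(f"{len(selected)} of {len(cases)} cases queued for back-checks")
```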
Stagger device purchases by wave and adopt offline‑first sync to cut data fees. Centralize updates via MDM so you don’t burn supervisor time on manual installations. Finally, standardize exports and automate dashboards to reduce analyst hours between waves.
Data governance, ethics, and respondent protection
Strong governance protects people and your study’s credibility. Build privacy‑by‑design into your instruments. Obtain informed consent that matches your data uses. Maintain documentation and audit trails that withstand sponsor and regulator scrutiny. ESOMAR’s data protection guidance is a useful reference (https://esomar.org/resources/data-protection).
Consent, anonymity, and handling sensitive data
Use layered consent that explains purpose, voluntary participation, risks and benefits, data sharing, and retention in clear language. Include separate consents for GPS, audio, or recontact.
Prefer anonymization or pseudonymization by separating PII from response data. Store linkage keys securely and restrict access to a small, approved group. Apply retention limits tied to contractual or legal requirements and document deletion procedures. Provide respondents with contact details to exercise rights of access, correction, or withdrawal.
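One way to implement that separation is a keyed pseudonym plus a locked‑down linkage store, as in this sketch; the secret key handling and record shapes are illustrative.

```python
import hashlib
import hmac
import json

LINKAGE_KEY = b"store-me-in-a-secrets-manager"  # never stored alongside responses

def pseudonym(national_id: str) -> str:
    """Keyed HMAC pseudonym: stable for linkage, useless without the key."""
    return hmac.new(LINKAGE_KEY, national_id.encode(), hashlib.sha256).hexdigest()[:16]

respondent = {"national_id": "A1234567", "name": "Jane Doe", "q1": "yes"}

pid = pseudonym(respondent["national_id"])
response_record = {"pid": pid, "q1": respondent["q1"]}  # goes to the analysis dataset
linkage_record = {"pid": pid,                            # goes to the restricted store
                  "pii": {k: respondent[k] for k in ("national_id", "name")}}

print(json.dumps(response_record))
```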
Documentation, audit trails, and reproducibility
Version every questionnaire and store release notes, translations, and change logs. This ensures analyses are reproducible.
Capture paradata and system logs (user, device, timestamp, action) to support QC reviews and incident investigations. Maintain standard operating procedures (SOPs) for instrument changes, device loss or theft, data incident response, and QA escalation. Archive final instruments and codebooks alongside analysis scripts to enable audits and future reuse.
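An append‑only, line‑oriented log is a simple pattern for such trails; the file name and fields in this sketch are illustrative.

```python
import datetime
import json

def log_action(user: str, device: str, action: str, case_id: str) -> None:
    """Append one JSON line per action for later QC review."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user, "device": device,
        "action": action, "case_id": case_id,
    }
    with open("audit.jsonl", "a") as fh:
        fh.write(json.dumps(entry) + "\n")

log_action("sup_04", "tablet-17", "reassigned_case", "HH-0042")
```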
FAQs
What are the minimum device and OS specs for reliable CAPI fieldwork? Aim for modern Android or iOS with at least 3–4 GB RAM, 32–64 GB storage, GPS, a camera, and batteries lasting a full shift. Keep OS versions within vendor support windows and patch regularly via MDM.
How do BYOD policies compare with provisioned devices for data quality and security in CAPI? BYOD lowers capital expense and speeds scale‑up but increases heterogeneity, security risk, and support complexity. Provisioned devices cost more upfront but improve control, consistency, and QC, which often pays off on high‑stakes or long‑running studies.
How do I choose between CAPI, CAWI, CATI, and CASI for a study with sensitive questions and low internet coverage? Prioritize CAPI with an embedded ACASI module for sensitive sections. Run offline‑first with scheduled sync. Consider limited CAWI or CATI for follow‑ups where connectivity allows, then monitor mode effects in analysis.
What QA targets should I set for GPS verification, audio audits, and back‑checks in CAPI projects? Common starting points are GPS validity on all location‑required cases with 50–250 m thresholds by context, 5–10% targeted audio audits focused on consent and complex sections, and 10–15% back‑checks. Use higher sampling for new staff or outlier patterns.
What are the typical training hours per interviewer for a first CAPI deployment? Plan 12–20 hours across ethics, instrument, device skills, and mock interviews. Add a pilot day with observed interviews and feedback before full launch.
How can I mix CAPI with CAWI or CATI without introducing problematic mode effects? Keep identical question wording and visual design where possible. Randomize consistently, segment samples by mode, and use calibration or weighting to adjust for differential response patterns observed in the pilot.
How do I implement GDPR‑compliant consent when collecting GPS coordinates and other paradata? Provide a separate, explicit consent for GPS and paradata explaining purpose and retention. Offer a no‑GPS path where feasible. Minimize precision or storage duration and document access controls and deletion procedures.
Which encryption standards and device management controls are appropriate for CAI platforms? Use full‑disk encryption on devices, TLS for data in transit, and server‑side encryption at rest. Enforce role‑based access and MDM controls for remote wipe, patching, and app whitelisting. Align with NIST mobile guidance and GDPR principles.
What line items drive total cost of ownership for CAPI and where can I optimize? Major drivers are software, devices and MDM, connectivity, staffing (interviewers, supervisors, QA), travel, and compliance. Optimize via targeted QA sampling, offline‑first sync to cut data costs, staged device procurement, and automation of exports and dashboards.
When should I prefer ACASI over interviewer‑administered CAPI for sensitive topics? Choose ACASI when privacy and literacy barriers threaten candor—e.g., sexual behavior, intimate partner violence (IPV), or substance use. Embed ACASI inside an otherwise interviewer‑administered CAPI workflow to retain coverage and control.

