Abstract 347: Linguistic-Based Methods for Understanding Survey Response Vs Non-Response: Finding Patterns in Text Messages from Crisis Support Delivered through Mobile Devices (Society for Prevention Research 27th Annual Meeting)

Schedule:
Thursday, May 30, 2019
Pacific N/O (Hyatt Regency San Francisco)
Carlos Gallo, PhD, Research Assistant Professor, Northwestern University, Chicago, IL
Anthony R Pisani, PhD, Associate Professor, University of Rochester, Rochester, NY
Jaclyn Weiser, MS, Senior Data Scientist, Crisis Text Line, New York, NY
Bob Filbin, MS, Senior Data Scientist, Crisis Text Line, New York, NY
Madelyn Gould, PhD, MPH, Professor, Columbia University, New York, NY
Background: Response rates in Internet-based surveys are often quite low, and it is difficult to know what distinguishes those who take surveys from those who do not. In this paper, we report our use of linguistic data from text messages to understand missing responses on quality improvement surveys conducted by Crisis Text Line (CTL), the largest US provider of crisis support services via text message. This motivating context is highly significant because (a) it exemplifies circumstances in which participants may have a range of reasons for completing or not completing a survey, and (b) technology-based platforms for addressing mental health and crisis needs are of substantial public health importance.

Methods: In consultation with the research team, CTL developed a survey to assess the needs of people in crisis who texted CTL and to measure their satisfaction, mental health outcomes, and reasons for texting. Texters from 251,260 conversations were invited to respond to the survey between July 1, 2017, and November 1, 2017. The survey response rate was strong: 15-19%, depending on the survey item. As a first step toward understanding the subsample of individuals who responded to the survey (responders) versus those who did not (non-responders), we extracted linguistic features from their text messages. Our eventual goal is to infer, from the available data, the quality of crisis support delivery for non-responders.

Results: We extracted message length, delay between messages, and linguistic variables such as positive and negative emotion words, function words (e.g., articles, prepositions), and verb tenses, among others. Differences in these linguistic features are highlighted for the survey responder and non-responder groups. For example, responders and non-responders appear to differ in their use of positive emotion words in their text messages (mean = 93 versus 129; SD = 77 versus 83, respectively).
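The feature extraction described above can be sketched roughly as follows. This is a minimal illustration, not the study's actual pipeline: the mini word lists, function names, and example conversation are all hypothetical stand-ins for the dictionary-based (LIWC-style) categories the abstract mentions.

```python
from datetime import datetime

# Illustrative mini-lexicons; the study's actual word dictionaries are not shown here.
POSITIVE_WORDS = {"good", "great", "better", "happy", "thanks", "hope", "love"}
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "or", "but"}

def extract_features(messages):
    """Compute per-conversation linguistic features from texter messages.

    `messages` is a list of (timestamp, text) tuples in chronological order.
    Returns message counts, mean length, mean inter-message delay, and
    word-category counts (positive emotion words, function words).
    """
    texts = [text for _, text in messages]
    tokens = [w.strip(".,!?").lower() for t in texts for w in t.split()]
    # Delay between consecutive messages, in seconds.
    delays = [
        (later - earlier).total_seconds()
        for (earlier, _), (later, _) in zip(messages, messages[1:])
    ]
    return {
        "n_messages": len(messages),
        "mean_length": sum(len(t) for t in texts) / len(texts),
        "mean_delay_s": sum(delays) / len(delays) if delays else 0.0,
        "positive_count": sum(tok in POSITIVE_WORDS for tok in tokens),
        "function_count": sum(tok in FUNCTION_WORDS for tok in tokens),
    }

# Hypothetical three-message conversation for illustration.
convo = [
    (datetime(2017, 7, 1, 12, 0, 0), "I feel really bad today"),
    (datetime(2017, 7, 1, 12, 2, 30), "Thanks, that helps a bit"),
    (datetime(2017, 7, 1, 12, 5, 0), "I hope things get better"),
]
features = extract_features(convo)
```

Per-conversation feature vectors like this one can then be compared between responder and non-responder groups.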

Conclusions: Mobile technology is increasing the reach of crisis support, yet it also requires scalable methods for assessing quality of delivery. Internet-based evaluations help assess this quality; however, many users do not respond to questionnaires. Linguistic analysis and machine-based methods can complement the existing tool set of evaluation methods and can help improve crisis support delivery.