Research

Recent research, grouped by stream.

See my vita or my Google Scholar profile for a full list of my publications.

Stream: Behavioral Cybersecurity

This research seeks to identify predictors of why individuals disregard security messages, and to develop and test interventions that mitigate such disregard. It applies theories and methods from both psychology and neuroscience.

1. Do Security Fear Appeals Work when they Interrupt Tasks? A Multi-Method Examination of Password Strength

With: Anthony Vance, Dennis Eggett, Detmar Straub, Kirk Ouimet

Accepted for forthcoming open-access publication at MISQ.

This paper is a follow-up to the wildly popular “Enhancing Password Security through Interactive Fear Appeals: A Web-Based Field Experiment”, HICSS 2013.

The original data for this paper was collected through a deception protocol on the website Socwall.com, with password tooltip treatments designed and implemented by Kirk Ouimet. Later versions of the paper required collecting additional data, including running a focus group. I re-implemented the password tooltip treatments in several other website shells – first for BYU, then for Temple. We didn’t end up using the BYU shell to collect more data, but we did use the Temple shell during a focus group that Tony ran with students there. I also re-implemented the Socwall shell, deploying all three on Heroku. I initially used the Social-Engineer Toolkit to clone the sites, because I’m cool.

MISQ forthcoming
Vance, A., Eargle, D., Eggett, D., Straub, D., Ouimet, K. “Do Security Fear Appeals Work When They Interrupt Tasks? A Multi-Method Examination of Password Strength,” MIS Quarterly, forthcoming.
HICSS 2013
Vance, A., Eargle, D., Ouimet, K., and Straub, D. “Enhancing password security through interactive fear appeals: A web-based field experiment.” In Proceedings of the 46th Hawaii International Conference on System Sciences (HICSS 2013), pp. 2988-2997.

Links to resources:

See below for links to live demonstrations of some of the tooltip portals. Be warned, though: the READMEs there are “research notes,” which means they are messy.

2. More harm than good? How security messages that interrupt make us vulnerable

Examines the impact of dual-task interference on security message disregard and tests a timing-based intervention to identify the best times to present security messages in online browsing contexts. Uses fMRI and field-study methodologies.

Citation
Jenkins, J., Anderson, B., Vance, A., Kirwan, B. and Eargle, D. “More harm than good? How security messages that interrupt make us vulnerable.” Information Systems Research, 27, 4 (2016), 880-896. Awarded ISR’s “Best Published Paper” for 2016. doi: 10.1287/isre.2016.0644

3. The Fog of Warnings: How Non-essential Notifications Blur with Security Warnings

With: Anthony Vance, Bonnie Anderson, Brock Kirwan, Jeff Jenkins

Through a series of lab and field experiments, we assess the impact of exposure to system notifications of varying degrees of visual similarity to security messages, using objective measures such as reaction times and fMRI response data.

Targeting submission to MISQ, October 2021

Conference version
Vance, A., Eargle, D., Jenkins, J.L., Kirwan, C.B., and Anderson, B.B. (2019) “The Fog of Warnings: How Non-Essential Notifications Blur with Security Warnings.” In Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019). Santa Clara, CA: USENIX Association. https://www.usenix.org/conference/soups2019/presentation/vance

Resources:

  • Symposium on Usable Privacy and Security (SOUPS’19) submission (abstract, USENIX pdf)
  • A testing page for some of the modals I made for use during the task, and a portal for testing treatment conditions and the full or piecemeal task protocol. I built the whole experimental task from scratch using JavaScript and the psiTurk Python framework.

4. How much is your security worth? Applying a risk tradeoff paradigm to explain the bimodal nature of user elaboration over interruptive security messages

With: Dennis Galletta

Why do employees disregard computer security messages, opening the organization to potential information security breaches? One research perspective assumes that humans who fall prey to such attacks rely solely on automatic information processing, and that user interfaces (such as Google Chrome browser security popups and overlays, or Microsoft Word security dialogs) must therefore be better designed to capture and hold attention and to educate users, so that users more carefully and consciously evaluate their information security decisions. This research project, however, takes the view that employees also weigh monetary costs and benefits when deciding whether to adhere to or disregard security messages. It gathers data through a series of online deception-protocol website experiments in which users are exposed to security messages that interrupt an ostensible primary task. Psychometric measures of attention, including mouse-cursor tracking and reaction times, are captured and used to predict security behaviors. The monetary “cost” of disregarding a security message is experimentally varied, and its impact on prompting attention and security behaviors is examined. Survey and focus group data are also collected to probe users’ thought processes.
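
To give a concrete sense of how such measures could feed into the analysis, here is a minimal Python sketch of predicting message disregard from the attention measures and the cost manipulation. The data file and column names (reaction_time_ms, mouse_path_deviation, cost_condition, disregarded_message) are hypothetical placeholders, and the scikit-learn pipeline is illustrative rather than the modeling actually used in the project.

```python
# Minimal illustrative sketch: predict security-message disregard from
# attention measures and the experimentally varied monetary cost condition.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

trials = pd.read_csv("security_message_trials.csv")  # hypothetical export

# Attention measures captured per security-message exposure, plus the
# manipulated monetary cost of disregarding the message.
X = trials[["reaction_time_ms", "mouse_path_deviation", "cost_condition"]]
y = trials["disregarded_message"]  # 1 = clicked through / ignored the message

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.2f}")
```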

Stream: Political Biases in Online News Consumption

This stream tests the degree to which political ideological confirmation bias influences individuals’ reactions to online news. It examines elements such as reader-source and reader-content ideological alignment, in addition to predictors of perceptions of comments posted on online news articles. It seeks mitigations that can help address online news-related societal divides.

1. A Spoonful of Sugar: Blending Online News Source and Content to Counter Ideological-Alignment News Biases and Encourage Political Group Depolarization

With: Valerie Bartelt, Zlatana Nenova, Dennis Galletta

Anecdotes suggest that political group polarization may impact readers’ perceptions of news articles so strongly that readers may call articles “fake news” solely based on their ideological alignment with the publication source, regardless of the article’s content. While researchers have explored confirmation bias in social media, studies have not yet teased out the differential effects of reader ideological alignment with article content (“content-friendliness”) and source (“source-friendliness”) on attitudes, beliefs, and intended behaviors. Using a mixed design, 133 MTurk participants read and reacted to polarizing news articles, with article content presented as if it came from randomly assigned sources.


Stream: Identifying the Information Systems Research Nomological Network via Machine Learning

This stream applies methods from machine learning and topological data analysis to explore the nomological network of constructs used in information systems research, and to create tools that improve academic literature review and construct-creation processes.

1. Creating Construct Distance Maps with Machine Learning: Stargazing Trust

With: Kai Larsen, David Gefen, Stacie Petter

A design-science approach to creating a tool that graphs the nomological space of all survey items used in the information systems literature. It applies methods from topological data analysis to visualize this space, based on predicted “distances” between item pairs generated by a machine learning model trained on a sample of survey item-pair relationships (distances) coded by domain experts. Besides yielding insights into already-used IS constructs, the resulting tool can be used to place new survey items in context within the nomological space.
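
As a rough illustration of the distance-map idea (not the paper’s actual pipeline), the sketch below learns item-pair distances from expert-coded examples and then projects items into two dimensions for plotting. The file names, the TF-IDF pair features, and the MDS projection are all simplifying assumptions on my part.

```python
# Illustrative sketch: learn item-pair "distances" from expert-coded pairs,
# predict distances for the remaining pairs, then embed items in 2D.
# File names, features, and models are simplifying assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import MDS

items = pd.read_csv("survey_items.csv")        # hypothetical: item_id, text
coded = pd.read_csv("expert_coded_pairs.csv")  # hypothetical: i, j, distance

emb = TfidfVectorizer().fit_transform(items["text"]).toarray()

def pair_features(i, j):
    # One simple, symmetric representation of an item pair: the element-wise
    # absolute difference of the two items' text vectors.
    return np.abs(emb[i] - emb[j])

X_train = np.array([pair_features(i, j) for i, j in zip(coded["i"], coded["j"])])
model = RandomForestRegressor().fit(X_train, coded["distance"])

# Predict a full pairwise distance matrix, then project items into 2D.
n = len(items)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = model.predict([pair_features(i, j)])[0]

coords = MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)
```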

Ongoing research.

AMCIS Citation
Larsen, K.R., Gefen, D., Petter, S., and Eargle, D. (2020) “Creating Construct Distance Maps with Machine Learning: Stargazing Trust.” In Americas Conference on Information Systems (AMCIS 2020), online. Awarded AMCIS’s “Best Completed Paper” for 2020. 60% acceptance rate.


Stream: Using crowdsourcing platforms for research data collection

This research relates to developing and using open-source code to collect data on crowdsourcing platforms. It stems from collaborations that have arisen from my open-source contributions to projects that facilitate collecting experimental data on online crowdsourcing platforms, such as psiTurk.

1. When Bots Attack: Threat Modeling and Mitigations of Attacks Against Online Behavioral Experiments

With: Todd M. Gureckis, Jordan W. Suchow

Psychology and behavioral data are increasingly collected online rather than in brick-and-mortar lab rooms. However, panic has arisen about the degree to which such data are impacted by “bots,” or by malicious actors gaming the system in order to maximize participation payouts. This paper applies models from cybersecurity – specifically, the NIST Cybersecurity Framework’s Five Functions – to systematically evaluate the threat of bots, and to show the process by which controls can be developed to mitigate identified threats. Several cross-industry controls are suggested, including the development of machine learning models that detect anomalous participant behavior, aggregated across participating researchers’ data. The behavioral research community can use these models to defend collected data, and to argue for cross-industry grants to develop novel approaches.
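
As one illustration of what such an anomaly-detection control might look like, here is a short Python sketch that flags suspicious participant sessions. The data file, the behavioral features, and the choice of IsolationForest are my assumptions, not necessarily what the paper proposes.

```python
# Illustrative sketch: flag anomalous (possibly bot-like) participant sessions
# from per-participant behavioral features. The CSV file, column names, and
# detector choice are hypothetical, not the paper's actual controls.
import pandas as pd
from sklearn.ensemble import IsolationForest

sessions = pd.read_csv("participant_sessions.csv")  # hypothetical export

features = sessions[[
    "median_response_time_ms",  # implausibly fast responding
    "response_entropy",         # e.g., always choosing the same option
    "completion_time_s",
    "n_prior_hits_from_ip",     # repeated sign-ups from one address
]]

detector = IsolationForest(contamination=0.05, random_state=0)
sessions["flagged"] = detector.fit_predict(features) == -1  # -1 = anomaly

# Review flagged sessions before excluding them or withholding payment.
print(sessions.loc[sessions["flagged"], "worker_id"])  # worker_id hypothetical
```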

Ongoing research