Commonwealth Cyber Initiative (CCI) researchers address multidisciplinary challenges

Whether you are an experienced software developer, a teen texting on a smartphone, or an older adult checking a bank statement, cybersecurity is part of your life. Humans and computers interact every minute of every day and cybersecurity is there to keep information safe and actions private. But normal human behavior can compromise safety and privacy.

For the next 12 months, researchers funded by the Commonwealth Cyber Initiative’s (CCI) Northern Virginia Node (NoVa Node) will be exploring the impact of human behavior on cybersecurity systems. Divided into six teams, the researchers will leverage their academic expertise in the social sciences and related fields. The teams include faculty from the Colleges of Engineering and Computing; Humanities and Social Sciences; and Education and Human Development, as well as the School of Business. Each team will explore a different aspect of the problem, aiming to translate its findings into solutions or areas for additional investigation that can benefit the welfare of Virginians.

 

“Human-Centric Training for Privacy and Security Controls: Bridging the Awareness Gap for Diverse Populations”

PI: Vivian Genero Motti, College of Engineering and Computing (CEC), George Mason University; Co-PIs: Samy El-Tawab, and Ahmad Salman, College of Integrated Sciences, James Madison University

If you retired from the workforce 25 years ago, before Wi-Fi, online shopping, banking, or smartphones, you are likely more vulnerable to cyberattacks. In fact, older adults face a disproportionate risk of cyberattacks, yet they lack access to resources and educational materials suited to their needs around online behavior and privacy protection.

Vivian Motti and her team want to do something about that. They plan to reach out to underrepresented users and characterize their level of awareness about cybersecurity. Motti and her team believe that gaining a better understanding of these populations will help inform educational content development, providing content, language, and design aspects that are all suitable to their specific user profiles.

“By adopting a user-centric design approach, this project will ensure that cybersecurity training meets users' needs for minority groups. By involving older adults front and center in the research agenda, we will establish training contents that are appropriate to their level of understanding,” says Motti. Beyond following the training content and retaining what they learn, participants will be able to act to prevent potential attacks that could pose privacy risks.

 

“Impact of Human Behavior in a Mixed Traffic Environment”

PI: Linghan Zhang, CEC; Co-PIs: Nirup Menon, School of Business; Nupoor Ranade, College of Humanities and Social Sciences (CHSS)

As autonomous vehicles become more prevalent, they increasingly share the road with human-driven vehicles in a mixed traffic environment. In mixed traffic, the behaviors of human drivers are unpredictable and can lead to situations that confuse autonomous vehicles and cause adverse events for both.

The CCI NoVa Node’s research in autonomous vehicles (AVs) has already garnered attention from vehicle manufacturers such as Ford, Cadillac, and Daimler-Benz. Linghan Zhang and her team aim to extend that research by studying their use in mixed traffic.

According to Zhang, the team’s goal is to reflect driving reality through a multi-vehicle simulation in mixed traffic, using driving conditions that have led to real-world collisions in the past. She says, “Prior research only focuses on a single user’s behavior, and the data collected is mainly limited to surveys and interviews. With objective driving data missing, prior experiments did not reflect on-road driving reality.”

This project could yield valuable data on how human driver behaviors affect other components of mixed driving environments, especially in security- and safety-critical contexts where human error is inevitable, and could uncover what humans need to know when driving alongside AVs. The team expects the results will be significant for autonomous vehicle implementation and policymaking.

 

“Towards Building Cyber-Security Resilience in a COVID-Induced Virtual Workplace”

PI: Amitava Dutta; Co-PI: Pallab Sanyal, School of Business, George Mason University

Before COVID-19 rocked our world, individuals and businesses were already increasing their online presence. The pandemic accelerated that shift. People who were not comfortable in the online environment were forced to go online, and people who were already comfortable expanded their online presence to activities they had previously conducted in person.

“In short, COVID-19 has caused a shift from organizational ecosystems to a virtual workplace for employees, which has opened multiple vectors for cyberattacks,” says Amitava Dutta, professor at the School of Business. “Our research focuses on the behavioral and organizational aspects of cybersecurity and is motivated by the ongoing transformations following the onset of the COVID-19 pandemic.”

In their project, the team will investigate the significant changes in online behavior following the onset of the COVID-19 pandemic. They expect their insights will help organizations build greater cybersecurity resilience in a virtual workplace.

Because Washington, D.C., and Northern Virginia are home to prominent financial services organizations, these businesses have a strong interest in strengthening their cybersecurity posture to address its behavioral aspects. Soon, Amazon will also have a significant presence, and online retail sales are another area frequently targeted by cybercriminals. If organizations are willing to provide data on customer behavior on their websites, the models developed from the team’s work could be refined and tailored for an important application domain.

 

“Characterizing and Countering User Security Fatigue in Password Enhancement through Deep Learning”

PI: Gerald Matthews, CHSS, George Mason University; Co-PIs: Giuseppe Ateniese and Daniel Barbará, CEC, George Mason University

If you already have a demanding job, you might view maintaining security as an additional burden and fail to keep up with cybersecurity best practices such as updating or changing your passwords.

Professor Giuseppe Ateniese has designed a tool for enhancing password strength, based on a deep learning approach, but psychological factors may limit the adoption and impact of the tool. Everyone can be vulnerable to security fatigue and lax cybersecurity practices can have major societal consequences—threats to national security, financial losses to individuals and organizations, and invasion of privacy.

Security tools powered by artificial intelligence, when successfully introduced, can counteract typical human fallibilities and promote safety in computer systems across government, industry, and personal use. This project investigates the effect of security fatigue on the use of Ateniese’s tool. It will also explore strategies for mitigating fatigue and supporting user engagement.

The team believes that enhancing employees' ability and motivation to maintain effective security protocols has immediate economic benefits, and the research has the potential to suggest design features of security tools that can support commercialization, as well as training protocols.

 

“Enabling Invisible Security and Privacy for Resilient Human-Centric Cybersecurity Systems”

PI: Eric Osterweil, CEC, George Mason University; Co-PI: Matt Canham, CHSS, George Mason University

For decades, cryptography has been one of cybersecurity’s most essential tools. While its utility is certain, its complexity limits its use for non-experts. The result—non-experts fall prey to cybercriminals for many reasons including lack of knowledge, incorrect thought processes, and the inability to invest adequate time and resources to implement proper data protection.

Eric Osterweil and his co-investigator Matt Canham hope to change that through their work with the CCI NoVa Node. “This project will seed a critical foundation for adaptive cybersecurity protections for human users’ end-to-end encryption (E2EE) needs. The results from this project will be used as foundations for enhancing a core staple of Internet communications (email) and future advances in prescriptive protections for Cybersecurity Threat Intelligence (CTI) information sharing,” says Osterweil.

The CTI industry continues to grow, with companies, federal agencies, and international communities relying on CTI. In Virginia, where federal agencies and their partners routinely conduct transactions over email, this is especially true. Their view is that building human usable E2EE protections and extending those to adaptive CTI will be directly relevant to operational cybersecurity projects and needs throughout the industry and public sectors in Virginia.

The pair believes that a key benefit to the Commonwealth will include course-related exposure of this material to the students at George Mason University. “Students will be able to showcase both the results of this work and their own derived qualifications to benefit their entry into local industry and jumpstart their ascension to professional careers,” says Osterweil. 

 

"Characterizing Biases in Automated Scam Detection Tools for Social Media to Aid Individuals with Developmental Disabilities" 

PI: Hemant Purohit, CEC; Co-PIs: Géraldine Walther, CHSS; Matt Peterson, CHSS; YooSun Chung, CEHD

Designers of scam detection tools often focus on improving the computational accuracy of their methods, especially those using state-of-the-art Natural Language Processing (NLP) and Machine Learning (ML) techniques, but their understanding of diverse human behavior can be limited. This project aims to build a foundation for inclusive cybersecurity technologies that protect individuals with disabilities from online scams, using a unique interdisciplinary collaboration between computing and non-computing researchers.

Specifically, the team’s objective is to uncover the biases in the existing scam detection techniques for social media using NLP and ML methods. “We will conduct Eye Tracking analyses using a labeled scam dataset of social media posts from existing literature on online cybersecurity and study the differences between the attention patterns of individuals with and without developmental disabilities when perceiving scam posts,” says Hemant Purohit.

The project hopes to gain insights that will support the development of cybersecurity training to reduce online fraud against individuals with special education needs. At the same time, the researchers want to identify limitations in automated scam detection tools and help create more effective cybersecurity tools that can protect user groups in our communities.