One of the most exciting areas in usable security is modeling how social influences affect security and privacy behaviors, and designing new systems that better account for social design dimensions. Our lab is at the forefront of this area. Social influences strongly affect security and privacy behaviors, yet end-user security and privacy systems rarely incorporate social design principles. A helpful framing comes from architecture, which defines the oppositional concepts of “sociopetal” (designed for group social interaction) versus “sociofugal” (designed to separate individuals into their own spaces). The social cybersecurity project explores the question: how can we design security and privacy systems that are more sociopetal? Specifically, how can we make systems that are more observable (i.e., it is easier to see what others do), cooperative (i.e., protection strengthens as people work toward mutually beneficial security outcomes), and stewarded (i.e., experts can act on behalf of others’ security and privacy)?
We are looking for students interested in social cybersecurity who have strong web / mobile programming experience as well as students with strong qualitative research skills and an ability to design and run randomized, controlled experiments.
The near future of IoT promises a fully connected and automated physical environment. However, keeping up with the security and privacy of an exponentially increasing number of internet-connected and computationally enabled devices is beyond the capacity of most people. AI and automation seem to be the only tractable solution, but security and privacy also require some level of individual agency: many decisions are a matter of individual preference, and people need to know how to access their data even if the AI/automation tool malfunctions. Social AI assistants seem like a promising way forward that offers the best of both automation and individual agency: the AIs can help identify and solve security and privacy vulnerabilities for users, while keeping them in the loop through user-friendly conversational interactions.
We are looking for students interested in AI-assistants for security and privacy who have strong web / mobile programming experience AND prior experience with AI / ML.
End-users are not the only users who matter. To ensure that good security and privacy practices are incorporated into the next generation of software applications, it is imperative that we create usable platforms and APIs for developers. Developers often work under tight deadlines and with limited resources; accordingly, they have little time to devote to securing the software they write. A second problem is that research advancements are rarely reflected in programmers’ day-to-day development pipelines. How can we create development tools, APIs and platforms that make writing secure code easy, and that make it simple to incorporate research advancements into everyday development practice?
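To make the idea of a “secure by default” API concrete, consider password storage, a task developers routinely get wrong. The sketch below is a hypothetical illustration (not a system from our lab, and the function names are our own) of an API whose safe path is the only path: the hash function, its cost parameters, and the salting are fixed inside the helper, so a hurried caller cannot weaken them or forget a step.

```python
# Hypothetical "misuse-resistant" password API, using only the Python
# standard library. Callers see two functions and no tunable knobs.
import hashlib
import hmac
import os

_SALT_LEN = 16  # bytes of random salt, stored alongside the digest


def hash_password(password: str) -> bytes:
    """Return salt || scrypt digest. Parameters are fixed so callers
    cannot accidentally pick a weak configuration or omit the salt."""
    salt = os.urandom(_SALT_LEN)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest


def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the digest with the stored salt and compare safely."""
    salt, digest = stored[:_SALT_LEN], stored[_SALT_LEN:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, digest)


stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("wrong guess", stored)
```

The design choice being illustrated is that usable security for developers often means removing decisions, not documenting them: the low-level primitives (`hashlib`, `hmac`) are wrapped so the easy call is also the safe call.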
We are looking for students interested in usable developer tools for security and privacy who have strong web / mobile programming experience. These students may assist me and more senior students directly with the creation of large open-source development platforms.
Some of my project ideas don’t cleanly fit into any of the other categories. These projects explore fundamentally new ways of thinking about or designing security and privacy systems that are high-risk and high-reward. Examples include AR games to improve social accountability, tangible / wearable interfaces to increase the viscerality of security and privacy dangers and protections, and systems to align users’ primary non-security related goals with actions they perform for security and privacy.
We are looking for students interested in new frontiers in usable security who have strong web / mobile programming experience, hardware prototyping experience or experience with AR / VR.
As security researchers, our ethos is to mitigate abusive uses of computing systems. As usable security researchers, we must go a step further and help end-users deal with and recover from these abusive uses of computing when they do occur. Combating online harassment and misinformation falls into this category.
We are looking for students interested in combating online hate, harassment and misinformation who have strong quantitative or qualitative data analysis skills, as well as students with prior experience with AI / ML and web programming experience.
[P17] Jason Wiese, Sauvik Das, John Zimmerman and Jason Hong. Evolving the Ecosystem of Personal Behavioral Data. HCI Journal Special Issue on The Examined Life: Personal Uses for Personal Data, 2017.
[P16] Sauvik Das, Gierad Laput, Chris Harrison and Jason I. Hong. Thumprint: Socially-Inclusive Local Group Authentication Through Shared Secret Knocks. In Proceedings of the 35th SIGCHI Conference on Human Factors in Computing Systems (CHI), 2017. Best Paper honorable mention (top 4% of submissions).
[P15] Sauvik Das. Social Cybersecurity: Understanding and Leveraging Social Influence to Increase Security Sensitivity. it – Information Technology (German journal), Special Issue on Usable Security and Privacy, 2016. Invited paper.
[P14] Sauvik Das, Jason Wiese and Jason I. Hong. Epistenet: Facilitating Programmatic Access & Processing of Semantically Related Personal Mobile Data. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI), 2016. (Acceptance Rate: 23%).
[P13] Alexander de Luca, Sauvik Das, Iulia Ion, Martin Ortlieb and Ben Laurie. Expert and Non-Expert Attitudes towards (Secure) Instant Messaging. In Proceedings of the 10th International Symposium on Usable Privacy and Security (SOUPS), 2016.
[P12] Haiyi Zhu, Sauvik Das, Yiqun Cao, Shuang Yu, Aniket Kittur and Robert Kraut. A Market in Your Social Network: The Effects of Extrinsic Rewards on Friendsourcing and Relationships. In Proceedings of the 34th SIGCHI Conference on Human Factors in Computing Systems (CHI), 2016. (Acceptance Rate: 23%). Best Paper honorable mention (top 4% of submissions).
[P11] Sauvik Das, Jason I. Hong and Stuart Schechter. Testing Computer-Aided Mnemonics and Feedback for Fast Memorization of High-Value Secrets. In Proceedings of the NDSS Workshop on Usable Security (USEC), 2016.
[P10] Sauvik Das, Alexander Zook and Mark Riedl. Examining Game World Topology Personalization. In Proceedings of the 33rd SIGCHI Conference on Human Factors in Computing Systems (CHI), 2015. (Acceptance Rate: 23%).
[P9] Sauvik Das, Adam Kramer, Laura Dabbish and Jason I. Hong. The Role of Social Influence in Security Feature Adoption. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work (CSCW), 2015. (Acceptance Rate: 28.3%).
[P8] Sauvik Das, Adam Kramer, Laura Dabbish and Jason I. Hong. Increasing Security Sensitivity with Social Proof: A Large Scale Experimental Confirmation. In Proceedings of the 21st ACM Conference on Computer and Communications Security (CCS), 2014. (Acceptance Rate: 19.5%). Honorable mention for NSA best scientific cybersecurity paper in 2014 (top 3 of 50 anonymous nominations).
[P7] Sauvik Das, Tiffany Hyun-Jin Kim, Laura Dabbish and Jason I. Hong. The Effect of Social Influence on Security Sensitivity. In Proceedings of the 8th International Symposium on Usable Privacy and Security (SOUPS), 2014. (Acceptance Rate: 26.5%).
[P6] Eiji Hayashi, Sauvik Das, Shahriyar Amini, Jason Hong and Ian Oakley. CASA: Context-Aware Scalable Authentication. In Proceedings of the 7th International Symposium on Usable Privacy and Security (SOUPS), 2013. (Acceptance Rate: 27%).
[P5] Sauvik Das, Eiji Hayashi and Jason Hong. Exploring Capturable Everyday Memory for Autobiographical Authentication. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), 2013. (Acceptance Rate: 23%). Best Paper Award (top 1% of all submissions).
[P4] Sauvik Das and Adam Kramer. Self-Censorship on Facebook. In Proceedings of the 7th International AAAI Conference on Weblogs and Social Media (ICWSM), 2013. (Acceptance Rate: 20%).
[P3] Manya Sleeper, Rebecca Balebako, Sauvik Das, Amber McConohy, Jason Wiese and Lorrie Cranor. The Post That Wasn’t: Examining Self-Censorship on Facebook. In Proceedings of the 16th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW), 2013. (Acceptance Rate: 35.6%).
[P2] Emmanuel Owusu, Jun Han, Sauvik Das and Adrian Perrig. ACCessory: Keystroke Inference Using Accelerometers on Smartphones. In Proceedings of the 12th ACM International Workshop on Mobile Computing Systems and Applications (HotMobile), 2012. (Acceptance Rate: 20.6%).
[P1] Ken Hartsook, Alexander Zook, Sauvik Das and Mark Riedl. Toward Supporting Storytellers with Procedurally Generated Game Worlds. In Proceedings of the 2011 IEEE Conference on Computational Intelligence in Games (CIG), 2011.