Siren

Refining Safety in Dating Apps

Role

User Interface Design

User Interviews

Prototyping

Team

Jasmine To (Team Lead)

Wineury Almonte

Anderson Fisher

Bibilomo Sanni

Timya Harden

Tools

Figma

Miro

Discord

Teams

Google Slides

Timeline

4 Weeks

Sept 2023 - Nov 2023

Wineury Almonte

User Experience Designer

Overview

Siren is a dating app prototype designed to prioritize user safety and foster confident connections.

Inspired by social media slang about "red flags" in dating, the idea was pitched by team leader Jasmine To in August 2023.

For our Interactive Design II class, we spent four weeks using the Lean UX design method to craft a functional prototype.

The Challenge

Online dating platforms currently focus on delivering quick matches, with little analysis of potentially dangerous users.

Lean UX Methodology

We used Jeff Gothelf's Lean UX method to guide our approach, focusing on running experiments to evaluate user assumptions.


The method involves an eight-box canvas, where we documented our initial answers and solutions for each prompt.


Lean UX is ideal for dynamic environments, making it well-suited for university projects.

Business Problem (1)

What business problem have you identified that needs help?

Business Outcomes (2)

(Changes in customer behavior)

What changes in customer behavior will indicate you have solved a real problem in a way that adds value to your customers?

Users & Customers (3)

What types of users and customers should you focus on first?

User Benefits (4)

What are the goals your users are trying to achieve?

What is motivating them to seek out your solution? (e.g., do better at my job or get a promotion)

Solution Ideas (5)

List product, feature, or enhancement ideas that help your target audience achieve the benefits they’re seeking.

Hypotheses (6)

Combine the assumptions from 2, 3, 4, & 5 into the following template statement: “We believe that [business outcome] will be achieved if [user] attains [benefit] with [feature].”

What’s the most important thing we need to learn first? (7)

For each hypothesis, identify the riskiest assumption. This is the assumption that will cause the entire idea to fail if it’s wrong.

What’s the least amount of work we need to do to learn the next most important thing? (8)

Brainstorm the types of experiments you can run to learn whether your riskiest assumption is true or false.

The assumption is…

Users seek safety and meaningful connections, with success defined by consistent use and a focus on quality over quantity in matches.

Meet our Prototype Persona: Aubrey

Aubrey is 25, single, and inexperienced in the dating scene. She’s busy but eager to form meaningful connections—with safety as her top priority. We created Aubrey based on our initial assumptions of who might benefit most from our product, acknowledging that her profile will evolve as we gather real user data.

Obstacles: Skepticism toward strangers, uncertainty about mutual intentions, risk of catfishing, and fear of online harassment.

Needs: Help finding authentic connections, sense of safety, and control over her dating pool.

Solutions - Low Fidelity

Our team defined a solution to address the business problem while meeting our proto-persona’s needs.


We started by sketching possible interfaces to tackle our problem space, aiming to align on a single idea or merge multiple concepts.


Based on team input, we decided to combine 4 popular ideas into one cohesive solution.

Hypothesis Prioritization Matrix (Risk vs. Perceived Value)

High Value, Low Risk: Ship Now + Test Later. You don't believe it's risky, so "do it live" and validate it in production.

High Value, High Risk: Experiment Zone. These are both very valuable and very risky.

Low Value, Low Risk: When there is time (aka never).

Low Value, High Risk: Discard Pile. Little value, high risk.

Prioritizing Hypotheses with Lean UX

We kicked things off by ranking our hypotheses based on risk and value, laser-focusing on those that were high-risk and high-value.


Guided by the principle of “least amount of work to learn the most important thing,” we built quick, low-fidelity MVPs to test our biggest assumptions about user behavior.


Our experiments centered on two key features:


A report callout system and required video calls


With our hypotheses prioritized, we structured the work into two focused sprints to test, validate, and refine key features.

Sprint 1

What’s our focus?


In Sprint 1, our focus was to determine how users interpret and react to behavior flags on profiles.


We wanted to understand whether report callouts (visible indicators of how many times a user has been flagged) would make users feel safer—or if they would be too intrusive.
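
As a concrete illustration of the callout concept (not the team's final logic), here is a minimal TypeScript sketch of how confirmed report counts might map to the green/yellow/red flags we tested; the field names and thresholds are assumptions made for this example.

// Minimal sketch, assuming a simple threshold scheme: the field names and
// thresholds are illustrative, not Siren's actual specification.
type FlagColor = "green" | "yellow" | "red";

interface ProfileReports {
  confirmedReports: number; // reports upheld after review (assumed field)
}

// Hypothetical thresholds chosen only for this example.
const YELLOW_THRESHOLD = 1;
const RED_THRESHOLD = 3;

function flagColorFor(profile: ProfileReports): FlagColor {
  if (profile.confirmedReports >= RED_THRESHOLD) return "red";
  if (profile.confirmedReports >= YELLOW_THRESHOLD) return "yellow";
  return "green";
}

// Example: two confirmed reports would surface a yellow flag on the profile.
console.log(flagColorFor({ confirmedReports: 2 })); // "yellow"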


This sprint was conducted over the course of two weeks.

Generative Research and Concept Validation

Assessing Report Callouts

Key Activities

Create low-fidelity profiles in Figma

Conduct user testing with campus participants

Observe and test how users behaved when evaluating profiles

Week 2

Experimentation

Results of Experimentation

What did we discover?

Advantages

Green/yellow/red flags made users feel safer

Green flags provided clarity

Users were open to dating people with yellow flags

Disadvantages

Confusion about how flag colors were determined

Some believed red-flagged users should be banned entirely

Week 3

UI Testing

Testing Different Splash Screens

Medium-Fidelity

After identifying key pros and cons from our Week 2 experimentation, we used these insights to refine our design.


We developed medium-fidelity prototypes to test how behavior flags should appear within user profiles, ensuring they felt informative yet unobtrusive.


With these prototypes, we conducted user interviews.

Results for Sprint 1

Final Results

High-Fidelity

Users preferred discretion – Reports should be hidden rather than displayed prominently. (Siren Toggle)


Flags were useful, but placement mattered – Reports felt overwhelming when they were too visible outside of a profile.


Users took yellow/red flags into account but didn't always avoid them – Some were still open to matching depending on context.


Redesigned - 2.3.2025

Sprint 2

What’s our focus?


In Sprint 2, our focus was to validate whether a required video call after a set number of text exchanges would improve trust and reduce catfishing.


While video verification could enhance safety, we needed to determine if users would accept it or feel forced into an uncomfortable experience.


The main questions:

Would users accept the requirement or feel forced into an uncomfortable experience?


Would they recognize and react to deceptive behavior during a video call?


How naturally could this feature fit into the dating experience?

Prototyping & Early Usability Testing

Required Video Call

Key Activities

Simulated text and video interactions

Observed user responses after the call

Final usability testing with the high-fidelity prototype

Week 3 (cont.)

Experimentation

Catfish Pictures Used

Discord Message Conversation Example

catfished

11/02/2023 7:24PM

hi

catfish

11/02/2023 7:24PM

hey how are you!

catfished

11/02/2023 7:25PM

I’m good! I like your truck

catfish

11/02/2023 7:26PM

thanks! it took me forever to save up for it lol

so tell me, what are you into?

catfished

11/02/2023 7:26PM

I'm really into horror movies. I have a big bucket list full of ones I want to see. I also like to paint.

What about you?

catfish

11/02/2023 7:26PM

omg i love horror movies, i haven seen any recently because im so busy man

im realy into music and my truck

catfished

11/02/2023 7:27PM

i feel that. I'm a junior in college and I have een so stressed out this semester. What kind of music do you like?

catfish

11/02/2023 7:28PM

omg i love all kinds of music, my fav genre rn is alternative rap


Key Insights

Users noticed catfish behavior during the video call – The feature successfully helped participants identify deceptive users.


Increased trust in matches – Video verification gave users more confidence before deciding to meet in person.


Preferred as a safety net – While some users didn’t like being required to call, most saw it as a valuable security measure rather than an inconvenience.

Week 4

More Usability Testing
High-Fidelity Prototypes

Redesigned - 2.3.2025

Results for Sprint 2

Final Results

Final Iteration: Unlocking Calls

Prevent users from calling too early – Calls can only be initiated after reaching the required text limit.


Introduce a notification system – Users are warned when they’re nearing the message cap, helping them prepare for the required video call.


Enable text-based scheduling – Instead of an automated system prompt, users can coordinate calls naturally through chat. (A rough sketch of the unlock logic follows below.)
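
To make these rules concrete, here is a minimal TypeScript sketch of the unlock logic under assumed values; the message counts, field names, and functions are illustrative, not part of the final design.

// Minimal sketch, assuming placeholder values: the counts and names below
// are assumptions for this example, not Siren's final specification. It
// shows how calls could stay locked until the required number of text
// exchanges, with a warning as users approach the cap.
interface ChatState {
  messagesExchanged: number; // total messages between the two matched users
}

// Hypothetical values chosen only for this example.
const REQUIRED_TEXTS_BEFORE_CALL = 30;
const WARNING_THRESHOLD = 25;

function canStartVideoCall(chat: ChatState): boolean {
  // Prevent users from calling too early.
  return chat.messagesExchanged >= REQUIRED_TEXTS_BEFORE_CALL;
}

function shouldWarnAboutCap(chat: ChatState): boolean {
  // Notify users when they are nearing the message cap so they can
  // coordinate a call time naturally through chat.
  return (
    chat.messagesExchanged >= WARNING_THRESHOLD &&
    chat.messagesExchanged < REQUIRED_TEXTS_BEFORE_CALL
  );
}

// Example: at 27 messages the user sees the warning; at 30 the call unlocks.
console.log(shouldWarnAboutCap({ messagesExchanged: 27 })); // true
console.log(canStartVideoCall({ messagesExchanged: 30 }));  // true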

Retrospective

Throughout both sprints, our Lean UX approach allowed us to quickly test assumptions, gather feedback, and refine our designs based on real user insights. One of the biggest successes was how users responded to the flagging system—they felt safer knowing there was a way to gauge behavior at a glance.


The required video call also proved effective in reducing catfishing concerns, as users reported feeling more confident about their matches. These features reinforced trust and transparency, two key pillars of our design.


However...


Some users were confused about how flag colors were determined, indicating a need for clearer communication around the reporting process and what actually counts as a “red flag”.


Addressing these concerns in our high-fidelity prototype helped create a more intuitive and user-friendly experience, balancing safety with user autonomy.