Watching YouTube can be a fraught experience for families. Kids turn to it for entertainment and education, but unrated content, autoplay, and advertisements make it hard for parents to feel confident about what their children are actually watching.


At Common Sense Media, I explored how AI-powered video analysis could help families make more informed decisions about what their kids watch on YouTube. I designed and prototyped user-facing tools that translated LLM-based video analysis into clear, actionable pre-screening insights for parents. Through rapid prototyping and user research with families, I validated strong demand for an experience that flags potentially concerning moments before a video is watched. This work directly shaped the UI, UX, and core capabilities of a beta product now being rolled out by the Common Sense team.

Role

AI Prototyping Intern

Overview

Timeline

3 months

Skills

Tools

Replit (AI-enabled coding platform)

Figma

TypeScript/Next.js

UXTweak (usability testing platform)

Defining the problem

Common Sense Media is known for its parent-facing reviews that help families make informed decisions about TV shows, movies, books, and games for their kids. However, popular platforms such as YouTube remain largely unexplored in terms of content evaluation, despite being heavily used by kids and a major source of anxiety for parents. With recent advances in AI and LLMs enabling content analysis at scale, this presented a clear opportunity to extend Common Sense's mission to a newer, more complex media environment.

Further user interviews and desk research about how parents and kids use YouTube revealed that:

Parents are concerned about violence, profanity, and suggestive content, which can appear even with Restricted Mode enabled, slipping through via clickbait thumbnails, autoplay, or a lack of age-gating.

Parents are also concerned about subtler risks in YouTube videos, such as product placement and influencer marketing that kids may not recognize.

Many young kids (5-10) watch YouTube videos with their parents, or require permission to watch certain videos.

Parents want to know the specifics of what their older kids (10-17) are watching on YouTube but find it hard to stay in the loop given limited time and less direct supervision.

Understanding the affordances of AI video analysis

The AI Prototyping team had developed a standalone video analysis tool that ingested a video URL and used an LLM with a detailed rubric to evaluate it. The output was a list of potentially concerning moments within the video.

The strength of this approach was its ability to pinpoint exact timestamps alongside detailed explanations of why each moment might be concerning.
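As a rough sketch of what this output could look like on the frontend, here is a hypothetical TypeScript shape for a flagged moment, plus small helpers for presenting it in a pre-screening UI. The interface and field names are illustrative assumptions, not the actual tool's schema.

```typescript
// Hypothetical shape of one flagged moment, based on the tool's described output.
interface ConcernMoment {
  timestampSeconds: number; // where in the video the moment occurs
  category: string;         // e.g. "violence", "profanity", "consumerism"
  explanation: string;      // the LLM's rationale for flagging this moment
}

// Format a timestamp in seconds as m:ss for display next to each flag.
function formatTimestamp(seconds: number): string {
  const m = Math.floor(seconds / 60);
  const s = Math.floor(seconds % 60);
  return `${m}:${s.toString().padStart(2, "0")}`;
}

// Group flagged moments by category so parents can scan concerns at a glance.
function groupByCategory(moments: ConcernMoment[]): Map<string, ConcernMoment[]> {
  const groups = new Map<string, ConcernMoment[]>();
  for (const moment of moments) {
    const existing = groups.get(moment.category) ?? [];
    existing.push(moment);
    groups.set(moment.category, existing);
  }
  return groups;
}
```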

I began exploring how these outputs could be translated into user-facing tools that better support families’ experiences around watching YouTube.

Ideation

While exploring potential applications for this technology, I considered a variety of users, from parents and kids to content creators.

Prototyping

I created interactive prototypes for many of these ideas using AI-assisted coding on Replit. While these speculative prototypes were not connected to the backend AI video analysis, they enabled rapid exploration of interactions, feature sets, and visual language, surfacing the affordances and limitations of each approach and clarifying what might work in practice and what would not.

User Testing + Iteration

Because parents and educators were the most accessible group for user testing, I prioritized testing and iterating on the parent-facing prototypes from above.

14

moderated user studies on Zoom

8* parents of kids ages 5-17 (5 moms, 3 dads)

6* educators (teachers and librarians)

* Some individuals identified as both parents AND educators

3

unmoderated user studies on UXTweak

3 parents of kids ages 9-16

Key insight: Parents and educators want to screen content before kids watch, and will make time to do so.

Parents were especially enthusiastic about a video pre-screening prototype that allowed them to preview potentially concerning moments in a YouTube video before their child watched it.

9 out of 11 surveyed parents said they would use this tool in their parenting routine. This was a clear signal that the pre-screener was worth narrowing in on and exploring further.

I experimented with multiple iterations of this prototype, user-testing each version to refine the visual language of the information presented and improve the overall user experience and functionality of the tool.

I also sought feedback from senior product designers at Common Sense on how to improve the UI/UX and interactions with the tool. Above are my notes from the design crit sessions.

Outcome

The outcome of my internship was a validated product recommendation for a YouTube pre-screener tool, supported by user research insights and a series of proof-of-concept prototypes — including both looks-like and works-like versions demonstrating the experience and technical feasibility.

Works-like prototype

I built a lightweight frontend in Next.js that connected to the live video analysis backend. The goal was not a polished UI but a working end-to-end proof of concept that I could hand off to the team as a starting point for the final product.
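A minimal sketch of the kind of client-side glue such a frontend needs: parse a pasted YouTube URL into a video ID, then request the analysis. The `/api/analyze` endpoint and its response are placeholders for illustration, not the actual Common Sense backend API.

```typescript
// Extract the video ID from common YouTube URL formats
// (youtube.com/watch?v=... and youtu.be/... short links).
function extractVideoId(url: string): string | null {
  const parsed = new URL(url);
  if (parsed.hostname === "youtu.be") {
    return parsed.pathname.slice(1) || null;
  }
  if (parsed.hostname.endsWith("youtube.com")) {
    return parsed.searchParams.get("v");
  }
  return null;
}

// Request the analysis for a video. The endpoint path and response shape
// are hypothetical; a real integration would match the backend's contract.
async function fetchAnalysis(videoUrl: string): Promise<unknown> {
  const id = extractVideoId(videoUrl);
  if (!id) throw new Error(`Not a recognizable YouTube URL: ${videoUrl}`);
  const res = await fetch(`/api/analyze?videoId=${encodeURIComponent(id)}`);
  if (!res.ok) throw new Error(`Analysis request failed: ${res.status}`);
  return res.json();
}
```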

LOOKS-like prototype

BETA PRODUCT LAUNCH

After the conclusion of my internship, the AI Prototyping Squad launched a beta video pre-screener product. Look familiar?