AI-Supported Cyber Safety Curriculum for Youth: Design, Development, and Evaluation (Faculty/Junior Researcher Collaboration Opportunity)

PI: Ellen Wenting Zou (Educational Psychology, Counseling, and Special Education)

Apply as Junior Researcher 

There is currently no tuition support from the applicant's college and department; future support will be negotiated.

The rapid proliferation of artificial intelligence (AI) technologies has fundamentally transformed the digital landscape, creating new opportunities for learning alongside increasingly complex cyber risks that disproportionately affect adolescents. These AI-shaped risks exploit adolescents' developmental vulnerabilities, high online engagement, and social pressures, and include deepfake abuse, AI-powered phishing, and algorithmic amplification of harmful content such as disordered eating, self-harm, and cyberbullying. AI systems also collect and infer sensitive personal data, often without adolescents' full understanding or consent. Youth are especially exposed because of limited digital literacy education, reduced parental oversight, and reliance on mobile-first, algorithm-driven platforms. These risks are subtle, scalable, and psychologically damaging, yet they remain poorly addressed in existing curricula. Despite the pressing nature of these challenges, few schools, particularly in low-income or under-resourced communities, have implemented comprehensive, scalable cyber safety education. This project addresses that critical gap by designing and evaluating an AI-supported cyber safety curriculum tailored to the unique developmental and social needs of youth.

Most existing cyber safety programs rely on static, in-person instruction, limiting their reach and adaptability. To address this urgent need, the project leverages recent advances in AI to create an interactive, personalized, and scalable curriculum that not only informs but actively engages students. The curriculum will focus on four key areas of emerging AI-driven threats: (1) Deepfakes & Synthetic Media, helping youth identify and respond to manipulated content that can cause reputational harm or be used for blackmail; (2) AI-Enhanced Phishing & Scams, training students to recognize increasingly sophisticated social engineering techniques, including voice cloning and impersonation; (3) Algorithmic Amplification, building awareness of how recommender systems can normalize dangerous behaviors such as disordered eating, self-harm, cyberbullying, and social comparison spirals; and (4) AI Surveillance & Privacy, educating students about the risks of data profiling, facial recognition, and location tracking.

Beyond content, AI will play a dual role as a pedagogical tool. First, an AI "buddy" will serve as an always-available conversational learning companion, providing on-demand feedback, encouragement, and explanations in developmentally appropriate language. Second, we will use generative AI technologies to create immersive simulations, such as branching narratives, comics, and lightweight mini-games, that mimic real-world cyber threats and allow students to practice decision-making in safe, structured environments. These tools will create a rich, multimodal learning experience that resonates with digital-native learners while building essential protective skills.

Project objectives include the design and prototyping of four interactive curriculum modules, the development of AI-powered learning scenarios, and initial user testing with middle and high school students to refine both content and interface. We will also develop assessment metrics for measuring learning gains, digital resilience, and user engagement. Our goal is to produce a publishable manuscript detailing the curriculum design and AI integration process, generate preliminary data for external grant applications, and create technical documentation for scaling and replication.

The broader impact of this project lies in its potential to democratize access to high-quality cyber safety education. By developing a mobile-accessible, AI-enhanced curriculum that can be implemented in diverse educational settings—including homes, schools, and afterschool programs—we aim to reach youth who are most at risk yet least served. With further validation, we plan to partner with PBS Learning Media, whose platform reaches more than one million monthly users, to significantly expand the reach of this work.

Finally, this project contributes to the field of AI in education by demonstrating how emerging technologies can be ethically leveraged to support digital citizenship, well-being, and critical thinking in youth. It aligns closely with the ICDS mission to foster interdisciplinary, socially impactful research, offering a scalable solution to one of today’s most urgent educational challenges.

I request graduate student support for two semesters at a 25% Research Assistantship to contribute to both the curriculum design and AI integration components of the project. The ideal candidate will have a strong background in education, along with demonstrated experience developing AI-based educational applications. This graduate student will play a key role in co-developing interactive learning modules, assisting with user testing and iterative design, and contributing to the implementation and evaluation of AI-driven instructional tools. The student will also present preliminary outcomes at the ICDS Fall 2025/26 Symposium and will engage in ICDS's fellowship and assistantship programs to further enhance interdisciplinary collaboration and professional development.