UROP Project
Model Extraction Attack and Defense
Deep learning, artificial intelligence, model extraction attack, large language models (LLMs), graph learning
Research Mentor: Yushun Dong
Department, College, Affiliation: Computer Science Department, Arts and Sciences
Contact Email: yd24f@fsu.edu
Research Assistant Supervisor (if different from mentor):
Research Assistant Supervisor Email:
Faculty Collaborators:
Faculty Collaborators Email:
Looking for Research Assistants: Yes
Number of Research Assistants: 6
Relevant Majors: Open to all majors
Project Location: On FSU Main Campus
Research Assistant Transportation Required: No, the project is remote
Remote or In-person: Partially Remote
Approximate Weekly Hours: 10, flexible schedule (a combination of business and non-business hours, to be determined between the student and the research mentor)
Roundtable Times and Zoom Link:
- Day: Tuesday, September 2
Start Time: 1:00
End Time: 2:00
Zoom Link: https://fsu.zoom.us/j/7153751215
Project Description
Machine learning models are becoming a key part of many everyday applications, from search engines and virtual assistants to healthcare and banking. However, as these models become more powerful and widely used, they also become more attractive targets for attackers. One serious threat is the model extraction attack, in which an outsider tries to "steal" a trained model by sending it queries and analyzing the outputs. The stolen model can then be misused, duplicated, or reverse-engineered, leading to intellectual property theft, loss of competitive advantage, and serious privacy risks.
This research project focuses on understanding how these attacks work and developing effective defenses. We want to study how attackers interact with machine learning services (often offered as APIs) and figure out what information they can extract. Then, we will design and test various protective strategies to make models more resistant, without significantly lowering their accuracy or slowing them down for legitimate users.
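To give a concrete feel for the attack described above, here is a minimal, self-contained sketch in Python using scikit-learn. The "victim" model, the query set, and all names are illustrative assumptions, not part of the project itself: an attacker who can only observe the victim's predicted labels trains a surrogate model that imitates it.

```python
# Minimal sketch of a model extraction attack (illustrative only).
# The victim is trained on private data; the attacker never sees that data,
# only the victim's answers to the attacker's own queries.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic data standing in for the victim's private training set.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_private, X_query, y_private, _ = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Victim model, hidden behind a prediction "API".
victim = RandomForestClassifier(random_state=0).fit(X_private, y_private)

# Attacker: send queries, collect only the returned labels,
# and fit a surrogate model on those (query, label) pairs.
stolen_labels = victim.predict(X_query)
surrogate = LogisticRegression(max_iter=1000).fit(X_query, stolen_labels)

# Fidelity: how often the surrogate agrees with the victim.
agreement = accuracy_score(victim.predict(X_query), surrogate.predict(X_query))
print(f"surrogate agrees with victim on {agreement:.0%} of queries")
```

The point of the sketch is that no access to the victim's training data or internals is needed; the outputs alone leak enough information to build a close imitation.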
This project is a great opportunity for students interested in cybersecurity, artificial intelligence, or the ethical and legal implications of new technologies. No matter your background, if you’re curious about how smart systems can be tricked—and how to make them safer—this research will give you hands-on experience at the frontier of AI security.
Research Tasks: Students joining this project will help review existing work in model extraction and defense to build a strong foundation of knowledge. This will include reading papers and summarizing techniques and trends in a collaborative way, with support from the lead researcher. Together, we’ll identify gaps in the literature where new ideas or experiments can contribute to the field.
You will also assist in developing experiments using open-source machine learning models. This may involve training simple models, simulating extraction attacks, and evaluating how well different defenses perform. Depending on interest and skills, you may help write code for data collection, modify algorithms, or visualize attack/defense results in easy-to-understand formats.
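As one hypothetical example of the kind of experiment described above, the sketch below compares surrogate fidelity when a victim answers queries cleanly versus with a simple output-perturbation defense (adding noise to its class probabilities before returning a label). The setup, the noise levels, and all names are illustrative assumptions, not a prescribed project method.

```python
# Minimal sketch of evaluating a toy defense against model extraction.
# Defense idea (one of many): perturb the victim's output probabilities
# so a surrogate trained on its answers learns a less faithful copy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X[:1000], y[:1000])
X_query = X[1000:]  # the attacker's query set

def answer(queries, noise=0.0):
    """Victim 'API': return labels from (optionally noised) probabilities."""
    proba = victim.predict_proba(queries)
    proba = proba + rng.normal(scale=noise, size=proba.shape)
    return proba.argmax(axis=1)

fidelity = {}
for noise in (0.0, 0.3):
    surrogate = LogisticRegression(max_iter=1000).fit(
        X_query, answer(X_query, noise)
    )
    # Fidelity: agreement between surrogate and the *undefended* victim.
    fidelity[noise] = (
        surrogate.predict(X_query) == victim.predict(X_query)
    ).mean()
    print(f"noise={noise}: surrogate fidelity {fidelity[noise]:.0%}")
```

Experiments in the project would follow this same pattern at a larger scale: vary the defense, measure how much it degrades the attacker's surrogate, and weigh that against the cost to legitimate users.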
Finally, we will document our findings and prepare materials for future presentations and publications. Students will be encouraged to contribute ideas and, if desired, co-author posters or papers. This is an interactive project where your contributions will directly shape how we understand and improve model security.
Skills that research assistant(s) may need: Required: Basic programming experience, ideally in Python. Familiarity with tools such as Jupyter Notebook, Google Colab, or similar platforms is important since most of our experiments will be coded and tested there.
Recommended: Interest in or prior exposure to machine learning concepts (e.g., through a course or self-study). Experience with libraries like scikit-learn, PyTorch, or TensorFlow will be helpful but not mandatory—training will be provided.
Recommended: Critical thinking and clear communication skills. Because we are working in a fast-evolving and interdisciplinary area, students who can ask thoughtful questions and explain technical concepts in plain language will thrive. All majors are welcome, and diverse perspectives are encouraged.