Major
Computer Science
Anticipated Graduation Year
2026
Access Type
Open Access
Abstract
The creation and training of deep learning models pose significant technical, financial, and environmental challenges. These challenges can be addressed through the reuse of pre-trained deep neural network models (PTMs), which enable deep learning methodologies at little comparative cost. While previous research has focused on PTM applications in software engineering, our work investigates their growing role in computational Natural Science. Our ongoing work examines and longitudinally measures PTM reuse in computational Natural Science. Our efforts quantify PTM reuse methodologies, identify trends in the reuse of PTM architectures across the Natural Sciences, and provide insights into how PTMs are advancing science.
Faculty Mentors & Instructors
George Thiruvathukal, PhD, Computer Science Department Chair; Nicholas Synovic, PhD Student, Computer Science
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.
Exploring Deep Neural Network Reuse in Computational Natural Science