Major

Neuroscience

Anticipated Graduation Year

2024

Access Type

Open Access

Abstract

Artificial Intelligence is becoming a larger part of daily life and an increasingly important tool for neuroscientists. As more trust is placed in Artificial Intelligence, it must accurately model the primate brain. Deep Convolutional Neural Networks (DCNNs) are currently the state of the art in image classification, yet they do not represent how primate brains perceive images: rather than taking a holistic, shape-oriented approach, DCNNs weight an image's texture more heavily than its shape, which leads to classification errors. What remains unknown is whether this lack of shape sensitivity is inherent to DCNN architecture or whether it depends on the data used during training.

Faculty Mentors & Instructors

Nicholas Baker, Assistant Professor, Department of Psychology; George K. Thiruvathukal, Department Chair, Department of Computer Science

Supported By

Silvio Rizzi, Argonne Leadership Computing Facility, Argonne National Laboratory

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License


How Do We Make Artificial Intelligence More Human-like? Improving Object-Shape Sensitivity in DCNNs through Training with Non-Diagnostic Texture