Evaluation of Statistical and Machine Learning Approaches to Assessing Creativity in the Alternative Uses Task

Corin Cameron

Abstract

For the last 70 years, divergent thinking tasks have been important for measuring the creative process. Human raters typically show high levels of interrater reliability; however, hand-scoring large sets of responses is onerous. We compare alternatives to traditional human rating. Specifically, we recruited human raters from Amazon's Mechanical Turk and the Loyola Psychology Subject Pool and applied two different sampling methods (i.e., Top 2 and Snap Shot). In addition, we will use SemDist, a computer algorithm that operationalizes creativity as semantic distance. Measures from these three methods will be compared using data from approximately 400 study participants.


Evaluation of Statistical and Machine Learning Approaches to Assessing Creativity in the Alternative Uses Task

For the last 70 years, divergent thinking tasks have been important for measuring the creative process. Human raters typically show high levels of interrater reliability; however, hand-scoring large sets of responses is onerous. We compare alternatives to traditional human rating. Specifically, we recruited human raters from Amazon's Mechanical Turk and the Loyola Psychology Subject Pool and applied two different sampling methods (i.e., Top 2 and Snap Shot). In addition, we will use SemDist, a computer algorithm that operationalizes creativity as semantic distance. Measures from these three methods will be compared using data from approximately 400 study participants.
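
To make the semantic-distance idea concrete, the following Python snippet is a minimal sketch of the kind of computation such an algorithm performs, assuming a cosine-distance scoring over word embeddings. The toy EMBEDDINGS table and the semantic_distance function are illustrative stand-ins, not SemDist's actual implementation.

import numpy as np

# Toy embedding table standing in for a trained word-vector model
# (e.g., GloVe). These three-dimensional vectors are illustrative only;
# real models use hundreds of dimensions learned from large corpora.
EMBEDDINGS = {
    "brick":  np.array([0.9, 0.1, 0.0]),
    "house":  np.array([0.8, 0.2, 0.1]),
    "weapon": np.array([0.1, 0.9, 0.3]),
}

def semantic_distance(prompt, response, embeddings=EMBEDDINGS):
    """Score a response as 1 minus its cosine similarity to the prompt.

    Higher values indicate greater semantic distance, which on this
    account serves as a proxy for greater creativity.
    """
    u, v = embeddings[prompt], embeddings[response]
    cosine = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return 1.0 - cosine

# A conventional use stays close to the prompt and scores low...
print(round(semantic_distance("brick", "house"), 3))   # ~0.016
# ...while a remote use scores high.
print(round(semantic_distance("brick", "weapon"), 3))  # ~0.792

In this sketch, a conventional response to the prompt "brick" (e.g., "house") yields a small distance, while a remote use (e.g., "weapon") yields a larger one, paralleling the intuition that more creative uses are more semantically distant from the prompt object.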