Poster B48, Sunday, March 25, 8:00-10:00 am, Exhibit Hall C
Predicting Reading Comprehension from Eye Movement Features Using a Deep Neural Network
Xiaochuan Lindsey Ma1, Jinlong Hu2, Xiaowei Zhao3, Ping Li1; 1Pennsylvania State University, 2South China University of Technology, 3Emmanuel College
Eye movements have been used as an index of attention and comprehension during reading across a wide range of literature. Highly skilled readers have been found to show shorter fixations, more skips, and fewer regressions than less skilled readers (Ashby, Rayner, & Clifton, 2005). In this study, participants read five science texts in the MRI scanner while their eye movements were recorded. By combining readers’ real-time eye movements with neuroimaging, we could identify lexical access and semantic integration processes on the basis of moment-by-moment information processing. Readers’ comprehension ability was also assessed with the Gray Silent Reading Test (Wiederholt & Blalock, 2000), which allowed us to classify readers’ comprehension into two levels (High vs. Low). We hypothesized that these two levels could be predicted from readers’ eye-movement features (e.g., word-level fixation duration, skip rate, and regression rate) during reading. A Deep Neural Network (DNN) model (Bengio, 2009) with three hidden layers was trained on the eye-movement features to predict the two groups of participants. After applying L1 and L2 regularization as well as a dropout layer to reduce over-fitting, the DNN model reached 70% accuracy in classifying the high vs. low reading comprehension groups. A similar DNN with three hidden layers was applied to participants’ fMRI images. However, due to the high dimensionality of the fMRI data (34,200 voxels), the model failed to classify the images into the appropriate groups. Future studies should consider feature selection methods to improve the model’s performance on fMRI data.
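The classifier described above (three hidden layers, L1/L2 weight regularization, dropout, binary output) could be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the layer sizes, dropout rate, penalty strengths, and number of input features are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the poster does not report layer sizes.
n_features = 12               # e.g., word-level fixation duration, skip rate, regression rate, ...
hidden_sizes = [64, 32, 16]   # three hidden layers, as described in the poster

def init_layers(sizes, n_in):
    """He-initialized weights for an MLP ending in a single logistic unit."""
    dims = [n_in] + sizes + [1]  # final unit: high vs. low comprehension
    return [(rng.normal(0.0, np.sqrt(2.0 / d_in), (d_in, d_out)), np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(x, layers, train=False, p_drop=0.5):
    """ReLU hidden layers with inverted dropout at training time; sigmoid output."""
    h = x
    for W, b in layers[:-1]:
        h = np.maximum(h @ W + b, 0.0)               # ReLU activation
        if train:
            mask = rng.random(h.shape) > p_drop      # drop units with prob. p_drop
            h = h * mask / (1.0 - p_drop)            # rescale so expectations match
    W, b = layers[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))        # P(high comprehension)

def l1_l2_penalty(layers, lam1=1e-4, lam2=1e-4):
    """Elastic-net-style term added to the training loss to reduce over-fitting."""
    return sum(lam1 * np.abs(W).sum() + lam2 * (W ** 2).sum() for W, _ in layers)

layers = init_layers(hidden_sizes, n_features)
x = rng.normal(size=(5, n_features))   # 5 hypothetical readers' eye-movement features
probs = forward(x, layers)             # one probability per reader, shape (5, 1)
print(probs.shape)
```

In practice the weights would be fit by gradient descent on a cross-entropy loss plus the L1/L2 penalty, with dropout enabled only during training.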
Topic Area: LANGUAGE: Other