Modeling Across the Curriculum

PRINCIPAL INVESTIGATOR:
Paul Horwitz

CO-INVESTIGATORS:
Janice Gobert, Robert Tinker, Uri Wilensky

CATEGORIES:
Science

PROJECT OVERVIEW:
Background: The Modeling Across the Curriculum Project is exploring the potential of a powerful new approach to educational research that combines the collection of extremely fine-grained student performance data with virtually unlimited scalability. We offer students hypermodels (manipulable computer models linked to text and embedded performance assessment materials) and log their actions as they solve problems and rise to challenges posed by the computer-based activities. The resulting log files provide a wealth of data bearing on students' specific content knowledge, as well as on inquiry and model-based reasoning skills applicable across science domains (Genetics, Gas Laws, and Newtonian Mechanics).
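To make the granularity of these log files concrete, here is a minimal sketch in Python of what a single logged event might look like. The schema and field names are illustrative assumptions, not the project's actual log format.

    # Hypothetical sketch of one logged student event; the MAC project's
    # actual log format may differ.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class LogEvent:
        student_id: str      # anonymized student identifier
        activity: str        # e.g., "BioLogica: Monohybrid Cross"
        timestamp: datetime  # when the action occurred
        action: str          # e.g., "answer_submitted", "hint_requested"
        detail: dict = field(default_factory=dict)  # action-specific payload

    # A student's sequence of such events is the raw material for the
    # performance profiles and inquiry measures described below.
    event = LogEvent("S-0421", "BioLogica: Monohybrid Cross",
                     datetime(2005, 11, 3, 10, 14), "model_manipulated",
                     {"parameter": "offspring allele", "new_value": "tt"})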

Purpose: We are investigating the extent to which high school students who acquire model-based reasoning skills in the context of one science domain are subsequently able to transfer those skills when learning a different scientific discipline. Our software is freely available on the Web, and we have demonstrated the scalability of our approach by collecting valid and useful data from schools unassociated with the project that discovered our website and registered their students with us. The number of such "contributing schools" is limited only by their motivation and interest. The data we collect from them compares favorably with that from the dozen or so schools that were originally recruited for the project.

Intervention: We developed approximately ten interactive learning activities in each of three topic areas pertaining to three different scientific disciplines: force and motion (physical science), genetics (biology), and gas laws (chemistry). We introduced these materials originally to 3 Partner Schools, added an additional cohort of 10 Member Schools, and were eventually joined by over 400 Contributing Schools from around the world who simply downloaded our software and registered with us. Each learning activity logs all relevant student actions: answers to questions, use of representations, and manipulations of the computer models. We are analyzing the resulting log files to evaluate students' content knowledge and model-based inquiry skills (both domain-general and domain-specific), as well as to shed light on how the implementation was conducted vis-à-vis computer usage (every day, intermittent, etc.). Additionally, we collect data from teachers via a classroom communiqué, which tells us whether the activities were used as an introduction to the content, as a review, or intermingled with the teacher's standard pedagogy for the content.
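As one illustration of how usage patterns might be inferred from the logs, the sketch below classifies a class's computer usage from the calendar dates on which logged activity occurred. The threshold and labels are assumptions chosen for illustration, not the project's actual coding scheme.

    # Hypothetical classifier for computer-usage patterns (threshold and
    # labels are illustrative, not the project's actual coding scheme).
    from datetime import date

    def usage_pattern(activity_dates):
        days = sorted(set(activity_dates))
        if len(days) < 2:
            return "single session"
        gaps = [(b - a).days for a, b in zip(days, days[1:])]
        mean_gap = sum(gaps) / len(gaps)
        return "every day" if mean_gap <= 1.5 else "intermittent"

    print(usage_pattern([date(2005, 11, 1), date(2005, 11, 2),
                         date(2005, 11, 3)]))                    # every day
    print(usage_pattern([date(2005, 11, 1), date(2005, 11, 8),
                         date(2005, 11, 22)]))                   # intermittent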

Setting: Most of the schools (and all of those originally recruited for the project) are in the United States, but we have also registered schools in 35 other countries. The original sites include a broad mix of urban, suburban, and rural schools from eleven different states, representing a wide range of socioeconomic demographics. All have been with the project for a minimum of three years, starting in 2003. The project began October 1, 2001 and is expected to end September 30, 2006.

Research Design: In the 2005-2006 school year we collected useful data from a total of 50 schools, comprising 407 individual classes, 145 teachers, and 8,270 students. A total of 14 U.S. schools were recruited to the project on the basis of (a) expressed interest in the research program and (b) contribution to an appropriate spectrum of socioeconomic, size, geographic, and other factors. The rest of the schools were "self-recruited": they downloaded the software, sent periodic teacher communiqués, and agreed to share their logged data with us.

Because we collect all our data over the Internet, we cannot observe control groups of students who do not use our materials. Instead, we compare pre- and post-test scores of students who, as evidenced by their log files, use our materials differently. One obvious measure is how many learning activities a student completes, and over what period of time. We acknowledge the possibility that students who complete more activities may also be more skilled at learning science than those who complete fewer; however, by using pre-test content measures and other survey measures (epistemology of models and attitudes towards science learning) as covariates in the analyses, we are able to account for the effects of skill level. The availability of fine-grained data enables us to make extremely detailed distinctions. For instance, by carefully "mining" the logs, we can ascertain how many attempts students make before getting an answer, how many hints they ask for, which computer tools they use, and how systematic or haphazard they are in exploring the problem space. By this means we build up a "performance profile" of each student, and use it to examine the relationship between process measures and pre- to post-test gains. We are also examining the relationship between process variables and other measures, including students' epistemologies of models.
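As a concrete sketch of what such a performance profile might look like, the fragment below tallies a few plausible process measures from a student's event log. The feature names are assumptions for illustration; the project's actual measures are richer. In the analyses themselves, features like these would be related to pre- to post-test gains, with pre-test scores and the survey measures entered as covariates.

    # Sketch of building a per-student "performance profile" from logged
    # actions (feature names are illustrative assumptions).
    from collections import Counter

    def performance_profile(actions):
        """actions: list of action-type strings from one student's log."""
        counts = Counter(actions)
        return {
            "attempts":   counts["answer_submitted"],   # tries at questions
            "hints":      counts["hint_requested"],     # scaffolding requested
            "tool_uses":  counts["tool_used"],          # computer tools invoked
            "model_runs": counts["model_manipulated"],  # problem-space exploration
        }

    profile = performance_profile(
        ["answer_submitted", "hint_requested", "model_manipulated",
         "answer_submitted"])
    print(profile)  # {'attempts': 2, 'hints': 1, 'tool_uses': 0, 'model_runs': 1}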

We measure learning gains in each domain by administering identical pre- and post-tests, designed by us especially to assess model-based knowledge and reasoning in each of our three domains. We use the log files collected with each use of our learning activities to analyze student performance on selected tasks, in order to assess their models of domain-specific phenomena at various points along their learning trajectories. We use the log-file data to assess students' domain-general inquiry skills by aggregating within and across domains. We also use the logs to document classroom implementation variables, such as the number of activities used, the pattern of usage, and the elapsed time between pre- and post-tests. Additional information bearing on classroom implementations includes teacher and school demographic data, along with surveys administered to teachers before, during, and after they use MAC activities in their classrooms. Finally, we measure students' epistemology of models using the SUMS (Student Understanding of Models Survey; Treagust et al., 2002), and their attitudes toward science using domain-specific versions of the VASS (Views About Science Survey; Halloun, 2001) for biology, chemistry, and physics.
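The core pre/post comparison reported in the findings below can be sketched as a paired t-test on matched student scores. The scores here are invented for illustration, and the actual analyses also involve the covariates and implementation variables described above.

    # Minimal sketch of a paired pre/post comparison (scores invented).
    from scipy import stats

    pre  = [12, 15,  9, 14, 11, 16, 10, 13]
    post = [16, 19, 12, 17, 15, 20, 13, 18]

    gains = [b - a for a, b in zip(pre, post)]
    t, p = stats.ttest_rel(post, pre)  # paired t-test on matched scores
    print(f"mean gain = {sum(gains)/len(gains):.2f}, t = {t:.2f}, p = {p:.4f}")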

Findings:
BioLogica Pre- and Post-Test Performance in Year 4
Students' learning gains, as evidenced by pre- to post-test score comparisons, varied by class level. On average across Member Schools, Honors students earned the highest pre-test score (mean = 18.59), while the College Prep group earned the greatest gain scores (mean = 8.27). Regular students, the largest constituency (n = 402), earned an average pre-test score of 15.33 and an average gain of 3.58. Paired t-tests revealed a significant difference between the biology pre- and post-test scores for students at each of the four class levels.

Dynamica Pre- and Post-Test Performance in Year 4
Students' learning gains, as evidenced by pre- to post-test score comparisons, varied by class level. On average across Member Schools, the Advanced Placement students earned the highest pre-test score (mean = 15.56) and the greatest gain score (mean = 6.24). Regular students, the largest constituency (n = 403), earned an average pre-test score of 13.89 and an average gain of 3.45. Paired t-tests revealed a significant difference between the Dynamica pre- and post-test scores for students at each of the four class levels.

Connected Chemistry Pre- and Post-Test Performance in Year 4
Students' learning gains, as evidenced by pre- to post-test score comparisons, improved across two versions of the Connected Chemistry activities implemented in Year 3. In the earlier version, paired t-test comparisons (n = 44; 27% "Honors", 73% "Regular" chemistry classes) revealed significant (p < 0.01) gains overall, but most particularly on questions related to microscopic-level phenomena. Paired t-test comparisons of data from 94 students in a later implementation revealed significant (p < 0.01) gains not only on the overall and micro-level scores, but also on students' understanding of macro-level phenomena and of the causal connections between the micro and macro levels. Between the two implementations, students' gains improved in all aspects of the curriculum: the macroscopic, the microscopic, and the combined emergent perspective.

Chemica Pre- and Post-Test Performance in Year 4
Students' learning gains, as evidenced by pre- to post-test score comparisons, did not vary considerably by class level. Gains ranged between 2.75 and 3.33 points for the College Prep, Honors, and Regular students. Mean pre-test scores were also fairly consistent across class levels, ranging from 11.42 (Regular) to 12.36 (Honors). In terms of mean post-test scores, Honors students scored the highest (15.69), followed by College Prep (15.21) and Regular (14.17). In terms of raw points, few differences were found among these groups. Paired t-tests revealed a significant difference between the Chemica pre- and post-test scores for students at each of the three class levels.

Analyses of Log Files and Post-Test Gains
We use students' log files on multiple inquiry "hot spots" across three domains to address how students' inquiry skills develop both within and across domains. One measure of inquiry skill is how systematic students are in manipulating models to achieve a goal. Systematicity has been found to be a reliable measure of students' strategic learning and knowledge acquisition strategies (Gobert, 1994, 1999; Thorndyke & Stasz, 1980) and is a good measure with which to compare learners, since it bears on their skill at estimating solutions (Paige & Simon, 1966). We are using students' data on domain-specific inquiry tasks ("hot spots") and domain-general inquiry spots ("DoGI spots") in order to evaluate their relationship to conceptual learning measures, e.g., pre-post content tests. We are also evaluating the relationship between domain-specific and domain-general inquiry, as well as their relationship to students' epistemologies of models, since these have been found to influence science learning (Gobert & Discenna, 1997; Songer & Linn, 1991). Using the inquiry data in this way requires that we aggregate the data across the activities in each of the domains. These analyses are currently underway.
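To illustrate, one plausible way to operationalize systematicity is the fraction of consecutive model manipulations that change exactly one variable at a time (the classic control-of-variables strategy). The sketch below is an assumption for illustration, not the project's actual metric.

    # Hypothetical systematicity score: fraction of consecutive trials that
    # change exactly one variable (not necessarily the project's metric).
    def systematicity(trials):
        """trials: successive variable settings, e.g. [{'T': 300, 'P': 1.0}, ...]."""
        if len(trials) < 2:
            return 0.0
        controlled = sum(
            1 for a, b in zip(trials, trials[1:])
            if sum(a[k] != b[k] for k in a) == 1  # exactly one variable changed
        )
        return controlled / (len(trials) - 1)

    # A student who varies temperature while holding pressure fixed scores 1.0.
    print(systematicity([{'T': 300, 'P': 1.0},
                         {'T': 320, 'P': 1.0},
                         {'T': 340, 'P': 1.0}]))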

PROJECT PUBLICATIONS:
Publications and Presentations
Buckley, B. C., Gerlits, B., Goldberg-Mansfield, A., & Swiniarski, M. J. (2004). The impact of BioLogica usage in classrooms on student learning outcomes. Paper presented at the National Association for Research in Science Teaching, Vancouver, BC.

Buckley, B., Gobert, J., & Horwitz, P. (2006). Using Log Files to Track Students' Model-Based Inquiry. To appear in the Proceedings of the Seventh International Conference of the Learning Sciences (ICLS).

Buckley, B. C., Gobert, J., Horwitz, P., & Dede, C. (2005). Invisible Dragons: An assessment game. Paper presented at the American Educational Research Association, Montreal, Canada.

Buckley, B. C., Gobert, J., Kindfield, A. C. H., Horwitz, P., Tinker, B., Gerlits, B., et al. (2004). Model-based Teaching and Learning with Hypermodels: What do they learn? How do they learn? How do we know? Journal of Science Education and Technology, 13(1), 23-41.

Buckley, B. C., Gobert, J., Mansfield, A., & Horwitz, P. (2006). Facilitating and assessing genetics learning with BioLogica. Paper presented at the National Association for Research in Science Teaching, San Francisco.

Buckley, B. C., Gobert, J., Mansfield, A., Horwitz, P., & Dede, C. (2005). Computer-enabled Pedagogy, Research & Assessment in Genetics using BioLogica. Paper presented at the CAL'05: Virtual Learning?, Bristol, UK.

Buckley, B. C., Gobert, J. D., Gerlits, B., Goldberg, A., & Swiniarski, M. J. (2004). Assessing Model-Based Learning in BioLogica. Paper presented at the American Educational Research Association, San Diego, CA.

Gobert, J. (2005). Leveraging technology and cognitive theory on visualization to promote students' science learning and literacy. In Visualization in Science Education, J. Gilbert (Ed.), pp. 73-90. Springer-Verlag Publishers, Dordrecht, The Netherlands. ISBN-10: 1-4020-3612-4.

Gobert, J., Buckley, B., & Clarke, J. E. (2004). Scaffolding model-based reasoning: Representations, cognitive affordances, and learning outcomes. Paper presented at the American Educational Research Association, San Diego, CA.

Gobert, J., Buckley, B., Dede, C., Horwitz, P., Wilensky, U., & Levy, S. (2004). Modeling Across the Curriculum (MAC): Technology, Pedagogy, Assessment, & Research. Paper presented at the American Educational Research Association, San Diego, CA.

Gobert, J., Buckley, B. C., & Horwitz, P. (2006). Technology-enabled assessment of model-based learning and inquiry skills among high school biology students. Paper presented at the American Educational Research Association, San Francisco.

Gobert, J., Horwitz, P., Buckley, B., Mansfield, A., Burke, E., & Markman, D. (2005). Logging Students' Model-Based Learning and Inquiry Skills in Science. In the Proceedings of the American Association of Artificial Intelligence Technical Report WS-05-02, p. 67. AAAI Press, Menlo Park, CA.

Gobert, J., Horwitz, P., Tinker, B., Buckley, B., Wilensky, U., Levy, S., and Dede, C. (2003). Modeling Across the Curriculum: Scaling up Modeling Using Technology. In the Proceedings of the Twenty-fifth Annual Meeting of the Cognitive Science Society, July 31-August 2, Boston, MA.

Horwitz, P. (2003, Fall). The Concord Consortium Portal. @concord, 7.

Horwitz, P. (2003, Spring). Modeling Across the Curriculum. @concord, 7.

Horwitz, P. (2004). Log file analysis meeting. Concord, MA.

Horwitz, P. (2004, Spring). MAC reaches three million records... and is still growing. @concord, 8.

Other References Cited
Gobert, J. (1994). Expertise in the comprehension of architectural plans: Contribution of representation and domain knowledge. Unpublished doctoral dissertation. University of Toronto, Toronto, Ontario.

Gobert, J. (1999). Expertise in the comprehension of architectural plans: Contribution of representation and domain knowledge. In Visual And Spatial Reasoning In Design '99, John S. Gero and B. Tversky (Eds.), Key Centre of Design Computing and Cognition, University of Sydney, AU.

Halloun, I. A. (2001). Student Views About Science: A Comparative Survey, Phoenix series. Beirut, Lebanon: Educational Research Center, Lebanese University.

Paige, J.M., and Simon, H. (1966). Cognitive processes in solving algebra word problems. In B. Kleinmuntz (Ed.), Problem solving. New York: Wiley.

Songer, N.B., & Linn, M.C. (1991). How do students' views of science influence knowledge integration? Journal of Research in Science Teaching, 28(9), 761-784.

Thorndyke, P., and Stasz, C. (1980). Individual differences in procedures for knowledge acquisition from maps. Cognitive Psychology, 12, 137-175.

Treagust, D., Chittleborough, G., & Mamiala, T. (2002). Students' understanding of the role of scientific models in learning science. International Journal of Science Education, 24(4), 357-368.

ON THE WEB:
You can learn more about this project by visiting the Modeling Across the Curriculum website at http://mac.concord.org/.