I worked on the Vectors in the Time of Scalars project this summer. Preliminary data suggested that there was an interference effect between the type of instruction students were receiving and how they performed on certain questions. We were looking at two main types of instruction: vector instruction and scalar instruction. We hypothesized that students receiving vector instruction would do better on vector-type questions and worse on scalar-type questions; likewise, students receiving scalar instruction would do better on scalar-type questions and worse on vector-type questions.
The data were collected at Rochester Institute of Technology (RIT) over a period of two years. Questions were divided into tasks by topic, such that each task contained approximately 5-10 questions. Students completed a different task every week to prevent re-test effects. By testing the students each week, we can see how students respond to conceptual questions on a weekly basis, rather than relying on a simple pre-/post-test.
For this project, the following tasks were analyzed:
- Field and Potential Energy (Electric)
- Gravitational Potential Energy
After analyzing the data, I found no interference effect between the type of instruction and how students performed on questions. Each question fell into one of three categories: static noise, chance or ceiling effects, or typical learning curve. Static noise refers to a graph that stays above the chance line but shows nothing but noise between the points. Chance or ceiling effects refer to graphs that hovered around the chance line (the minimum number of right answers expected from guessing) or the ceiling line (the maximum number of right answers). A typical learning curve refers to graphs that clearly show students initially did not know the material (few right answers) and then learned it (significantly more right answers). While these three categories were present in the data, the data did not support the interference hypothesis.
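The three-way categorization above can be sketched in code. This is a minimal illustrative sketch, not the project's actual analysis code; the chance level (0.25), ceiling level (0.9), and the minimum jump that counts as learning (0.3) are assumed values for illustration only, not thresholds from the study.

```python
def categorize(weekly_scores, chance=0.25, ceiling=0.9, jump=0.3):
    """Classify a series of weekly fraction-correct scores into one of the
    three categories described above. All thresholds are illustrative
    assumptions, not values from the study."""
    lo, hi = min(weekly_scores), max(weekly_scores)
    # Chance or ceiling effect: scores never leave the floor or ceiling band.
    if hi <= chance + 0.05 or lo >= ceiling:
        return "chance/ceiling"
    # Typical learning curve: scores end substantially higher than they began.
    if weekly_scores[-1] - weekly_scores[0] >= jump:
        return "learning curve"
    # Otherwise: above chance, but no clear trend between points.
    return "static noise"

print(categorize([0.20, 0.25, 0.30, 0.60, 0.75]))   # learning curve
print(categorize([0.50, 0.55, 0.45, 0.50, 0.52]))   # static noise
print(categorize([0.22, 0.25, 0.20, 0.24, 0.23]))   # chance/ceiling
```

In practice the real classification was done by inspecting the graphs, which can weigh the shape of the whole curve rather than just the endpoints used in this sketch.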
Given that the data did not produce a meaningful result, I also developed new questions for each of the tasks listed above. About 20 questions were written for each task, to make the tasks longer and more comprehensive. The new questions have been validated by 11 students and faculty members at KSU, and they will be incorporated into the testing schedule at RIT in the Fall of 2012. Extensive interviews will be conducted in October to judge students' responses to the questions.
This program is funded by the National Science Foundation through grant number PHY-1157044. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.