I would have liked to see the student carry the linearity hypothesis to its logical conclusion: at some point, the spring constant will be zero…and then negative. Stick a theorized (n=∞,k=0) data point on the graph and reconsider the fit. Then graph something like 1/k vs n or k vs 1/n. Is that asking too much, especially with just 4 data points? Did students present and defend their work? If so, did this come out in the presentation?
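That 1/k-versus-n replot can be sketched in a few lines of Python. The (n, k) values below are invented for illustration; for n identical springs in series, ideal theory gives k = k_single/n, so 1/k vs. n should come out linear with a near-zero intercept:

```python
# Hypothetical (n, k) data: k is the measured spring constant (N/m) of a
# chain of n identical springs. Values are made up, roughly k = 20/n.
data = [(1, 20.0), (2, 10.1), (3, 6.6), (4, 5.05)]

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return m, mean_y - m * mean_x

ns = [n for n, _ in data]
inv_ks = [1 / k for _, k in data]
slope, intercept = linear_fit(ns, inv_ks)

# A near-zero intercept with a steady slope supports k proportional to 1/n;
# a linear k-vs-n fit cannot do this and eventually predicts k < 0.
print(f"1/k vs n: slope = {slope:.4f}, intercept = {intercept:.4f}")
```

Replotting against 1/k turns the proposed inverse relationship into a straight-line test the student can argue from, rather than a curve-fit score.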

Hi Brian,

No, unfortunately, students did not present and defend their work. I’d love to do that next year.

I was particularly disappointed because we had just spent the whole 4th quarter specifically evaluating data (in a data table) to determine whether the relationship was linear, quadratic, inverse, or inverse-square by asking “If X changes by a factor of N, how does Y change?” They had to support their choice of function with proportional evidence, not the RMSE from a computerized fit.
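That factor-of-N test is easy to mechanize as a sanity check. Here is a minimal sketch with two invented data tables; the function just compares y at two chosen x values:

```python
# "If X changes by a factor of N, how does Y change?" as a ratio check.
# Both tables are made-up example data, not real measurements.
def scaling_factor(table, x1, x2):
    """Return y(x2)/y(x1): the factor by which y changes as x goes x1 -> x2."""
    d = dict(table)
    return d[x2] / d[x1]

quadratic_ish = [(1, 3.0), (2, 12.1), (4, 47.8)]   # roughly y = 3x^2
inverse_ish   = [(1, 6.0), (2, 3.05), (4, 1.49)]   # roughly y = 6/x

# Doubling x (2 -> 4) roughly quadruples y: quadratic.
print(scaling_factor(quadratic_ish, 2, 4))
# Doubling x (2 -> 4) roughly halves y: inverse.
print(scaling_factor(inverse_ish, 2, 4))
```

The point is that the evidence is a proportion read off the table, not a fit statistic.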

So, without that specific prompting, this student seems to have switched back to default mode. The “default mode” is a sticking point I’ve noticed with many students for many different things in physics: analyzing motion graphs, drawing free-body diagrams, taking measurements, making graphs, etc. It’s not a lack of intelligence, but the failure of new skills to become part of the active toolset. Any suggestions?

Do you think that default mode is the competing default conception responsible for the low point in U-shaped development? I think the hope is that whiteboarding will take care of this, though I’m not experienced enough to catalyze that well.

I have two thoughts.

1. A “defend the mathematical model” protocol for whiteboarding labs. To justify the fit, students would have to argue that (a) the model makes reasonable predictions for large values of the IV, (b) it makes reasonable predictions for small values of the IV, (c) the scaling (as you mentioned above) makes sense, and (d) the error bars wouldn’t accommodate a simpler or alternate model. (I had trouble with (d) this year when students would argue for a linear model when a constant one was more reasonable.) Think of these as conceptual tools for the R of CER.

If that gets too pedantic…
2. Like in equation jeopardy, students are given a mathematical model (an equation) and have to argue for situations in which it would or would not make sense. The not-making-sense part is the tricky part. What I’m really going for are situations in which all the variables are relevant but in some other combination or relationship. For instance, F = (9.8 N/kg)·m may make sense for the gravitational force on a rocket but not for its thrust.
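A rough sketch of check (d) from idea 1: before accepting a slope, ask whether a constant model already threads every error bar. The data points and the ±0.3 uncertainty below are invented for illustration:

```python
# Does a horizontal line (y = mean) stay inside every error bar?
# If so, a linear fit adds a parameter the data can't justify.
data = [(1.0, 4.9), (2.0, 5.2), (3.0, 4.8), (4.0, 5.1)]  # made-up (x, y) pairs
uncertainty = 0.3  # invented uniform error bar on every y

def constant_model_fits(points, err):
    """True if a single constant y = mean passes through all error bars."""
    ys = [y for _, y in points]
    mean = sum(ys) / len(ys)
    return all(abs(y - mean) <= err for y in ys)

# Here every point is within 0.3 of the mean, so don't reach for a slope.
print(constant_model_fits(data, uncertainty))
```

Only when this check fails is the extra parameter of a linear model defensible.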

I really like both ideas. My post-lab discussions definitely need a protocol; I could see turning that into a poster. And equation jeopardy would make for great group problem solving. I think a sign of “intellectual maturity” would be when kids can follow the protocol without being prompted.

I recall talking with Eugenia Etkina about prompting vs. non-prompting. She’s all for prompting the whole time: it’s the only way to get kids engaged in deeper thinking and to force them to think about the connections. It’s silly to expect kids to have the thinking habits of experts in less than a year.

About error bars: I like having students do multiple trials for each level of the independent variable and then graph all the data points instead of the average for each level. It acts like natural error bars and helps students see where the data actually overlap for constant relationships.
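That “natural error bars” idea can be made concrete: keep every trial, then ask whether the per-level spreads share a common value. The trial values below are invented, three trials per level of the IV:

```python
# Raw trials keyed by level of the independent variable (made-up numbers).
trials = {
    0.5: [2.05, 1.98, 2.10],
    1.0: [2.00, 2.12, 1.95],
    1.5: [2.08, 1.97, 2.03],
}

def ranges_overlap(trials_by_level):
    """True if the [min, max] trial ranges at every level share a common value,
    i.e. the raw scatter is consistent with a constant relationship."""
    lows = [min(v) for v in trials_by_level.values()]
    highs = [max(v) for v in trials_by_level.values()]
    return max(lows) <= min(highs)

# The three spreads all overlap, so the scatter supports a constant model.
print(ranges_overlap(trials))
```

Plotting every point makes that overlap visible on the graph itself, with no error-bar formalism needed.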

Thanks for the advice!