[Saltassessmentworkinggroup] Help pls - assessment study design ?s
acurcio at gsu.edu
Wed Oct 14 19:21:13 CDT 2009
Mike kindly agreed to make our group into a list serv. This will allow others to join and we won't have to be worried about constantly hitting the right "reply all" button. Hopefully, all of you are o.k. with this new format.
In the spirit of the group's sounding board purpose, I'd love to kick off this new listserv with some questions I have regarding an ongoing assessment research project.
Last year in my evidence class, I did a lot of hypos and problems in class but never made students write out answers. I tested via a closed-book 3 and 1/2 hour final which consisted of short-answer and short-essay questions worth between 4 and 7 points each. I also handed out a case file before the final, telling students that about 40-50% of their final exam questions would come from evidence questions embedded in the case file. They were allowed to discuss those issues with colleagues if they wanted to do that.
This year in evidence, I will give a similar final exam format. However, I also gave a short graded mid-term with feedback from me and required students to do self-reflective analysis of their answers based upon a model answer and rubric, and I am giving my students an ungraded "quiz" every other week - either in class or as a take-home, they get a short essay question they must answer in writing. After they do so, they get a model answer/rubric, time for self- and/or peer edits, and time to ask questions in class, and they are asked to do self-reflection about their answers. I have begun asking them to turn in their self-reflections to help me see where they are going off-track [an interesting aside here - after looking at their self-reflective analysis, I've changed the questions I ask students to reflect upon - e.g. rather than "did you spot the issue" I now ask "the issue involved Rule 403; did you identify that as the applicable rule" because I found that even with the model answer in front of them, some students said they had identified the applicable issue when, in fact, they had not done so].
I want to see if the formative feedback I am giving students this year results in a statistically significant difference in students' ability to spot and analyze evidence issues.
This year's final exam will have some of the same questions as last year, although I will add different questions just in case some of the questions from last year somehow were circulated. I plan to compare last year's and this year's students' raw scores on four of the same questions to see if the practice/self-reflection resulted in any statistically significant differences in raw score points between the two classes.
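For what it's worth, a between-cohort comparison like this is often run as a two-sample t-test. Here is a minimal sketch in Python (assuming SciPy is available); the score lists are entirely made-up illustration data, not anyone's real grades, and Welch's variant is used because it does not assume the two classes have equal variances or sizes:

```python
# Hypothetical sketch: comparing two cohorts' raw scores on one shared
# exam question with Welch's two-sample t-test. The lists below are
# invented illustration data, not actual student scores.
from scipy import stats

last_year = [5.0, 4.5, 6.0, 3.5, 5.5, 4.0, 6.5, 5.0]  # question worth 7 pts
this_year = [6.0, 5.5, 6.5, 4.5, 6.0, 5.0, 7.0, 5.5]

# equal_var=False selects Welch's t-test, which tolerates
# unequal group sizes (e.g., ~65 students vs. 52).
t_stat, p_value = stats.ttest_ind(this_year, last_year, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

One design note: since four questions are being compared, running four separate tests inflates the chance of a false positive; a simple guard is a Bonferroni-style correction (treat a question's difference as significant only if p < 0.05/4).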
1. I am thinking that I need to have my secretary type out all the answers from each year's class to the four questions I am comparing and, after I finish grading this year's exams, I need to re-grade the four questions from both years so that I am really blind-grading. This seems like a lot of extra work but it does eliminate bias in terms of grading. Do you all agree I probably should do that?
2. In the last class this year, I want to give this year's students a survey asking whether they feel that the extra work helped them learn: a. how to study for evidence; b. the substance; and c. how to analyze an evidence test question. I'd love to use a survey that someone else has already done/validated that asks similar questions. Any suggestions here about a survey I could adapt?
3. There are some differences between these two classes I cannot control for, such as the following: a. last year's class was about 65 people; this year I have 52; b. last year's exam was a 3 and 1/2 hour closed-book exam; this year, it will be a 3 hour exam with a few fewer questions b/c of the mid-term [trying to keep the time pressure about the same between the two years]; c. last year's exam required all students to hand-write their answers in a designated space; this year the students can either write or type their answers [typers have a word count]; d. I don't know if the students in each class are similar (I am doing an LSAT/UGPA/first-year LGPA comparison to determine if they come into the class with similar grade-predictor stats).
Do you see any of the above differences as potentially result-altering hurdles to a valid study?
4. Do you have any questions/issues about the design that you think it would be useful for me to know while the study is ongoing [i.e. that I might be able to change or adjust before it's too late :-)]?
Thanks for any help you can provide!!!