
Continuous improvement, validity and reliability

One of the most difficult aspects of applying reliability and validity to assessment is recognizing that, over time, minor changes to content, expectations, or interpretations of content have a cumulative effect that can undermine assessments. In other words, it is very difficult to move beyond a static view of reliability and validity. Over time, many small changes are likely to accumulate in content, in the interpretation of assignments, and in the emphasis placed on specific content elements. There seems to be much more focus on updating material and far less on the reliability and validity of assessments. Even very small changes in course content can significantly affect both. For example, the recently released APA formatting guidelines (6th edition) may represent a change that, in the instructor's perception, should not affect reliability and validity. Add that to the mix of small changes made to refine the course. Yet this is only a perception, unsubstantiated by evidence. On one hand, I understand it is not necessary to review the validity and reliability of assessments every time a minor change is implemented; on the other hand, I am a bit worried about the cumulative effect of small increments over time.

This module has prompted me to go back, review these small changes, and assess their cumulative impact on the validity and reliability of the assessments.

Does anyone have guidelines or experiences that would prompt an instructor to go back and review the validity and reliability of assessments?

Ron,

What a great point! You are right: we have to change our assessments as we change content. I review my objective assessments after every delivery. It is easier now because I can run statistical analyses and have data on the student results. I also look at test item analysis; it gives you a look at how the students answer. I think I will go back and review the standards.
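The test item analysis Dr. Wilkinson describes typically reports two statistics per item: difficulty (the proportion answering correctly) and discrimination (how well the item separates stronger from weaker students). A minimal sketch of those computations follows; the student data are invented for the example.

```python
def item_difficulty(responses):
    """Proportion of students answering the item correctly (p-value)."""
    return sum(responses) / len(responses)

def item_discrimination(item_responses, total_scores):
    """Point-biserial correlation between one item and total scores."""
    n = len(item_responses)
    mean_item = sum(item_responses) / n
    mean_total = sum(total_scores) / n
    cov = sum((i - mean_item) * (t - mean_total)
              for i, t in zip(item_responses, total_scores)) / n
    sd_item = (sum((i - mean_item) ** 2 for i in item_responses) / n) ** 0.5
    sd_total = (sum((t - mean_total) ** 2 for t in total_scores) / n) ** 0.5
    if sd_item == 0 or sd_total == 0:
        return 0.0  # item answered identically by everyone
    return cov / (sd_item * sd_total)

# Five students, three items (1 = correct, 0 = incorrect; invented data).
scores = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
]
totals = [sum(row) for row in scores]
item1 = [row[0] for row in scores]
print(round(item_difficulty(item1), 2))            # 0.8
print(round(item_discrimination(item1, totals), 2))
```

An item whose discrimination drops across deliveries is a natural trigger for the kind of review discussed above.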

Dr. Kelly Wilkinson

Yes, it is easier when you can run the reports, and for me, I report any changes or inconsistencies I find to the design team. I think it's great to reach out to other instructors who may be teaching the course, but that can be a challenge too, with everyone's busy schedules.

Bernadette,

That is true. That is why I like having a venue like this to share strategies. It does take time.

Dr. Kelly Wilkinson

Yes, I teach project-based digital media, where a combination of software skills and design principles is taught. Technology and software change on a daily basis. Because of this, over time both reliability and, most visibly, validity change in the "real world," and those changes are quickly adopted in the classroom.
Thank you.
Prof. Carrinton

Kelly,

Great post! In your field "real world" is important. You want the student to integrate their learning. The learning environment should reflect that.

Dr. Kelly Wilkinson

I have not observed any changes in validity and reliability when minor course changes are implemented. I agree that it is possible for minor course changes to have an impact on validity and reliability; however, in my professional experience, I have not seen any indication that such changes occurred.

Donald,

It really depends on the quality of the first assessment and whether it measures what you want it to measure. I have seen changes when someone started to implement project-based learning. That was a challenge to both validity and reliability.

Dr. Kelly Wilkinson
