
Ask a question from your peers to help you in your professional work. Seek different points of view on a topic that interests you. Start a thought-provoking conversation about a hot, current topic. Encourage your peers to join you in the discussion, and feel free to facilitate the discussion. As a community of educators, all members of the Career Ed Lounge are empowered to act as a discussion facilitator to help us all learn from each other.

Subjective assessment

I often use subjective assessment with my online students via written, essay-type answers. A study showed the value of subjective assessment used in anesthesiology residency e-learning (Chu et al., 2013).

Reference
Chu, L. F., Ngai, L. K., Young, C. A., Pearl, R. G., Macario, A., & Harrison, T. K. (2013). Preparing interns for anesthesiology residency training: Development and assessment of the Successful Transition to Anesthesia Residency Training (START) e-learning curriculum. J Grad Med Educ, 5(1), 125-129. doi:10.4300/JGME-D-12-00121.1

Alternative assessment

Because alternative assessment is conducive to the online environment, I try to use it with my students. Students being able to evaluate their own work and benefit from the evaluation process is wonderful. Great module!

Formative Assessment

Formative assessment is critical in online programs. I am aware of the need to use such assessment to better understand my students' ongoing progress; the results allow me to modify lessons for improvement. The value of a structured online formative assessment program has been noted in a randomized controlled trial (Palmer & Devitt, 2014).

Reference
Palmer, E., & Devitt, P. (2014). The assessment of a structured online formative assessment program: A randomised controlled trial. BMC Med Educ, 14, 8.

Technology tools

Technology tools are very important for a solid online course. In my health science courses I use technology tools and point students to the online resources most appropriate for this population. Technology tools are crucial to the success of online courses for medical students (Han, Nelson, & Wetter, 2014).

Reference
Han, H., Nelson, E., & Wetter, N. (2014). Medical students' online learning technology needs. Clin Teach, 11(1), 15-19.

Digital Portfolios

There are many sites with digital portfolio capabilities, many of them free. I feel it is extremely important to have a platform that is easy for your students to access and "play" with. The site chosen should be user-friendly for the instructor as well. The most important thing is that you have "played" with it enough and saved things to your own portfolio so you can guide your students through it. A simple internet search turned up more than 20 free sites for saving digital portfolios, and several more that charge a fee. How would you narrow this much information down? I think discussions with fellow instructors and others who have worked with these tools are a good start. Many schools have established accounts with certain sites, so do yourself a favor and check with your Academic Dean first to see if there is some direction there. Who do you think is important to touch base with on this topic?

Rubric

How do we properly weight an exam rubric used in the medical professional field, where assessment moves from practical work with simulated patients to clinical practice?
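One way to reason about the weighting (a minimal sketch of the arithmetic only; the criteria, weights, and 1-4 scale below are illustrative assumptions, not taken from any particular program) is to assign each rubric criterion a weight reflecting its importance along the simulated-patient-to-clinical progression, then roll the weighted ratings up into a single score:

```python
# Hypothetical criteria and weights for a simulated-patient-to-clinical exam rubric.
# Names, weights, and the 1-4 performance scale are assumptions for illustration;
# real weights should come from program outcomes and faculty consensus.
criteria = {
    "history taking":       {"weight": 0.20, "score": 3},
    "clinical reasoning":   {"weight": 0.35, "score": 4},
    "procedural technique": {"weight": 0.30, "score": 2},
    "communication":        {"weight": 0.15, "score": 4},
}

MAX_LEVEL = 4  # top of the performance scale

# Weights should sum to 1 so the total stays on the original scale.
assert abs(sum(c["weight"] for c in criteria.values()) - 1.0) < 1e-9

weighted_total = sum(c["weight"] * c["score"] for c in criteria.values())
percent = 100 * weighted_total / MAX_LEVEL
print(f"Weighted score: {weighted_total:.2f} / {MAX_LEVEL} ({percent:.0f}%)")
```

Shifting weight toward the clinical criteria (or treating certain criteria as must-pass and checking them separately) changes how the same ratings translate into a final grade, which is essentially the decision being asked about here.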

Tools

How do we integrate the new tools with older students?

Student Attention

How do you, as the professor, maintain a high level of student engagement?

Diagnostic Assessment

What are some good examples of diagnostic assessments for computer courses? I find people often overestimate their capabilities when answering self-assessment questions.

Reflective Journaling

Has anyone used reflective journaling as an assessment of learning? I understand managing the journals could be time-consuming, but they could provide additional benefits such as improved writing. Jacquie Porter

Fighting a grade

Does anyone have students who continue to fight their grade despite a detailed, organized rubric? For example, I once had a student argue with me about receiving an A- after I marked her lower on the student-participation area of the rubric. She was supposed to deliver a lesson plan and engage students in the process. Her response was that it was "ridiculous" that I would expect her to be responsible for other students being engaged in her presentation/lesson. Furthermore, when it was her turn to listen to others and engage with their lesson plans, she chatted with her friends and looked at her laptop instead. If my students aren't engaged in my classroom, online or face to face, I take it that I need to revise my content/curriculum, because they cannot learn if they are not engaged.

Technical surveys

One thing I find that differs greatly among students is their technical skills. Some people are very tech savvy and not afraid of learning new software and hardware, while other students seem hesitant and need to have their hands held every step of the way. I find that giving a diagnostic tool in the form of a technical survey helps me differentiate instruction and engage all learners in an online environment, helping each and every student increase their technical skills at their own comfort level.

Rubrics in the classroom help to motivate students

I've found that a good way to build a set of rubrics is to begin with the course objectives and each of the assignments. The flow should be course objectives, to grading rubric, to classroom student assignment. This way we can use the assessment to determine whether students are getting out of the classroom what we expected them to, and circle back to improve the course curriculum. At the same time, we can provide students with feedback to help them throughout the course.
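As a rough sketch of that flow (the objective names, rubric criteria, and assignment titles below are made-up placeholders, not drawn from the original post), the alignment can be written down as a simple map and checked for gaps before the course runs:

```python
# Hypothetical alignment map: course objective -> rubric criteria -> assignments.
# All names are placeholders for illustration.
alignment = {
    "CO1: Analyze case data": {
        "rubric_criteria": ["data interpretation", "use of evidence"],
        "assignments": ["Case Study 1", "Final Project"],
    },
    "CO2: Communicate findings to a lay audience": {
        "rubric_criteria": ["clarity", "audience awareness"],
        "assignments": [],  # not yet covered by any graded work
    },
}

# Flag any course objective that no assignment (and therefore no rubric use) assesses yet.
for objective, links in alignment.items():
    if not links["assignments"]:
        print(f"Gap: '{objective}' is not assessed by any assignment")
```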

Grading subjective materials

I teach a Careers class where we build individual resumes, etc. This is such a subjective area, and sometimes it's difficult for me to use traditional grading methods. I find that using a digital portfolio works best for me.

Competency-based MBA Programs

Just recently, one of the institutions I work for started an innovative MBA program based on competencies. Essentially, a competency assessment is performed before the student takes any specific courses in the program, in order to measure the student's knowledge and skills against the expected knowledge and skills of an MBA graduate. This assessment, up to a certain degree, allows students to get constructive credit for some courses based upon their previous knowledge and experience, which means they can complete the MBA program in less than the traditional two years. However, the success or failure of the program rests on these prior assessments, or more specifically on how authentic assessment is performed. The institution's ability to reliably and validly present a series of tasks with meaningful application, to assess how well the student can synthesize and demonstrate competencies from various basic courses, is crucial to the success of this program. If the rubrics are oriented too much towards course content, the number of students able to pass these courses based on their competencies could be very limited. On the other hand, if the rubrics focus too much on real-world problems and situations quite distant from course and program objectives, the institution could be awarding unwarranted credit. Since this is a relatively new program, it does not yet benefit from the input and feedback of students. Does anyone have any ideas on how to instill the first round of validity and reliability in a competency-based assessment?

Continuous improvement, validity and reliability

One of the most difficult aspects of applying reliability and validity to assessment is recognizing that, over time, minor changes to content, expectations, or interpretations of content have a cumulative effect that can undermine assessments. In other words, it is very difficult to overcome a static perspective of reliability and validity. Over time, it's very likely that many small changes occur in content, interpretation of assignments, and emphasis on specific content elements. There seems to be much more focus on updating material and less focus on the reliability and validity of assessments. It's possible that very small changes in course content can significantly affect the validity and reliability of assessments. For example, the recently released APA formatting (6th edition) may represent a change that, in the instructor's perception, should not affect reliability and validity. Add that to the mix of small changes made to refine the course. Yet this is a perception, unsubstantiated by evidence. On one hand, I understand it is not necessary to review the validity and reliability of assessments every time a minor change is implemented; on the other hand, I am a bit worried about the cumulative effect of small increments over time. This module has prompted me to go back, review these small changes, and assess their cumulative impact on the validity and reliability of the assessments. Does anyone have guidelines or experiences that would prompt an instructor as to when to go back and review the validity and reliability of assessments?

Rubrics: Process vs. knowledge

Does anyone else incorporate process into their grading rubrics? I do so to emphasize the work required to write multiple paper drafts.

First graduate assignments as diagnostic assessments

Since I teach the first course in a graduate business program, many students are not aware of the large number of skills they are expected to apply during their graduate studies. Accordingly, I take the time to let them know that their first-round assignments will also help them assess where they are with respect to the skills expected for the whole program. For me this is a valuable form of diagnostic assessment. During the chats and e-mail conversations for the first unit, I inform the students that they will be applying 10 skills simultaneously in their first round of assignments: basic computer skills, classroom navigation skills, online learning skills, Microsoft Word skills, scholarly-style writing, use of scholarly databases and library resources, APA formatting skills, critical thinking skills, judgment and decision-making skills, and experimental research skills. I also let them know they should not become frustrated with the number of skills they need to apply at once, but rather send me a note so we can discuss it over e-mail or a phone call. Once the first round of assignments is assessed and evaluated, I take the opportunity not only to provide summative assessment for the unit but also to offer formative assessment regarding the application of these 10 skills and their relevance throughout the entire graduate program. Then, in the first chat of the second unit, I take some time to discuss and offer an overall impression of skills competencies and suggest resources for those students who discovered they need to refine certain skills. Although this is not mandated by the institution I currently work for, I think it is a very practical and productive way to help students who have not been in school for a while recognize the range of skills necessary for graduate work, and at the same time provide them opportunities to refine some of those skills. Does anyone else have experience using the unit one deliverables as a form of diagnostic assessment?

Digital portfolios and journals

E-portfolios are very much like journals, but with multimedia content. In my graduate leadership classes (master's and doctorate) I've used journaling as an assessment tool, since it allows me to see how students apply in their workplace what they learn in the classroom. The fact that they have to know the material, find the moment to apply it, reflect on the event, and then document the experience in a journal helps me see progress throughout the session. Although some aspects of the journal may be very personal, I let them know that the final exam is a summary of major learning events documented in the journal. What strikes me as a possibility is to expand the documentation sources and allow students to use multimedia in their journaling. I think it would enhance the learning process and at the same time allow for some creativity. But then again, I'll also have to communicate the advantages and disadvantages of each form of media so the students don't dedicate more time to the technology than to documenting the experience. Essentially (and if properly done), journaling formats can be expanded to look more like e-portfolios. Has anyone tried expanding journaling this way, using multimedia based on the e-portfolio tool?

Evaluating Rubrics

Regardless of whether you are modifying an existing rubric, creating one from scratch, or using a rubric developed by another party, both before and after you use the rubric is a good time to evaluate it and determine whether it is the most appropriate tool for the assessment task. Questions to ask when evaluating a rubric include the following.

Does the rubric relate to the outcome(s) being measured? The rubric should address the criteria of the outcome(s) to be measured and no unrelated aspects. Does it cover important criteria for student performance? Is the rubric authentic; does it reflect what was emphasized for the learning outcome and assignment(s)?

Does the top end of the rubric reflect excellence, and is acceptable work clearly defined? Does the high point on the scale truly represent excellent ability? Does the scale clearly indicate an acceptable level of work? These levels should be based not on the number of students expected to reach them, but on current standards defined by the department, often taking into consideration the types of courses the student work was collected from (introductory or capstone).

Are the criteria and scales well-defined? Is it clear what the scale for each criterion measures and how the levels differ from one another? Has the rubric been tested with actual student products to ensure that all likely criteria are included? Is the basis for assigning scores at each scale point clear? Is it clear exactly what needs to be present in a student product to obtain a score at each point on the scale? Is it possible to easily differentiate between scale points?

Can the rubric be applied consistently by different scorers? Inter-rater reliability, also sometimes called inter-rater agreement, refers to the degree to which scorers can agree on the level of achievement for any given aspect of a piece of student work. Inter-rater reliability depends on how well the criteria and scale points are defined. Working together in a norming session to develop shared understandings of the definitions, and adjusting the criteria, scales, and descriptors accordingly, will increase consistency.
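On the inter-rater reliability point, one common way to quantify agreement after a norming session is Cohen's kappa, which adjusts raw percent agreement for the agreement expected by chance. A minimal sketch (the two scorers' rubric levels below are hypothetical, not real data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of student work."""
    n = len(rater_a)
    # Observed agreement: proportion of pieces both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal distribution of scores.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric levels (1-4) assigned by two scorers to eight student essays.
scorer_1 = [4, 3, 3, 2, 4, 1, 3, 2]
scorer_2 = [4, 3, 2, 2, 4, 1, 3, 3]
print(round(cohens_kappa(scorer_1, scorer_2), 2))  # ≈ 0.65
```

Values near 1 suggest the scorers are reading the criteria and scale descriptors the same way; low values are a signal to revisit the definitions in another norming session.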