Center for Information Technology Services
University of Maryland Baltimore
eLearning Steering Committee Meeting Notes
June 9, 2005
Prepared by: Chris Klimas
Attendees: Dave Pitts (SOWK), Darryl Roberts (Nursing), Vavian Cox (CITS), Brent Nickles (CITS), Carol O'Neil (Nursing), Kevin So (Nursing), Barbara Covington (Nursing ILT), Julie Gilliam (CITS), Debra Spunt (Nursing CSC/ILT), Chris Phillips (CITS), Jim Leoni (CITS), Sarita Sanjoy (SOMPTRS), Alexa Mayo (HS/HSL Library), Eileen Patton (SOM-DMRT), Jeff Hawk (SOMPTRS), James Craig (Dental), Peter Murray (CITS), Gary Early (Law), Shannon Tucker (PHARM), Chris Klimas (PHARM), Peter Atsaves (Blackboard), Jenny Roth (Blackboard), Kate Bishop (Blackboard)
This meeting with Blackboard representatives covered many issues we are currently experiencing with assessments in Blackboard 6, and also included information on AppPack 3 as well as other future Blackboard products.
Assessments and Question Pools
AppPack 3 will bring a number of new question types for Blackboard assessments. Most notable are a Likert scale question for course evaluations, and a "hotspot" question type that asks students to click a particular section of an image. The hotspot question type has a fairly major limitation: the correct area must be rectangular, and there can be only one correct hotspot. This makes it very difficult to create a question where a student has to identify an irregularly-shaped item (e.g. an organ).
One recommended way to make cheating harder while students are taking an assessment is to randomize the order of questions. There's also an option to draw a random set of questions from a saved question pool. It's possible to match questions with specific learning outcomes or difficulty levels, so that while each student may receive a different set of questions, they can be tested on the same material and presented with the same level of difficulty -- in theory, at least. A student at Social Work filed a grievance when this was tried there, so caution may be warranted.
There are two caveats when working with assessments composed of randomly-selected questions:
- You cannot aggregate test results and export them to an Excel file, as you can normally.
- There is no built-in way to run discriminant functions on the results, as you may normally do in Excel.
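The matched random draw described above can be sketched in a few lines of Python. The pool structure, field names, and counts here are purely illustrative assumptions, not Blackboard's internals -- this only shows the idea of drawing different questions per student while holding the difficulty mix constant.

```python
import random

# Hypothetical question pool: each question is tagged with a difficulty
# level. Structure and field names are illustrative, not Blackboard's.
pool = [
    {"id": 1, "difficulty": "easy"},
    {"id": 2, "difficulty": "easy"},
    {"id": 3, "difficulty": "easy"},
    {"id": 4, "difficulty": "hard"},
    {"id": 5, "difficulty": "hard"},
    {"id": 6, "difficulty": "hard"},
]

def draw_assessment(pool, counts):
    """Draw a random set of questions, stratified by difficulty,
    so every student receives the same mix of levels."""
    drawn = []
    for level, n in counts.items():
        candidates = [q for q in pool if q["difficulty"] == level]
        drawn.extend(random.sample(candidates, n))
    random.shuffle(drawn)  # also randomize the presentation order
    return drawn

# Each student gets 2 easy and 1 hard question, in random order.
test = draw_assessment(pool, {"easy": 2, "hard": 1})
```

Two students calling `draw_assessment` get different question sets and orders, but the same 2-easy/1-hard composition.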
Question pools are a way to re-use questions across many assessments. Creating a question pool works much like creating a test -- the same question-creation options are available. While it is possible to export a question pool and then re-import it into another course (or even a different section of a course), the process is a bit cumbersome.
An interface hassle -- where, if you wanted to import all the questions from a given pool into an assessment, you had to click each question's checkbox manually -- has been fixed in AppPack 3.
Other advice Blackboard representatives offered included presenting all assessments one question at a time. This makes browser time-outs less likely, since each submission of an answer to the server resets the time-out. It also saves students' progress as they go, so that if they experience a crash halfway through an exam, their answers up to that point are preserved.
Keep in mind that if a student takes longer than the time limit set on an exam, Blackboard does not force the test attempt to end. It simply reports how long the attempt took, and leaves it up to the instructor to decide what to do. The one problem with this is that it is not immediately obvious which students in a class went over time -- you have to look up each attempt's details individually. Blackboard suggested we look into a Building Block for this, or construct one ourselves to provide the functionality.
To prevent cheating, Blackboard recommended several third-party products: Securexam, TurnItIn, and SafeAssignment. There is a significant cost associated with each of these products. Shannon Tucker indicated Pharmacy had tried Securexam, but had had major problems with a real-time test administered in class.
A survey is like a test, except the results are kept anonymous and randomized, which makes it ideal for course evaluations. Through a feature called adaptive release, AppPack 3 will allow you to make other course material available only after a student has completed a survey, for example -- although you cannot cut off access to final grades this way.
If you find yourself needing more flexibility when it comes to course evaluations, Blackboard recommends putting all evaluations into a separate course. There are also third-party solutions for evaluation that may be helpful.
There is a free webinar on June 20 covering Blackboard's new product for more detailed assessments, tentatively called Caliper. Development is in a very early stage, so input now will help shape Blackboard's plans. Blackboard also gave us demo copies of Blackboard Backpack, which allows students or instructors with laptops to synchronize with the online version of Blackboard, then work with it offline. It does not allow students to take assessments offline, however.
Next meeting: July 14, 2005 -- location TBA -- including demo of Tegrity