1. Course Information
Psychology 9545A. (Fall 2020). Test Construction
Half course (0.5); Fall term. MONDAY, 9:30 am to 12:00 noon. Start date: Monday, September 14, 2020.
2. Instructor Information
Prof. Don Saklofske email@example.com Office: SSc 7314
3. Course Description
This course is intended for psychology graduate students who need to develop test/assessment instruments, such as questionnaires, short performance scales, observation schedules, or interview checklists, for their current or planned research program.
Students should know in advance which variables/factors they intend to measure with the newly developed instrument (e.g., resiliency, motivation, interpersonal conflict, happiness) and be familiar with the relevant research and assessment issues. Students should also have completed at least a foundational course in psychometrics as well as intermediate statistics, and be familiar with statistical packages such as SPSS. Students are expected to complete the basic literature review needed to operationalize their measure, develop prototypes and drafts of the scale/measure, and initiate data collection in order to provide a preliminary demonstration of the psychometric integrity and utility of the measure. While each project will stand alone, common themes such as test format options, item writing, reliability and validity, data collection, and standardization will be discussed in the larger group, creating a richer, collaborative, and supportive learning opportunity.
Students interested in applying to this course require the approval of the instructor and should meet with him to determine the 'goodness of fit'. A 1-2 page outline of the proposed measure should be submitted at or before this initial meeting.
4. Course Materials
Textbook and Other Useful References:
DeVellis, R.F. (2017). Scale Development: Theory and Applications (4th Ed.). Sage.
A reading list of key publications relevant to the measure being developed will be prepared by each student at the start of the class. Journals and book chapters will be recommended to either the group or individuals as required during the class. Journals that provide excellent examples of test/measurement development and validation include: Assessment, European Journal of Psychological Assessment, Journal of Psychoeducational Assessment, Psychological Assessment.
5. Course Outline
Outline and proposal for constructing a measure for your research
To ensure this 'lab'-style class, with both an individual and a group focus, works for you, I am requesting that you complete this brief outline giving an overview of, and timelines for, your particular objectives. This will let me know what you propose to accomplish in this class, and it will serve as the guideposts for how best to support you in meeting your objectives. Depending on the measure, tryout/pilot results, etc., not everyone will end up moving into more extensive data collection, but each student will be in a very good position to present their measure and summarize its 'current status' with a clear direction going forward.
Following the DeVellis (2012, 2017) guidelines summarized at the end of this note, please answer the questions below and return your responses to me before the start of class.
Tentative title of the measure you propose to develop:
Brief description (1-2 page) of the theoretical/conceptual/research underpinnings of your larger study for which a new or adapted measure is required:
Brief description of the construct/variables/factors to be measured:
Type/method/format of measurement (e.g., questionnaire, interview schedule, observation checklist, performance task, etc.) and why this choice:
First draft of measure and research outline (expected date):
Presentation for feedback to class (date):
Pilot study (item/scale tryout) and preliminary data collection/analysis re. rxx, rxy, etc. (date):
Revised measure based on data/results/feedback (date):
Round 2 of data collection and data analysis (date):
Paper describing the above: 3000-5000 words for the full paper; see any of the articles in Assessment, Journal of Psychoeducational Assessment, etc., that describe a new/modified scale and its results (final assignment and presentation at the end of class in December).
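The pilot-study milestone above mentions a preliminary reliability estimate (rxx); Cronbach's alpha is the statistic most commonly reported for this. As a transparent illustration of the computation (which SPSS's reliability procedure also performs), here is a minimal Python sketch; the pilot data are invented for demonstration only:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 6 respondents x 4 items on a 1-5 scale
pilot = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(pilot), 3))
```

With a pilot sample this small the estimate is only indicative; the point of the sketch is the formula, alpha = (k/(k-1))(1 - sum of item variances / variance of total scores), not the particular value.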
Following DeVellis' (2012) guidelines, a researcher undertakes five steps to develop an initial scale, illustrated here with an example:
First, the content of the scale was grounded in substantive theory and literature review. For example, Bandura's (1997) self-efficacy theory was adopted as the theoretical framework to clarify the construct being measured. Considering that self-efficacy beliefs can vary significantly across contexts, the researcher assessed mainstream preservice teachers' self-efficacy in working with ELLs, which pinpointed the target population and the specificity level of self-efficacy.
Second, an item pool was created based on a systematic literature review. Four sources of information were synthesized for item generation: (1) psychometric scales on teacher self-efficacy, (2) journal articles on teaching ELLs, (3) textbooks/books on ELL education, and (4) the teaching license requirements of states (e.g., California, Florida, and New York) that require all preservice teachers to receive training in ELL education. This variety of information helped ensure that the items reflected the purpose of the scale and were supported by empirical evidence and teaching practices.
Third, the researcher decided on the format and wording of the items. The 100-point scale format (Bandura, 2006) was selected because it is more sensitive and reliable in capturing respondents' differentiated judgments than traditional Likert scales (Pajares, Hartley, & Valiante, 2001). Following Bandura's (2006) suggestion, the researcher phrased the items in terms of can do, because "can is a judgment of capability; will is a statement of intention" (p. 43).
Fourth, the scale items were reviewed by a panel of experts to identify potential problems and enhance content validity. The researcher established a panel of fifteen experts consisting of faculty members, P-12 practitioners, and administrators with expertise in ELL instruction. These individuals first completed the survey and then provided constructive feedback on how to revise the scale.
Fifth, the revised items were pretested with a sample of preservice teachers to further improve the scale. Ten preservice teachers were recruited to complete the scale. They were also invited to discuss the relevance, clarity, and conciseness of the items.
Further revision was made based on the results of this pilot study.
And… Onwards and upwards… including pilot testing (RSVP)
6. Methods of Evaluation
Each student will complete a proposal of their intended outcomes, which will serve as the 'criteria' by which grades are assigned. Projects will range from scale creation and preliminary data collection/analyses to more detailed validity and application support. This class emphasizes both process and product in the creation of the proposed measure, and thus the evaluation will be both formative and summative, and essentially criterion-referenced. Progress, decision making, and feedback will occur throughout the course.
7. Statement on Academic Offences
Scholastic offences are taken seriously and students are directed to read the appropriate policy, specifically, the definition of what constitutes a Scholastic Offence, at the following Web site: http://www.uwo.ca/univsec/pdf/academic_policies/appeals/scholastic_discipline_grad.pdf
All required papers may be subject to submission for textual similarity review to the commercial plagiarism-detection software under license to the University for the detection of plagiarism. All papers submitted for such checking will be included as source documents in the reference database for the purpose of detecting plagiarism of papers subsequently submitted to the system. Use of the service is subject to the licensing agreement, currently between The University of Western Ontario and Turnitin.com (http://www.turnitin.com).