Foundations of the NBCOT Certification Examinations
Exam Development and Construction
NBCOT annually recruits subject matter experts (SMEs) to develop new items for the examinations in accordance with the accreditation standards relating to the use of qualified personnel for examination development. Specifically, NCCA standards (2014) indicate SMEs must represent the appropriate demographics of the population to be certified and provide insight and guidance into examination processes.
SMEs are OTR and COTA certificants who represent the profession in terms of practice experiences, geographic regions, gender, and ethnicity. After questions, or “items,” for NBCOT examinations are developed, they undergo a rigorous review process by an additional committee of SMEs. This review is designed to: validate that the knowledge and tasks measured are compatible with the domain-level blueprint specifications; assess the relative criticality and frequency of each item to occupational therapy practice; and confirm that each item meets generally accepted fairness guidelines.
NBCOT takes into account the fairness of its examinations during item development. NBCOT adheres to recognized item writing, test development, and review procedures to ensure readability, neutral language, and the universal accuracy of terms used in its items. Additional fairness criteria include, but are not limited to:
- Editing items for issues of bias and stereotyping;
- Coding items to the approved examination blueprint specifications;
- Referencing items to approved and published resources in occupational therapy;
- Selecting subject matter experts who are OTR and COTA practitioners and educators from diverse geographical areas, practice experiences, and cultures;
- Field-testing items prior to their use as scored items on the exam.
During the examination process, fairness is addressed through standardized procedures regarding the registration process, accessibility issues, roles of proctors, and security of test materials and equipment. Fairness is addressed after the examination by consideration of confidentiality, accuracy of scoring, and timeliness of reporting the results.
The COTA examination consists of single-response multiple-choice items and six-option multi-select items.
The single-response multiple-choice item contains a stem and three or four possible response options. Of the response options presented, there is only one correct or best answer. Candidates earn a point if the correct response is selected.
The six-option multi-select items include a question stem followed by six possible response options. Of the options provided, three are correct responses and the other three are incorrect responses. The candidate must select three response options in order to proceed to the next item on the exam. Examples of multi-select items can be found here.
Candidates are allotted four hours to complete the examination. The COTA examination consists of 200 multiple-choice items based on the exam outline. Multiple-choice items include the single-response items in which the candidate selects the BEST option, as well as multi-select items in which the candidate must select the three BEST options out of six options. Multiple-choice and multi-select items are presented one at a time to the candidates. Some of the items may have a picture or chart included that contains information needed to answer the question. During the examination, candidates are able to highlight text that they deem important. A strikeout feature is also available to help candidates visually eliminate options. Candidates can mark items for review and change their item responses as testing time allows, or until they submit the examination for scoring. If time runs out before a candidate reviews the marked items, the selected responses will be submitted for scoring. No credit will be given to a marked item that has no response option selected.
At the start of the examinations, candidates have the option of taking a tutorial about the functionality of the test screens. Time spent on the examination tutorial is not deducted from the four-hour test clock. Details on the features of the computer, as well as additional functionality of the exam, can be viewed by accessing the online tutorial.
The OTR examination is divided into two distinct components: clinical simulation test (CST) problems and single-response multiple-choice items. Each CST problem consists of three main parts:
- Opening scene – This includes general background information about a practice-based situation that sets the scene for the entire CST problem.
- Sections – A series of four sections, each with a section header. Section headers provide information specific to the OT process that is addressed within the section.
- Response options and feedback – This includes a listing of potential options the OTR may consider in response to the question posed in the section header. The list of options in the CST problem consists of positive and negative options. A candidate must select either “Yes” or “No” for each option in a CST section before proceeding to the next section of a CST problem. Selecting “Yes” will result in feedback appearing to the right of the option. The feedback provides additional information related to the outcome of the “Yes” options, but does not give information on whether the candidate’s response was correct or incorrect. Feedback is not provided when “No” is selected.
At the option level, one point is awarded if “Yes” is selected for a positive option or “No” is selected for a negative option. Alternatively, no points are awarded if “No” is selected for a positive option or “Yes” is selected for a negative option.
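The option-level scoring rule described above can be sketched as a short function. This is an illustration only, not NBCOT’s actual scoring implementation; the option labels and data layout are hypothetical.

```python
# Sketch of the CST option-level scoring rule: a "positive" option earns
# a point when "Yes" is selected, and a "negative" option earns a point
# when "No" is selected. All other selections earn no points.
# The list-based representation here is a hypothetical illustration.

def score_cst_section(options, responses):
    """options: list of 'positive'/'negative'; responses: parallel list of 'Yes'/'No'."""
    points = 0
    for kind, answer in zip(options, responses):
        if kind == "positive" and answer == "Yes":
            points += 1
        elif kind == "negative" and answer == "No":
            points += 1
    return points

# Two positive options answered "Yes" earn points; "Yes" on the negative option does not.
print(score_cst_section(["positive", "positive", "negative"],
                        ["Yes", "Yes", "Yes"]))  # 2
```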
In each section of a CST problem, candidates have the option of scrolling to previous screens within the same CST problem to view opening scenes, section information, responses they have selected, and feedback for positive selections; however, no changes to responses can be made. Candidates must complete each CST problem one at a time in the order presented.
Examples of the CST problems can be found by accessing the following links:
The multiple-choice item contains a stem and three or four possible response options. Of the response options presented, there is only one correct or best answer. Candidates earn a point if the correct response is selected.
Candidates taking the OTR exam are allotted four hours to complete the examination. The OTR examination includes two sections: a three-problem clinical simulation test (CST) section and a 170-item single-response multiple-choice section.
The CST problems are presented one at a time. Candidates are required to select “Yes” or “No” for each option presented. Once a response is selected in a CST problem, it cannot be de-selected. After response choices have been selected for each option within a section, the candidate clicks “Next” to proceed to the subsequent section. Candidates are able to navigate back to see previous screens in the same CST problem, but response choices cannot be changed once the candidate has progressed to a new screen. Candidates must complete the CST section of the OTR examination before proceeding to the multiple-choice section. For additional information on these features, as well as functionality of the CST problems, view the online tutorial and access the sample CST problems here:
Once a candidate enters the multiple-choice section of the OTR examination, he or she is not able to reenter the CST section. In the OTR multiple-choice section, candidates can mark items for review and change their item responses as testing time allows, or until they submit the examination for scoring. If time runs out before a candidate reviews the marked items, the selected responses will be submitted for scoring. No credit will be given to a marked item that has no response option selected.
During the entire examination, candidates are able to highlight text that they deem important. A strikeout feature is also available to help candidates visually eliminate possible options.
Each section of the OTR examination includes an optional tutorial. Time spent using a tutorial is not deducted from the four-hour testing time. Details on the features of the exam computer, as well as additional functionality of each section of the exam, can be viewed by accessing the online tutorial.
Percent correct and number of correct scores are simply other ways of reporting raw scores and therefore do not resolve the issue of comparability of scores across different versions of the exam. Although each version of the examination tests the same domains, tasks, and knowledge, each form contains a different set of test items, which means that one or more questions on one test form may differ in difficulty from the questions appearing on another test form. Simply using a raw score does not account for this difference.
Norm-referenced scoring is used to indicate performance differences among test takers, not to determine if a candidate has acquired specific knowledge. Best test practices involve the use of scaled scoring to allow direct comparison of scores across multiple versions of the examination, rather than comparison to other candidates’ performance.
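To illustrate why scaling makes scores comparable across forms, consider a simple linear raw-to-scaled conversion. The slope and intercept values below are hypothetical, and the actual equating method NBCOT uses is not described here; the point is only that a harder form can map a lower raw score to the same scaled score as a higher raw score on an easier form.

```python
# Illustrative linear raw-to-scaled conversion. The slope and intercept
# values are hypothetical; they are not NBCOT's actual equating parameters.

def to_scaled(raw, slope, intercept):
    """Convert a raw score to a scaled score via a linear transformation."""
    return round(slope * raw + intercept)

# Suppose Form A is harder than Form B. Equating assigns Form A a steeper
# slope, so a raw 140 on Form A and a raw 150 on Form B represent the same
# level of performance and receive the same scaled score.
form_a = to_scaled(140, slope=1.25, intercept=275)    # harder form
form_b = to_scaled(150, slope=1.1667, intercept=275)  # easier form
print(form_a, form_b)  # 450 450
```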
Candidates who pass the examination receive a performance feedback report that includes a congratulatory letter and the candidate’s earned exam score. NBCOT also provides performance feedback to those candidates who do not achieve the passing standard on the examination. This performance feedback report includes the candidate’s score, as well as the average score of new graduates who recently passed the examination. The report also includes a domain-level performance chart to help a candidate identify areas of relative strength and weakness and additional information regarding how individual domain scores can be used. Finally, a list of Frequently Asked Questions is presented to address queries regarding the determination of the passing score, use of scaled scores, candidate performance comparisons, score reporting, exam preparation, and exam content.
Each NBCOT exam includes a pre-selected number of field-test items. Although these items are not considered when determining candidates’ scores, performance data is collected and analyzed for each field-test item. The statistical analysis of the field-test items is an important quality control measure NBCOT uses to preserve the reliability and validity of the examinations. Candidates are not able to distinguish between the scored and non-scored items.
Once a sufficient number of responses are collected on a field-test item, the item statistics are reviewed based on pre-determined psychometric measures. Field-test items meeting these metrics are entered into the bank of items that can be used as scored items on subsequent exams. Field-test items whose statistics fall below these metrics are flagged for additional review and revision before undergoing further field-testing.
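Psychometric screening of this kind commonly relies on classical item statistics such as the difficulty index (the proportion of candidates answering correctly). The sketch below is a hypothetical illustration of such a screen; the statistics chosen and the thresholds are assumptions, not NBCOT’s published criteria.

```python
# Hypothetical field-test item screen using a classical difficulty index.
# The thresholds (min_p, max_p) are illustrative assumptions only;
# NBCOT's actual psychometric criteria are not published here.

def item_difficulty(responses):
    """Difficulty index (p-value): proportion of candidates answering correctly."""
    return sum(responses) / len(responses)

def passes_screen(responses, min_p=0.25, max_p=0.90):
    """Flag items that are too hard or too easy for further review and revision."""
    p = item_difficulty(responses)
    return min_p <= p <= max_p

responses = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]  # 1 = correct, 0 = incorrect
print(item_difficulty(responses))  # 0.7
print(passes_screen(responses))    # True
```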