| Publication Date | Count |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 2 |
| Since 2011 (last 5 years) | 13 |
| Descriptor | Count |
| --- | --- |
| Test Items | 10 |
| Test Construction | 8 |
| Computer Assisted Testing | 6 |
| Difficulty Level | 5 |
| Adaptive Testing | 4 |
| Models | 4 |
| Test Validity | 4 |
| Content Validity | 3 |
| Elementary Secondary Education | 3 |
| Evidence | 3 |
| Source | Count |
| --- | --- |
| Journal of Applied Testing… | 13 |
| Author | Count |
| --- | --- |
| Lissitz, Robert W. | 2 |
| Ackermann, Richard | 1 |
| Burke, Matthew | 1 |
| Crotts, Katrina | 1 |
| Davis-Becker, Susan L. | 1 |
| Devore, Richard | 1 |
| Eguez, Jane | 1 |
| Ewing, Maureen | 1 |
| Ganguli, Debalina | 1 |
| Guo, Fanmin | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 13 |
| Reports - Research | 7 |
| Reports - Evaluative | 5 |
| Reports - Descriptive | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 4 |
| Higher Education | 3 |
| Postsecondary Education | 2 |
| Adult Basic Education | 1 |
| High Schools | 1 |
| Secondary Education | 1 |
Showing all 13 results
Katz, Irvin R.; Tannenbaum, Richard J. – Journal of Applied Testing Technology, 2014
Web-based standard setting holds promise for reducing the travel and logistical inconveniences of traditional, face-to-face standard setting meetings. However, because there are few published reports of setting standards via remote meeting technology, little is known about the practical potential of the approach, including technical feasibility of…
Descriptors: Standard Setting, Comparative Analysis, Feasibility Studies, Program Implementation
Smith, Russell W.; Davis-Becker, Susan L.; O'Leary, Lisa S. – Journal of Applied Testing Technology, 2014
This article describes a hybrid standard setting method that combines characteristics of the Angoff (1971) and Bookmark (Mitzel, Lewis, Patz & Green, 2001) methods. The proposed approach utilizes strengths of each method while addressing weaknesses. An ordered item booklet, with items sorted based on item difficulty, is used in combination…
Descriptors: Standard Setting, Difficulty Level, Test Items, Rating Scales
Burke, Matthew; Devore, Richard; Stopek, Josh – Journal of Applied Testing Technology, 2013
This paper describes efforts to bring principled assessment design to a large-scale, high-stakes licensure examination by employing the frameworks of Assessment Engineering (AE), the Revised Bloom's Taxonomy (RBT), and Cognitive Task Analysis (CTA). The Uniform CPA Examination is practice-oriented and focuses on the skills of accounting. In…
Descriptors: Licensing Examinations (Professions), Accounting, Engineering, Test Construction
Hendrickson, Amy; Ewing, Maureen; Kaliski, Pamela; Huff, Kristen – Journal of Applied Testing Technology, 2013
Evidence-centered design (ECD) is an orientation towards assessment development. It differs from conventional practice in several ways and consists of multiple activities. Each of these activities results in a set of useful documentation: domain analysis, domain modeling, construction of the assessment framework, and assessment…
Descriptors: Evidence, Test Construction, Educational Assessment, Learning Theories
Luebke, Stephen; Lorie, James – Journal of Applied Testing Technology, 2013
This article is a brief account of the use of Bloom's Taxonomy of Educational Objectives (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) by staff of the Law School Admission Council in the 1990 development of redesigned specifications for the Reading Comprehension section of the Law School Admission Test. Summary item statistics for…
Descriptors: Classification, Educational Objectives, Reading Comprehension, Law Schools
Luecht, Richard M. – Journal of Applied Testing Technology, 2013
Assessment engineering is a new way to design and implement scalable, sustainable and ideally lower-cost solutions to the complexities of designing and developing tests. It represents a merger of sorts between cognitive task modeling and engineering design principles--a merger that requires some new thinking about the nature of score scales, item…
Descriptors: Engineering, Test Construction, Test Items, Models
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Crotts, Katrina; Sireci, Stephen G.; Zenisky, April – Journal of Applied Testing Technology, 2012
Validity evidence based on test content is important for educational tests to demonstrate the degree to which they fulfill their purposes. Most content validity studies involve subject matter experts (SMEs) who rate items that comprise a test form. In computerized-adaptive testing, examinees take different sets of items and test "forms" do not…
Descriptors: Computer Assisted Testing, Adaptive Testing, Content Validity, Test Content
Li, Ying; Jiao, Hong; Lissitz, Robert W. – Journal of Applied Testing Technology, 2012
This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…
Descriptors: Achievement Tests, Science Tests, Item Response Theory, Measures (Individuals)
Kingsbury, G. Gage; Wise, Steven L. – Journal of Applied Testing Technology, 2011
Development of adaptive tests used in K-12 settings requires the creation of stable measurement scales to measure the growth of individual students from one grade to the next, and to measure change in groups from one year to the next. Accountability systems like No Child Left Behind require stable measurement scales so that accountability has…
Descriptors: Elementary Secondary Education, Adaptive Testing, Academic Achievement, Measures (Individuals)
Rudner, Lawrence M.; Guo, Fanmin – Journal of Applied Testing Technology, 2011
This study investigates measurement decision theory (MDT) as an underlying model for computer adaptive testing when the goal is to classify examinees into one of a finite number of groups. The first analysis compares MDT with a popular item response theory model and finds little difference in terms of the percentage of correct classifications. The…
Descriptors: Adaptive Testing, Instructional Systems, Item Response Theory, Computer Assisted Testing
Wandall, Jakob – Journal of Applied Testing Technology, 2011
Testing and test results can be used in different ways. They can be used for regulation and control, but they can also be a pedagogic tool for assessment of student proficiency in order to target teaching, improve learning and facilitate local pedagogical leadership. To serve these purposes the test has to be used for low stakes purposes, and to…
Descriptors: Test Results, Standardized Tests, Information Technology, Foreign Countries
Jacobsen, Jared; Ackermann, Richard; Eguez, Jane; Ganguli, Debalina; Rickard, Patricia; Taylor, Linda – Journal of Applied Testing Technology, 2011
A computer adaptive test (CAT) is a delivery methodology that serves the larger goals of the assessment system in which it is embedded. A thorough analysis of the assessment system for which a CAT is being designed is critical to ensure that the delivery platform is appropriate and addresses all relevant complexities. As such, a CAT engine must be…
Descriptors: Delivery Systems, Testing Programs, Computer Assisted Testing, Foreign Countries