Publication Date
| Date Range | Count |
|---|---|
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 2 |
| Since 2006 (last 10 years) | 2 |
| Since 1996 (last 20 years) | 2 |
Descriptor
| Descriptor | Count |
|---|---|
| Behavioral Science Research | 1 |
| Comparative Analysis | 1 |
| Computer Assisted Testing | 1 |
| Correlation | 1 |
| Effect Size | 1 |
| Evaluation | 1 |
| Meta Analysis | 1 |
| Questionnaires | 1 |
| Reliability | 1 |
| Response Style (Tests) | 1 |
Source
| Source | Count |
|---|---|
| Practical Assessment,… | 2 |
Author
| Author | Count |
|---|---|
| Gamliel, Eyal | 2 |
| Cahan, Sorel | 1 |
| Peer, Eyal | 1 |
Publication Type
| Publication Type | Count |
|---|---|
| Journal Articles | 2 |
| Reports - Descriptive | 1 |
| Reports - Research | 1 |
Education Level
| Education Level | Count |
|---|---|
| Higher Education | 1 |
| Postsecondary Education | 1 |
Showing all 2 results
Peer, Eyal; Gamliel, Eyal – Practical Assessment, Research & Evaluation, 2011
When respondents answer paper-and-pencil (PP) questionnaires, they sometimes modify their responses to correspond to previously answered items. As a result, this response bias might artificially inflate the reliability of PP questionnaires. We compared the internal consistency of PP questionnaires to computerized questionnaires that presented a…
Descriptors: Response Style (Tests), Questionnaires, Reliability, Undergraduate Students
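The internal consistency this abstract compares across paper-and-pencil and computerized questionnaires is typically quantified with Cronbach's alpha. A minimal sketch of that statistic (the function name and sample data are illustrative, not taken from the article):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Perfectly correlated items yield alpha = 1.0
scores = np.array([[1, 1], [2, 2], [3, 3]])
print(cronbach_alpha(scores))
```

A response bias that pushes answers toward earlier items raises inter-item correlations, which is exactly what drives this coefficient upward, hence the article's concern about artificial inflation.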
Cahan, Sorel; Gamliel, Eyal – Practical Assessment, Research & Evaluation, 2011
Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., η², f²) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation, that is, measures of dispersion about the mean. In…
Descriptors: Social Sciences, Effect Size, Evaluation, Behavioral Science Research
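The two effect-size measures the abstract names can be sketched directly from their standard definitions: η² is between-group sum of squares over total sum of squares, and Cohen's f² is η²/(1 − η²). A minimal illustration (function names and sample data are ours, not from the article):

```python
import numpy as np

def eta_squared(groups):
    """eta^2 = SS_between / SS_total for a list of 1-D group score arrays."""
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

def cohens_f2(eta2):
    """Cohen's f^2 expresses between-group variability relative to within-group variability."""
    return eta2 / (1 - eta2)

# Two groups whose means differ by 2 with equal within-group spread
groups = [np.array([0.0, 2.0]), np.array([2.0, 4.0])]
e2 = eta_squared(groups)
print(e2, cohens_f2(e2))
```

Both measures compare variability components via sums of squared deviations about means, which is the "dispersion about the mean" framing the abstract contrasts with alternatives.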

