International Journal of Language Testing

International Journal of Language Testing (IJLT, formerly the Iranian Journal of Language Testing) is a scholarly, double-blind, peer-reviewed international journal published biannually, in October and March. IJLT publishes original research articles on language testing, assessment, and evaluation. In addition, the journal focuses in particular on issues such as testing theory, psychometrics, and the practical implications of language tests for classroom assessment.

International Journal of Language Testing (IJLT) is an open access journal, which means that all content is freely available without charge to users and their institutions. Users may read, download, copy, distribute, print, search, or link to the full texts of the articles, or use them for any other lawful purpose, without seeking prior permission from the publisher or the author. This is in accordance with the Budapest Open Access Initiative (BOAI) definition of open access.

IJLT is officially owned by the Tabaran Institute of Higher Education, which is affiliated with the Iranian Ministry of Science, Research and Technology.


Publication Information

Publisher:
Editor-in-Chief:
Associate Editor:
Managing Editor:
Frequency: Semiannual
Online ISSN:
Indexing and Abstracting:

Keywords Cloud

  • Validity
  • Assessment
  • Reliability
  • Rasch model
  • C-test
  • Validation
  • EFL teachers
  • reading comprehension
  • IELTS
  • assessment literacy
  • Language assessment
  • Language Assessment Literacy
  • cloze test
  • Dynamic assessment
  • Authenticity
  • EFL learners
  • Fairness
  • partial credit model
  • cognitive diagnostic assessment
  • Formative Assessment
  • listening
  • Self-assessment
  • Local item dependence
  • Unidimensionality
  • non-compensatory
  • Writing assessment
  • Q-matrix
  • listening comprehension
  • Rater training
  • EFL
  • Foreign language anxiety
  • Test fairness
  • Assessment Practices
  • testing
  • L2 writing
  • Classroom Assessment
  • Q-Matrix construction
  • test anxiety
  • Test format
  • Second language
  • speaking
  • Iraqi EFL teachers
  • Item Response Theory
  • WDCT
  • Gender
  • UEE
  • test preparation
  • Cognitive Diagnostic Models (CDMs)
  • Diagnostic Classification Models
  • reduced redundancy tests
  • Confirmatory Factor Analysis
  • Measurement Invariance
  • Assessment as learning
  • test development
  • Pragmatic Assessment
  • dictation
  • exploratory factor analysis
  • DINA
  • feedback literacy
  • rating scale model
  • PIRLS
  • language testing
  • Teaching Methodology
  • questionnaire design
  • Writing
  • reading comprehension test
  • Differential Item Functioning
  • Impact
  • compensatory
  • construct identification
  • Test Administration
  • Factor analysis
  • teacher-based assessment
  • GDINA
  • connected speech
  • Construct validity
  • Zone of Proximal Development
  • test impact
  • reduced forms
  • DIF
  • Multiple-choice items
  • High-Stakes Test
  • Ambiguity Tolerance
  • teachers’ practice
  • feedback
  • practices
  • language proficiency
  • Translation Quality Assessment
  • CDMs
  • Motivation
  • Multiple-group IRT
  • reading comprehension ability
  • Ethics in language testing
  • attribute
  • Cognitive Diagnostic Assessment
  • PhD admission interviews
  • French as a Foreign Language
  • Many Facet Rasch Model
  • Mental representations
  • EFL assessment
  • Students’ perception
  • rating quality
  • assessment for learning
  • Automated Writing Evaluation (AWE)
  • Integrated assessment
  • Rater
  • General Education
  • Inter-rater reliability
  • attitudes toward reading
  • Teacher-Mediated Dynamic Assessment
  • LID
  • Teacher assessment literacy
  • Reading
  • general English course
  • metacognitive strategy use
  • Mokken-scale analysis
  • Chinese test taker perception
  • teacher identity
  • questionnaire development
  • Text Complexity
  • Unidimensional IRT
  • Zone of Proximal Development (ZPD)
  • Graduate Record Examination (GRE)
  • Polytomous Data
  • Language Ability
  • EFL writing assessment
  • EFL Context
  • ZPD
  • IRT
  • Professional Development
  • evaluation apprehension
  • Academic Buoyancy
  • Perception
  • university admission
  • RASCH measurement
  • Integrated Listening/Speaking Assessment
  • Analytical Scoring Scale
  • high school GPA
  • Chinese as a second language
  • self-regulation
  • Exam Anxiety
  • Test-led Changes
  • Differentiated assessment
  • Distractor analysis
  • Iranian Universities
  • multi-dimensionality
  • gender DIF
  • Undergraduates
  • washback
  • Rubrics
  • assessment tools
  • Test design
  • Life issues
  • TEFL Ph.D. Entrance Exam
  • Attitudinal and Beliefs Dimensions
  • Monotone Homogeneity Model
  • Expectations
  • Construction-integration model
  • Generalizability Theory
  • consequence of test use
  • model fit indices
  • Grammatical Range and Accuracy
  • proficiency levels
  • rating criteria
  • EFA
  • English as a foreign language
  • Attitude
  • self-efficacy
  • General Language Ability
  • classroom-based teacher assessment
  • Online Group Dynamic Assessment (GDA)
  • WTC
  • Local Stochastic Dependence
  • Experienced EFL Teacher
  • Oral Performance
  • advanced translation
  • interlanguage pragmatics
  • English M.A. university entrance examination
  • oral language assessment
  • Interactive Discourse Completion Tasks
  • Teaching and Learning
  • communicative proficiency
  • TCF
  • Non-Uniform DIF
  • compensatory selection
  • Performance Assessment
  • corpus linguistics
  • B1 Preliminary English test
  • Communicative Language Testing
  • constructed response
  • online Interaction Learning Model
  • Cronbach’s alpha
  • alternative assessment literacy
  • local English tests
  • Inferencing
  • Writing assessments
  • Raters’ severity
  • Grammatical knowledge
  • L2 speech act performance assessment
  • reduced redundancy