International Journal of Language Testing

International Journal of Language Testing (IJLT; formerly the Iranian Journal of Language Testing) is a scholarly, double-blind peer-reviewed international journal published biannually, in October and March. IJLT publishes original research articles on language testing, assessment, and evaluation, with a particular focus on testing theory, psychometrics, and the practical implications of language tests for classroom assessment.

International Journal of Language Testing (IJLT) is an open-access journal: all content is freely available without charge to users and their institutions. Users may read, download, copy, distribute, print, search, or link to the full texts of the articles, or use them for any other lawful purpose, without seeking prior permission from the publisher or the authors. This accords with the Budapest Open Access Initiative (BOAI) definition of open access.

Ownership of IJLT is officially held by the Tabaran Institute of Higher Education, which is affiliated with the Iranian Ministry of Science, Research and Technology.

 

Publication Information

Frequency: Semiannual

Keyword Cloud

  • Validity
  • Reliability
  • Assessment
  • C-test
  • Rasch model
  • Validation
  • reading comprehension
  • EFL teachers
  • IELTS
  • Language assessment
  • assessment literacy
  • Language Assessment Literacy
  • Authenticity
  • Dynamic assessment
  • cloze test
  • Fairness
  • Local item dependence
  • Self-assessment
  • cognitive diagnostic assessment
  • Formative Assessment
  • EFL learners
  • partial credit model
  • listening
  • Unidimensionality
  • Q-matrix
  • EFL
  • listening comprehension
  • Rater training
  • Classroom Assessment
  • Test fairness
  • Foreign language anxiety
  • Assessment Practices
  • Writing assessment
  • non-compensatory
  • L2 writing
  • testing
  • reduced redundancy tests
  • UEE
  • language proficiency
  • reading comprehension test
  • Diagnostic Classification Models
  • PIRLS
  • teacher-based assessment
  • language testing
  • test impact
  • reading comprehension ability
  • Measurement Invariance
  • High-Stakes Test
  • Cognitive Diagnostic Models (CDMs)
  • Q-Matrix construction
  • test preparation
  • Ambiguity Tolerance
  • Confirmatory Factor Analysis
  • Multiple-choice items
  • test development
  • Assessment as learning
  • DIF
  • Item Response Theory
  • speaking
  • DINA
  • exploratory factor analysis
  • test anxiety
  • Factor analysis
  • rating scale model
  • feedback literacy
  • compensatory
  • Differential Item Functioning
  • construct identification
  • Second language
  • Writing
  • Test Administration
  • Test format
  • Teaching Methodology
  • GDINA
  • connected speech
  • Construct validity
  • WDCT
  • Zone of Proximal Development
  • questionnaire design
  • Gender
  • Pragmatic Assessment
  • Iraqi EFL teachers
  • dictation
  • teachers’ practice
  • feedback
  • practices
  • reduced forms
  • Translation Quality Assessment
  • CDMs
  • Multiple-group IRT
  • Motivation
  • Impact
  • Ethics in language testing
  • attribute
  • Cognitive Diagnostic Assessment
  • PhD admission interviews
  • French as a Foreign Language
  • Many Facet Rasch Model
  • Mental representations
  • EFL assessment
  • Students’ perception
  • rating quality
  • assessment for learning
  • Automated Writing Evaluation (AWE)
  • Integrated assessment
  • Rater
  • General Education
  • Inter-rater reliability
  • attitudes toward reading
  • Teacher-Mediated Dynamic Assessment
  • LID
  • Teacher assessment literacy
  • ZPD
  • general English course
  • metacognitive strategy use
  • Mokken-scale analysis
  • questionnaire development
  • Chinese test taker perception
  • teacher identity
  • Text Complexity
  • Unidimensional IRT
  • Professional Development
  • Graduate Record Examination (GRE)
  • Polytomous Data
  • Language Ability
  • EFL writing assessment
  • EFL Context
  • Reading
  • IRT
  • evaluation apprehension
  • Academic Buoyancy
  • Perception
  • university admission
  • RASCH measurement
  • Integrated Listening/Speaking Assessment
  • Analytical Scoring Scale
  • high school GPA
  • Chinese as a second language
  • self-regulation
  • Exam Anxiety
  • Test-led Changes
  • Differentiated assessment
  • Distractor analysis
  • Iranian Universities
  • multi-dimensionality
  • Undergraduates
  • washback
  • Rubrics
  • assessment tools
  • Test design
  • item quality
  • Life issues
  • TEFL Ph.D. Entrance Exam
  • Attitudinal and Beliefs Dimensions
  • Expectations
  • Monotone Homogeneity Model
  • Construction-integration model
  • Generalizability Theory
  • consequence of test use
  • model fit indices
  • Grammatical Range and Accuracy
  • proficiency levels
  • rating criteria
  • EFA
  • English as a foreign language
  • Attitude
  • self-efficacy
  • General Language Ability
  • classroom-based teacher assessment
  • oral language assessment
  • WTC
  • Local Stochastic Dependence
  • Experienced EFL Teacher
  • Oral Performance
  • advanced translation
  • interlanguage pragmatics
  • English M.A. university entrance examination
  • Online Group Dynamic Assessment (GDA)
  • Interactive Discourse Completion Tasks
  • Teaching and Learning
  • TCF
  • communicative proficiency
  • Non-Uniform DIF
  • compensatory selection
  • B1 Preliminary English test
  • corpus linguistics
  • Performance Assessment
  • Communicative Language Testing
  • constructed response
  • gender DIF
  • Cronbach’s alpha
  • alternative assessment literacy
  • local English tests
  • Inferencing
  • Writing assessments
  • Raters’ severity
  • Grammatical knowledge
  • online Interaction Learning Model
  • L2 speech act performance assessment