Measuring Information Systems Service Quality: Concerns for a Complete Canvas
This paper responds to the research note in this issue by Van Dyke et al. concerning SERVQUAL, an instrument for measuring service quality, and its application in the IS domain. It attempts to balance some of the arguments they raise from the marketing literature with the well-documented counterarguments of SERVQUAL's developers, as well as our own research evidence and observations in an IS-specific environment. Specifically, evidence is provided to show that the perceptions-minus-expectations formulation in SERVQUAL is far more rigorously grounded than Van Dyke et al. suggest; that the expectations construct, while potentially ambiguous, is generally a vector in the case of an IS department; and that the dimensions of service quality seem as applicable to the IS department as to any other organizational setting. The paper then demonstrates that the reliability problems of difference-score calculations in SERVQUAL are not nearly as serious as Van Dyke et al. suggest, and that while perceptions-only measurement of service quality might have marginally better predictive and convergent validity, this comes at considerable expense to managerial diagnostics. It also reiterates some of the problems of dimensional instability found in our previous research, highlighted by Van Dyke et al. and discussed in many other studies of SERVQUAL across a range of settings. Finally, four areas for further research are identified.
Leyland F. Pitt, Richard T. Watson, and C. Bruce Kavan
Keywords: Measurement, reliability, validity, service quality, marketing of IS, IS research agenda