
A Campaign for Quality: What Next? Presentation to the Workshop "Preparing for the next round of TLQPRs", Hong Kong, 8.4.2000

Don F. Westerheijden, CHEPS, University of Twente, the Netherlands

  1. Review of the TLQPRs

    In 1998-99, I had the honour to chair an international team that reviewed the first round of TLQPRs in Hong Kong (TLQPR Review Team 1999). The UGC had asked for such a review, and the Consultative Committee also showed keen interest in it. Our brief was to:

      1. To evaluate the TLQPR against its stated aims,

      2. To recommend improvements, and

      3. To give a broad-brush evaluation of TDG and CAV in relation to the TLQPRs.

    Today, I should like to put our Review Team's work in the broader context of international trends concerning quality assurance in higher education. In the light of that broader perspective, I will then revisit our main recommendations in terms of their costs and benefits.

  2. Some Trends in Quality Assurance in Higher Education

    • 2.1 Learning Systems

      The very fact that a review of the TLQPRs was organised after their first round, with a view to their improvement or, if that should prove the better option, their discontinuation, is an example of applying quality improvement principles to the quality improvement mechanisms themselves. In other words, it is an application of the principles of the 'learning organisation' (Senge 1990) to the quality policy in Hong Kong's higher education. While I would not maintain that public policy around the world is a full-fledged learning system, it has become good practice in many fields of public policy, including higher education, to evaluate policies after a number of years with a view to their possible continuation and improvement. Sometimes, such periodic reviews of quality assessment practices are conducted internally by the organisation co-ordinating the external quality assurance. The working parties that revised the quality assessment protocols in the Netherlands within the Association of Universities (VSNU) are an example of such an informal review. The Californian regional accreditation organisation, WASC, organised a public debate among its stakeholders last year; that is another model for generating feedback on one's external quality assessment policy. The Hong Kong UGC took a more research-oriented and formally evaluative approach when it invited the external consultants of our Review Team. This 'royal road' was also chosen in Norway, Denmark and Belgium (Flanders), to mention but a few examples from Northwestern Europe (Scheele et al. 1998).

    • 2.2 Trend: Towards Accountability and Uniformity

      An intriguing topic in the coming years will be the development of quality assurance after completion of the current cycle. Apart from Hong Kong, this stage has been reached in, amongst others, France, the UK and the Netherlands. In the Netherlands and in the United Kingdom, steps have been taken to continue the external quality assurance, but with some changes. In the Netherlands, incremental though important adaptations have been proposed after mainly internal feedback and development processes, as mentioned above. In the UK the changes are much larger, involving the encompassing review of the whole higher education system under Lord Dearing and the establishment of a new quality assessment agency. Both cases exemplify a trend, forecast amongst others by De Groof (1998), based on the concept of reaching (academic) standards. Hence the initiative in the Netherlands to start pilot projects in programme accreditation (in the vocational institutions of higher education, at bachelor's level). Some British developments with regard to national standards and qualifications, aimed at greater transparency of the quality assurance system, would seem to be going in the same direction.

      The main aim of the external reviews in such cases is to show society that certain standards are being met. In other words, the main aim is accountability for programme quality. The emphasis on accountability is growing stronger in the cases mentioned, compared to previous quality assessment arrangements. At the same time, the stress on national standards means that the idea of judging programmes against their own goals and aims is being abandoned, which implies a drive towards greater uniformity of programmes.

    • 2.3 Counter-Trend: Towards Diversity and Institutional Autonomy

      Yet, to maintain that in new rounds of external quality assessments everything points towards a 'tightening of the screws' of accountability would be too one-sided. There are also signals of a counter-trend. Thus, for example, again in the Netherlands, the university association VSNU decided to open up its external quality assessment to more tailor-made assessments by enabling universities to choose their own assessors instead of following the standard procedure with mostly peers from the Netherlands. This shows that even within the framework of a national procedure for assessing programme quality, diversity can be built in.

      In other European countries, one can find examples going further in the direction of allowing diversity in external quality reviews. Thus in France, the situation can be summarised as a continuation of the first round of institutional evaluations with an interesting shift in emphasis, i.e. more attention to the follow-up of external audits by the university, and to specific, local topics. Other countries, starting later with their quality reviews, could jump to a second-generation approach immediately. Finland is a good example: it allows its universities to choose any evaluation approach they prefer, as long as the national evaluation council approves of the solidity and transparency of the evaluation procedure chosen. Here, the responsibility for assuring quality is clearly laid on the universities, while leaving all possible room for the institutions' autonomous decisions regarding their views of quality and the needs they see for external support and review.

    • 2.4 Commonalities in Trend and Counter-Trend

      Taking a step back, the paradox of trend and counter-trend can be partly resolved in (at least) two ways. First, there is the growing involvement of external stakeholders in quality assurance activities. It has become more and more accepted, both in circles of quality assessment agencies and in academe, that views on quality legitimately differ between the various stakeholders in higher education, and that all stakeholders have a right to voice their opinion. Ivory tower mentalities are giving way to accepting that mass higher education takes place in close interaction with society, and hence that societal stakeholders, especially students and employers, have a right to judge the quality of higher education for themselves. The forms in which this happens may differ; that is why we saw above a growing emphasis on standards and accreditation as 'efficient' forms of communicating quality levels to society in some cases, while in other cases stakeholders become involved in reviews in more formative ways.

      The other commonality concerns the 'harder' position of national (governmental) agencies in subsequent rounds of evaluation. It is possible to interpret this as a clearer crystallisation of these agencies' role as guardian of the public weal. In combination with this, the higher education institutions take a more autonomous stance. The argument runs as follows. As a result of a first round of external evaluations, higher education institutions have started to attend more to the quality of their education (and research). After stimulating the development of quality assurance mechanisms in the institutions during the first round, governmental agencies can now step back, let the universities do their job, and need only check that they indeed do so. But this is not really new to the higher education community in Hong Kong, for this division of roles has been a presupposition of the whole TLQPR process.

      This suggests a phase model for the development of quality assurance systems, but I shall not go into that now. More pertinently for this paper's argument, it suggests that there is a shift in the costs and benefits of quality assurance, and in the division of costs and benefits among the main parties concerned, i.e. the evaluators and the higher education institutions. The evaluators reduce their monitoring costs by making external evaluation simpler for themselves. The question that will concern us next is what happens to the costs and benefits of the higher education institutions.

  3. Costs and Benefits for Higher Education Institutions in a Second TLQPR

    To treat this question, I would first like to remind you briefly of the main outcomes of our Review.

    • 3.1 Main Conclusions

      Our Review Team's remit was to evaluate the TLQPRs independently, against the three goals formulated for the TLQPR: (a) focusing attention on teaching and learning, (b) assisting the higher education institutions in their efforts to improve teaching and learning, and (c) enabling the UGC and the higher education institutions to discharge their obligation to be accountable for quality. On the whole, we phrased our main conclusion as: the TLQPR was 'the right instrument at the right time'. This characterisation may seem too positive to be true, but some qualifications are implicit in it. First, other instruments could have been 'right' too, but the TLQPRs were among the best available. Second, it was right at that time, but times change, partly because of the impact of the first TLQPR. By this I mean that we expected some of the changes made in Hong Kong's higher education institutions around or after the first TLQPR to remain. For example, in some universities quality assurance processes became more formalised. A second TLQPR, therefore, would be faced with changed higher education institutions, and many 'easy gains' could not be made again. In other words, the benefits of an unchanged repetition would be lower than those of the first TLQPR.

      What then were the benefits of the first TLQPRs? There were clear achievements towards goal (a), commitment to teaching and learning in the higher education institutions. The signal given by the introduction of an evaluation procedure for the quality of teaching and learning, namely that teaching and learning are highly important for Hong Kong's higher education institutions, was seen by many as the prime impact. Regardless of the form of the review process, it shifted the balance of attention in the higher education community, which had previously tilted towards research as a result of the introduction of the research assessment exercise (RAE).

      There were achievements towards goal (b), i.e. assisting higher education institutions to improve their teaching and learning quality assurance processes. We found institutionalisation of quality management procedures in higher education institutions, especially in universities that did not have very institutionalised quality management before. We also found a number of examples of innovation in existing quality processes.

      With regard to goal (c), the discharge of accountability by the UGC and the higher education institutions, we found that the process, the reports and the progress reports showed the whole world that the Hong Kong higher education institutions were accounting for their quality improvement processes. We could not gauge to what extent the relevant audiences in Hong Kong's society are satisfied with the present type of accountability, still less whether they would remain so in future. In the eyes of the Hong Kong higher education community, the way the press reported on the TLQPR reports did not give a balanced reflection of the process. However, we could not study this aspect in sufficient detail to be certain whether, e.g., universities' issuing press statements could redress that balance.

    • 3.2 What Next? Maintaining a Positive Balance of Costs and Benefits

      Based on the major conclusions summarised above, and on the findings stated earlier in our report, the Review Team made a number of recommendations. I will review those now with a view to maintaining as positive a balance as possible between costs and benefits for the higher education institutions concerned.

      First, we advised continuing the 'campaign for quality'. That is to say, a second round of some form of separate, external quality review of education would be needed as a signal for the higher education community. The signal should reinforce the first TLQPR's message that, according to the UGC, education is indeed of prime importance for Hong Kong's higher education. The very fact that this workshop has been organised bears testimony to the intention to make this happen. I do not claim that our Review Team did much to reach this conclusion: it was 'in the air'. The interesting question for us then, and for today's workshop too, was rather: what should such a review look like?

      Our second main recommendation addresses this question. We recommended continuing attention to quality processes. We are in favour of maintaining the focus on quality processes, or perhaps, in terms that Professor Massy now uses, 'quality work', rather than on the quality of the delivered education itself. A main argument is that this is a much lighter procedure than the alternative, leading to reduced review costs for all.

      The third main recommendation had to do with clarity about the aims and objectives of an external review. We found that the communication about them from the UGC to the academics within the higher education institutions might be improved. Setting a clear agenda helps to make it clear to participants what costs and benefits they can expect. I do not mean to suggest that there should be an a priori statement about the financial penalties or rewards for performing well in the next TLQPR. While I understand the feelings of those who told us during our interviews that the phrase that the TLQPR outcomes would 'inform funding' is ominously vague, I am also of the opinion that establishing a direct and substantial link between quality reviews and core funding would turn quality review into a strategic game to gain money (Westerheijden 1990: 206). My understanding of costs and benefits, in this contribution, is broader than just the financial ones. Accordingly, what we advocated in this recommendation is intended to reduce uncertainty about what the UGC and the higher education institutions actually wish to achieve through the TLQPRs. Less uncertainty helps to focus efforts, so that the costs of going through a TLQPR will be reduced.

      There is, however, more to this recommendation. Clarifying one's goals often means narrowing them. We expected that a major gain for a second round of TLQPRs could be made if the agenda to be pursued were narrower than the first time. For the first round, three main goals of the TLQPR had been identified by the UGC, as stated before.

      The first goal, to focus attention in the higher education community on education, would be aided by any type of future TLQPR, as maintained above. At issue is the balance between the second and third goals, respectively to support the higher education institutions to improve the quality of teaching and learning, and to be accountable for the quality of teaching and learning. A certain minimum of both functions must be fulfilled in any case; to that extent quality improvement and accountability are indeed 'two sides of the same coin' (Vroeijenstijn 1989). For that reason, too, we suggested in several places that there should be a common core of issues and approaches in a second TLQPR.

      However, above a certain threshold, the two goals may come into conflict. To maximise the quality improvement character of an external quality review, for example, the reviewers must act practically as consultants: in a flexible manner, in an atmosphere where it is possible to discuss strengths and weaknesses openly, giving feedback to the university about its individual problems. If, on the other hand, accountability is to be maximised, a standard approach is needed, guaranteeing equal treatment of all and equal information about all, designed with a view to publishing the external reviewers' findings. In the latter case, we argued in our report, the external reviewers behave like auditors rather than consultants.

      For the purpose of an audit of quality processes, fewer unit visits, for example, would be needed than the first time, because they are only meant to check whether central policies are indeed implemented; they are not intended to give immediate feedback to the units about the strengths and weaknesses of their quality processes and how to improve them. Equally, instead of reviewing a broad range of quality processes, such auditors could identify a small number of crucial processes. For example, they could focus only on how the universities assess students' learning outcomes (an issue of crucial importance). Or, taking issues of perhaps somewhat lesser centrality, they could focus on how external examiners are used in curriculum design, on quality assurance of the delivery of lectures, or on the functioning of teaching improvement units. One could imagine that in a series of TLQPR rounds, the next one would focus on assessment processes for learning outcomes and external examiners' input in curriculum design, while a third round would review the assessment processes (supposing that this is a core issue every time) together with the functioning of teaching improvement units. Universities would still need to have in place mechanisms covering all important aspects (cf. the matrix developed by the TLQPR-panel), but a thorough check would be limited to just some aspects. In this way, the preparation costs could be reduced.

      Let us return now to the case in which quality improvement is put more in the focus of attention. Since improvement of quality processes is much more tied to the situation within the individual higher education institution, a much more tailor-made approach would be advisable. We recommended such an approach, especially after we had experienced the different approaches to quality assurance of education in the seven Hong Kong higher education institutions. These different approaches can probably be explained by their different histories with respect to quality management and their different views on their internal structure and management. A university that is strong on, e.g., recruitment policies for its staff but that has only weakly developed its system of student evaluation questionnaires may realise that it needs a different type of advice from external consultative reviewers than a university where the converse situation exists. Indeed, it may need different types of reviewers.

      An improvement-oriented review could consist of a small common core of issues that need to be raised in every higher education institution every time, while devoting most of its attention to issues suggested by the higher education institution, by the UGC (from other information it has), or issues that result from judgements by the TLQPR-panel in the first round. The costs of preparing and implementing the review could be reduced because of the smaller number of issues to be studied in the self-evaluation process, through smaller review panels, etc. Equally important, here we saw a possibility to reach larger benefits than before by focusing on selected issues, although we were aware that 'the easy gains' had already been made in the first round.

      As a fourth area of recommendations, the Review Team thinks that, in order to attain internalisation of a quality culture in the higher education community, it is important to maximise local ownership of the process. The TLQPR Panel was successful in its 'campaign for quality', but better embedding in the Hong Kong higher education community is desirable for the future. I pass over this recommendation quickly on this occasion, because it is not of major interest from the point of view of the immediate balance of costs and benefits, although we do think it essential for reasons of sustainability.

      Finally, we made a number of more detailed, or more practical, recommendations with regard to the elements of the TLQPR process. Let me just highlight the main one, which concerns the self-evaluation process. It is practically a universal finding that this is the most valuable part of the review process. Accordingly, a self-evaluation should not only be seen as a cost factor (and probably the largest one in the whole review process at that), but also as an investment in quality awareness, in knowledge of quality assurance policies and instruments, in commitment of the university's academic staff, etc. If one tries to reduce the costs of a self-evaluation process, one should therefore be aware that the returns might diminish too. Thus, where in the foregoing I mentioned possibilities to streamline the self-evaluation process by leaving out issues, one should be aware that cutting issues might mean that fewer people are involved, with a consequent reduction of benefits.

      In the same vein, we recommended studying further how external support for the self-evaluation process could be developed. External support has been introduced in the management reviews. Although this external support was principally designed to aid the external reviewers rather than the higher education institutions, I understand that most institutions found it to facilitate their self-evaluation process. The more a self-evaluation phase in a review process is needed to ensure commitment of university staff, and the more quality improvement is aimed at, the more actual involvement of university staff is needed and the smaller the net benefit that can be expected from outsourcing this work. This does not mean that all work needs to be done inside the university involved in the review; there may be good reasons to use efficient external support rather than rely on intelligent but less experienced and expensive inside support. External help in self-evaluation may also be very appropriate to ensure quick dissemination of good practices throughout the Hong Kong higher education institutions. I cannot make the final judgement on the appropriate balance between insourcing and outsourcing in the context of future TLQPRs; I just want to clarify the arguments that led the Review Team to be cautious in its recommendation on this point.

  4. Concluding Observation

    Cautiousness has been a guiding principle for our Review Team throughout the review of the TLQPRs that we performed. We are aware that external reviewers have limited knowledge and understanding of their subject, however well we have been briefed and however frank our interlocutors in the Hong Kong higher education community have been in the discussions that we had with them. I hope we have not been so cautious as to write only bland and self-evident statements. Equally today, I hope that I have not merely complicated your thoughts by expanding somewhat on the arguments we had for making our main recommendations, but that my presentation may help to advance your discussions.

    Thank you for your attention.

References
- De Groof, J. (1998). To be continued ... from a lawyer's perspective. In To be continued...: Follow-up of quality assurance in higher education (J. P. Scheele, P. A. M. Maassen & D. F. Westerheijden, eds.) (pp. 76-83). Maarssen: Elsevier/De Tijdstroom.
- Scheele, J. P., Maassen, P. A. M. & Westerheijden, D. F. (eds.) (1998). To be continued...: Follow-up of quality assurance in higher education. Maarssen: Elsevier/De Tijdstroom.
- Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday/Currency.
- TLQPR Review Team (1999). A Campaign for Quality: Hong Kong Teaching and Learning Quality Process Review. Hong Kong: University Grants Committee of Hong Kong.
- Vroeijenstijn, A. I. (1989). Autonomy and assurance of quality: Two sides of one coin. In Proceedings of the International Conference on Assessing Quality in Higher Education (T. W. Banta & M. W. Bensey, eds.). Knoxville: University of Tennessee.
- Westerheijden, D. F. (1990). Peers, Performance, and Power: Quality assessment in the Netherlands. In Peer Review and Performance Indicators: Quality assessment in British and Dutch higher education (L. C. J. Goedegebuure, P. A. M. Maassen & D. F. Westerheijden, eds.) (pp. 183-207). Utrecht: Lemma.