
Monday, 10 June 2013



Assessment in Technical and Professional Communication

Margaret N. Hundleby and Jo Allen, eds. 2010. Amityville, NY: Baywood Publishing Company. [ISBN 978-0-89503-379-6. 241 pages, including index. US$56.95.]

Assessment seems to be everywhere. It takes many forms, from performance reviews to grades on pop quizzes. Consumer products are assessed by Web sites, magazines, and television. Even book reviews such as this one evaluate the relevance, accuracy, usefulness, and overall value of books and software for various audiences.
The most discussed assessment in any era usually focuses on the effectiveness of education. U.S. teachers currently have the No Child Left Behind mandates from the federal government that are tied to federal assistance. Teachers must improve test scores to the required levels at the expense of helping students achieve their potential in ways that are often impossible to measure and quantify. Post-secondary schools are not exempt as legislators battle record budget deficits. Schools in Europe and elsewhere face similar difficulties. As a result, teachers and administrators in technical and professional communication programs are caught up in the assessment movement, which focuses on how to assess and what to do with the results.
Hundleby and Allen have assembled 14 essays meant to help teachers assess their courses and programs. The essays are divided into seven sections, each containing two essays: an extended discussion of a specific aspect of assessment and a “response essay” that expands the discussion. A foreword, an afterword addressing ethical issues, brief biographies of the contributors, and an index complete the book.
The essays approach assessment from various points of view. Some, such as Jo Allen’s excellent essay that places the goals and objectives of technical communication programs in the larger context of the university’s goals and objectives, present general overviews of the assessment landscape. Others, such as Nancy Coppola and Norbert Elliot’s, address specific assessment situations in specific schools and detail their methodology and results. In sum, the essays present both a general overview of assessment and specific applications of assessment to undergraduate programs, graduate programs, and service courses. Their broad range of perspectives can help you decide how to assess your program or course.
The essays' concern is to show where assessment serves as a valuable administrative tool. You won't find practical, assignment-level applications of assessment techniques, such as point systems versus holistic grading for specific assignments. You will, however, find material on quantitative methods of assessing programs and courses that you can adapt to your specific situation.
One sticking point of assessment has always been establishing criteria. As Gerald Savage asks in his response to an essay about relational models in assessment, “How are the competencies of scholars, students, and practitioners in the field assessed?” (p. 166). For example, do administrators adopt criteria based on those used in industry, business, or government performance assessments? Or do they turn to national organizations such as STC or the National Council of Teachers of English for guidance in developing criteria for assessing their programs? STC is now building what the association identifies as a body of knowledge. Will that form the basis for assessment criteria? Or will program and course directors turn to standardized testing (such as the DANTES Subject Standardized Test of Technical Writing, discussed by Norbert Elliot)?
Or perhaps a better approach lies in the communication criteria of the Accreditation Board for Engineering and Technology (ABET). U.S. engineering programs have for years been assessed against ABET criteria, and the collection includes Michael Carter’s essay on how technical communication can aid engineering programs undergoing ABET assessment.
A second problem in assessment is what to do with the results. Chris Anson presents two case studies of how assessment can be used to modify how writing is taught. Both cases are of academic departments concerned about how their students are writing. One department houses a technical communication program (English) and the second houses chemical engineering. Anson shows how assessment leads to modifications in how the students learn to write. With this approach, he sets the tone for the essays to follow: Assess in a variety of ways, but apply the results to enhance student learning.
The anthology includes an essay on assessing a graduate program. Coppola and Elliot describe an empirical study and the subsequent report of a technical communication graduate program at the New Jersey Institute of Technology. What is interesting is that this graduate program is 100% online, thus giving those who teach online courses insights into how they can be assessed.
Jeffrey Jablonski and Ed Nagelhout offer another model for evaluating instruction. They wanted to know how their students were reacting to their course Web site, so they brought in a consultant to conduct usability studies of the site and focus groups with the instructors. In his response, William Hart-Davidson broadens the issue to the question of lifelong learning and the role technology plays in the continuing instruction offered to graduates.
Doreen Starke-Meyerring and Deborah Andrews address an issue that could have relevance in nonacademic settings: evaluating virtual teams in a cross-cultural environment. Most assessment techniques and methods are aimed at traditional approaches that exclude cross-cultural situations. The authors describe a partnership for a business communication course offered at McGill University in Canada and the University of Delaware. Although the teams were to address problems identified in specific businesses, the real focus was on developing a shared culture in which effective communication leads to documents that solve specific problems.
Sam Dragga’s afterword describes the ethical situation in developing assessment tools and methods in a general rather than a specific sense. Still, his description of ethical situations does reinforce the point that you should be ethical when you assess.
A word of warning to those not used to academic prose: This group of essays is written by academics for academics, in a style and language that non-academics will find maddeningly complex.
So if you are an academic charged with assessing your technical communication course or program, the collection offers you considerable detail on how to do that assessment and what to do with the results. Training personnel in business, industry, and government could find suggestions in the collection for evaluating their technical communication training programs, but they would have to contend with the academic style. For them, it might be a good book to have in the company library; for technical communication program administrators, the collection is a necessity.
Tom Warren
Tom Warren is an STC Fellow, a winner of the Jay R. Gould Award for teaching excellence, and professor emeritus of English (technical writing) at Oklahoma State University, where he established the BA, MA, and PhD technical writing programs. Past president of INTECOM, he serves as guest professor at the University of Paderborn, Germany.

