Introduction: Assessment

The basic tenet of this topic is that assessment should reflect the type of knowledge teachers want students to develop. Earlier topics have emphasised the importance of deep conceptual understanding; of usable, applicable and flexible knowledge; of problem-solving and thinking strategies; and finally of metacognitive strategies to monitor learning. If all these types of knowledge and strategies are important, and knowing that students often concentrate on what will be assessed, then assessment should test these forms of knowledge. In addition, it is important that assessment procedures are explicit, so that students know exactly what is required and how they can achieve it (resources available, criteria for assessment, etc.), as well as how they are currently progressing toward that outcome.

In the first reading, Ramsden (1992) discusses various models of assessment and their relationships to cognitive (e.g., students’ approaches to learning) and affective (e.g., students’ cynical and negative attitudes toward the subject matter; p. 186) factors. The importance of feedback is emphasised. Some “rules” for better assessment in higher education are given. As you read this, reflect back to ideas presented in Brown and Knight (1994), particularly regarding summative and formative assessment.

The brief extract by Lytle and Schultz (1970) compares “current traditional” and “learner-centred” frameworks in assessment. This reading is now very old. Are these contrasting frameworks still evident in today’s adult learning settings?

The final reading (Smith & Keating, 1997) for this topic focuses on competency-based assessment (CBA). It examines CBA in the light of some concepts and basic principles associated with assessment. The nature, use and interpretation of assessment are some of the issues raised. As you read this, reflect on the different environments in which adult learning should occur.

Look at the assessment methods for Facilitating Lifelong Learning. Do you think they adequately reflect the specific objectives of the course? Could they be improved? If so, how? (As a matter of fact, it would be most appreciated if you sent your suggestions to the Unit Co-ordinator at the end of the semester!)


Ramsden, P. (1992). Assessing for understanding. In Learning to teach in higher education (extract pp. 181–197; 210–213). London: Routledge.

Lytle, S. L., & Schultz, K. (1970). Assessing literacy learning with adults: An ideological approach. In R. Beach & S. Hynds (Eds.), Developing discourse practices in adolescence and adulthood (extract pp. 368, 370). Norwood, NJ: Ablex.

Smith, E., & Keating, J. (1997). Assessment in a CBT system. In Making sense of training reform and competency-based training (pp. 152–164). Wentworth Falls, NSW, Australia: Social Science Press.


Online students are encouraged to discuss the connections between the theories of assessment from this week's reading and the theory of collaborative learning. Please share your thoughts about how the theory of assessment agrees with or contradicts elements of collaborative learning.


As Ramsden (1992) argues, assessment is about several things at once; he notes the distinction between summative and formative assessment as part of "understanding the processes and outcomes of student learning, and understanding the students who have done the learning".

The implication here is that even with an individualistic assessment process, there is still a high level of collaborative learning if the assessment process is conducted with both summative and formative elements.

Smith and Keating (1997) use a specific example of assessment as part of competency-based training, which can best be described as the sort of assessment appropriate to the task the learner must carry out. That is, the assessment is of "real-world" situations. This dovetails with previous material on CBT and workplace learning (Hodgkinson and Hodgkinson, 2007), which emphasised the process of socialisation into the workplace community of practice as a matter of neglected importance.

A challenge exists for assessment methods on both counts (and that can also include this course!). Do the assessment methods match real-world circumstances? Do the assessment methods incorporate integration into a community of practice? To the degree that they do not reflect these collaborative realities, an assessment process may indeed be flawed.


This time of semester is filled with assignments, pending exams and practical assessment within the discipline of nursing. Next week the students will be doing their assessment in the school of nursing, and the mark will be competent, competent with assistance, or not competent (requiring the student to resit). This process is very long and students are very nervous doing it. As a tutor, assessor and facilitator working within the hospital setting with these students during the semester break, I do feel for them. I believe I am supportive and have explained this learning process in the unit scaffold. I promote critical thinking and self/peer assessment, and I would like to think I am instilling deep knowledge in the student nurses. This process is so important because, within my profession, this basic knowledge links through the whole of a nurse's career. In my case the motivation is intrinsic, because I feel so responsible for supporting this process of education within the discipline of nursing. I believe the assessment process used is fair and allows the students to display their knowledge, but it is also a reflection of the educator's ability to instill the urge to learn and to promote deep knowledge.


That assessment process sounds fairly challenging, although it is very interesting indeed that the grading method is similar to one derived from Vygotsky's ZPD, and matches previous discussions here on scaffolding and the cognitive apprenticeship process.

Continuous assessment, self and peer assessment, 'open-book' examinations and so forth are, of course, not advocated because they are 'easier', but rather because they are a better representation of the real-world situations that learners may find themselves in. In another forum that I am involved in, for students and alumni of the Chifley Business School, there has been a recent move away from the three-hour, closed-book, hand-written exam for various MBA courses. Some people justly claimed that this format did not reflect what they learned or what actual practice entailed. One memorable remark from a student was that the only time they had used a pen in the last decade, except to sign contracts, was in the Chifley exams.

Rather sensibly, the School has moved to 24-hour, open-book, take-home (or rather, downloaded) exams. Now some students are complaining that the exams (more like short-turnaround assignments) are too hard! The reality is that they simply reflect a common business case: "I need a report on X on my desk by tomorrow morning".

There are situations where real-time, practical assessment, with no reference material or support, is necessary. Piloting is one example which always comes to mind, and certainly nursing and surgery would fit as well. However, is there any other form of continuous and/or collaborative assessment prior to the exams and practical assessments? Does a student nurse (and for that matter an assessor/trainer) have a metric by which they can estimate their competence, and in which areas, prior to going into these exams?

(Kwarn's response)

It is encouraging, and a timely reminder for myself, to see you are considering the importance of critical thinking and self/peer assessment, and it is obvious that you are very passionate. Your comments regarding fair assessment are worthy of great consideration, as without fair assessment we cannot truly gauge a student's knowledge and ability. The Curriculum Framework's principles of assessment "require(s) assessment to be valid, educative, explicit, fair and comprehensive and based on explicit criteria. Assessment strategies will support a developmental approach to student learning. As part of assessment, students should know the outcomes and/or competencies they are striving for" (WA Curriculum Framework, 1997).

Likewise, in an adult learning context, "fair and just assessment tasks provide all students with an equal opportunity to demonstrate the extent of their learning. Achieving fairness throughout your assessment of students involves considerations about workload; timing and complexity of the task" (Griffith University, 2012).

It is my understanding that without fair and valid assessment opportunities, regardless of the context, learners are not given the most opportunity in which to demonstrate their learning and understanding.

Griffith University (2012) Assessment Matters. Retrieved from:

WA Curriculum Framework (1997) Our Youth, Our Future. Retrieved from:

Response to STP

That was a delightful combination of human expression and academic rigour.

This response will address the first part of the STP: the acknowledgement of the difficulties faced by junior academics marking papers, which raises three additional issues.

Firstly, how does this affect the quality of summative assessment? Is it not quite probable that, pressed for time, a marker will either miss some insightful nuances on the part of the student, or simply have to reduce their effective pay-per-hour to pick these up? Secondly, what does this mean for continuous assessment, especially given that it is widely considered better practice for formative assessment? Thirdly, what does this grim assessment mean for those of us who are "learning to be teachers"?

These are rhetorical questions of course, as we can readily ascertain what the probable answers are going to be. How can quality assessment of learners be carried out in such a manner that gives dignity to the life of the assessor? The following are some thoughts on the issue.

* More continuous assessment, but of an overall reduced word count. There are significant formative advantages to continuous assessment: the more continuous it is, the better informed the student is about how they are going. Reducing the total word count, however, is necessary to give the assessor additional time for evaluating, and to allow for the administrative overhead.

* Smaller class sizes. Ceteris paribus, an educator with a smaller class will be able to conduct a more thorough assessment of material submitted, even if the Federal Opposition spokesperson on Education does claim "class sizes are not that important". This does mean that education will cost more.

* Security of employment and improved remuneration. Teachers who are poorly paid and in insecure jobs are, like anyone else, likely to do a rushed job. Of course, teaching, like any service, suffers a "cost disease" as labour becomes scarcer relative to capital (Baumol and Bowen, 1966). Ultimately it comes down to how much we as a society supposedly value education.

These are just a few ideas to be thrown into the mix from one of the parts of the STP.

(Also, I think it is necessary to point out at this stage that I am *not* a member of the NTEU.)

Additional References

Baumol, W., & Bowen, W. (1966). Performing arts, the economic dilemma: A study of problems common to theater, opera, music, and dance. New York: Twentieth Century Fund.

RE: TOPIC 4B Assessment

From the Unit Guide: "Look at the assessment methods for Facilitating Lifelong Learning. Do you think they adequately reflect the specific objectives of the course? Could they be improved? If so, how? (As a matter of fact, it would be most appreciated if you sent your suggestions to the Unit Co-ordinator at the end of the semester!)"

The comments take into account the distinction between course objectives and learning outcomes (section 4A of the course), the "Unit Aims" in the Learning Guide, the "Learning Outcomes" in the same, and the assessment methods utilised in the course, along with the insights from various readings, especially Topic 4B (Assessment).

Assessment weights were as follows:

• 20% Participation (partially derived from self-assessment; includes LMS leadership and STP)
• 30% Final exam (open book, selection of two questions from an initial grouping of six)
• 50% Major project (potentially split into two components)

The Participation component of the course is the one most strongly aligned with the course readings on assessment practices. It provides both the potential to satisfy unit aims (LMS leadership can be seen as a type of scaffolding to introduce adult educators, and the STP contributes the opportunity for a significantly deeper critical reflection on one week's reading material) and the continuous assessment and feedback that is preferable for formative assessment. However, the summative component of this course is deferred when it probably could be introduced during the course itself; as it is not, a potential strength of continuous assessment is missing.

A requisite examination component was introduced for all Murdoch University courses in 1986, following a proposal from the Vice-Chancellor, despite significant opposition at the time. The ESTR committee accepted the suggestion, albeit with some notable flexibility of interpretation, which still seems to be the case. This course certainly does utilise this flexibility through an open-book examination with questions provided beforehand. However, the hand-written exercise still does not represent a particularly realistic method of evaluation. Also, because the exam is known to consist of two questions answered from a selection of three, drawn from an initial range of six, an avoidance orientation can mathematically be taken by researching only five of the questions. This does not encourage a full overview of the course material.
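The "research only five" claim can be checked exhaustively. A minimal sketch (the question numbering and function name are illustrative, not part of the Unit Guide): if the paper presents three of the six published questions and the student must answer two, then skipping one question still guarantees at least two answerable questions on any paper, whereas skipping two does not.

```python
# Verify the exam-strategy claim: the paper presents 3 of 6 published
# questions and the student answers 2.  Check how many researched
# questions are guaranteed to appear on the worst-case paper.
from itertools import combinations

QUESTIONS = set(range(6))                   # the six published questions
PAPERS = list(combinations(QUESTIONS, 3))   # every possible 3-question paper

def guaranteed_answerable(researched):
    """Minimum number of researched questions appearing on any possible paper."""
    return min(len(set(paper) & researched) for paper in PAPERS)

# Researching any 5 of the 6 questions always leaves at least 2 on the paper...
assert all(guaranteed_answerable(set(r)) >= 2
           for r in combinations(QUESTIONS, 5))
# ...but researching only 4 can leave just 1 answerable question
# (a paper containing both skipped questions).
assert any(guaranteed_answerable(set(r)) < 2
           for r in combinations(QUESTIONS, 4))
print("researching 5 of 6 suffices; 4 does not")
```

So five is the minimum that guarantees a full answer set, which is exactly the avoidance orientation the paragraph above describes.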

The Major Project is the most significant component of the course, and arguably too significant, given that the focus is inevitably quite narrow, based on the potential project questions. Again, the course satisfies a scaffolding-like approach to learning by outsourcing much of the initial evaluation to fellow learners, which certainly represents a skill that they must acquire. However, again, as summative assessment is not provided before the examination, it is somewhat difficult for a learner to ascertain how they are faring.

Overall then the following is recommended:

• 40% Participation (partially derived from self-assessment; includes LMS leadership and STP, with each week's contribution evaluated as continuous assessment)
• 30% Major project (a narrower study on a specific topic, with initial evaluation by fellow learners and summative assessment provided prior to examinations)
• 30% Final exam (24-hour open-book research paper, covering the entire course material - no need for sitting with invigilators!)