Reports from ASQA still indicate that at audit time, over 80% of RTOs continue to be deemed non-compliant when it comes to assessment. Why is this happening? There are a few key reasons why assessment continues to be the bane of auditors, compliance managers, and assessors alike, so let’s break it down.
Let me say at the outset: buyer beware when you purchase off-the-shelf materials. Most require you to contextualise and adapt the product for your audience, and there’s a big difference in the way materials and assessments need to be designed for apprentices who learn on the job versus international students with lower language, literacy, and numeracy skills. Additionally, auditor interpretations regularly change.
Designers don’t get the opportunity to sit in on audits, nor do they spend a lot of time applying the tools in the classroom. As such, even when a tool is compliant, it can be designed in a way that makes it difficult for assessors to understand or apply.
To make a well-rounded tool, you need a comprehensive and current understanding of the competency standards, the Principles of Assessment and Rules of Evidence, a clear understanding of who the tools will be applied to, and an operational understanding of how assessors will apply the tools. This is a rare combination of skills, and most people specialise in just one area. As such, it often takes a team to make it work. When you look at design prior to application, do these things:
Run through the materials with the team that will be delivering them:

- Do they understand the instructions? Will the students understand the instructions?
- What will be the timing on the assessments?
- Do you have all the resources required to apply the assessments?
- Do you need to adjust for your target group or delivery method?
- What evidence needs to be collected for each instrument?
- Does the type of assessment line up? For example, if you are assessing practical criteria or performance evidence, you should be seeing practical assessments, observations, or portfolios of evidence - not knowledge or written questions.
- What is the volume of assessment? Is once enough?
- How has the competency been interpreted? Is it in line with industry expectations?
- Is everything covered? Foundation skills? Assessment conditions?
Prior to the delivery of any new tool or instrument, you need to review the expectations with the team. Gain a clear agreement on how the tool is interpreted, how it meets the requirements, and how it will be applied.
If you have done all of the above, you’re starting on the right foot. You have a compliant tool that should work for your students and is broadly understood by the assessors, and all the resources to actually deliver the assessment. As noted above, the devil is in the detail, and by detail, I mean the application.

Even with compliant tools, the student audit model focuses on the application of the tools, gauging how the tool has been applied and whether evidence is collected in a way that reflects the student’s competency. As Javier Amaro noted just last week at the InSources VET Summit, when you are deemed non-compliant, the auditor is not saying that your students aren’t competent; they’re saying that you haven’t collected evidence in a way which demonstrates your students’ competency. And this is the case for a great many RTOs: great training, good assessment, but poor evidence.
Ask a trainer and assessor what they love about their job and you’ll be told the same thing 99% of the time: I love training, helping students learn, and making a difference. What do trainers and assessors hate? Paperwork. And yes, paperwork is how we evidence competency. We need to flip the focus so there’s more emphasis on the parts of the job that trainers want to do. So how do you tackle this?
First, you need to re-frame the conversation. Why do trainers collect evidence? Why does their paperwork need to be accurate? Not to meet a standard, not because they have to, but because we want to evidence competency. This matters because:
Our assessment decisions are regulated by law, therefore we should have the same level of confidence in our ability to evidence an assessment decision as we do when we sign a legal contract. Good assessment means that we’re reinforcing the learning and clearly evidencing the outcome. Students can use the assessment evidence as part of their portfolio development: photos of the amazing dishes they have created or products they have designed, projects evidencing their application of sound financial principles or latest industry trends, presentations which can be used as a template for the future. Good assessment is as meaningful as the training itself, creating a real sense of accomplishment and pride not only for the student, but also the assessor.
Second, you need to provide assessors with constant up-skilling to ensure their interpretations remain current: include them in audits and have them lead industry consultations. Remember my last article? Trainer competence = trainer confidence. You run a training organisation, so train your staff to know what is expected and give them the skills and knowledge to achieve it. Pick a Principle or Rule every fortnight and go through how a tool or instrument you have meets that requirement. You’ll figure out pretty quickly who’s walking the walk, and who’s just a lot of talk.
The last lesson builds on 1 and 2. There’s a lot of debate around terminology, but let’s not focus on semantics and rather focus on the process: moderation. Before you sign off on a new trainer, review their application of the tool and their assessment decisions:
- Have they used the tool correctly?
- Does the evidence meet the marking guides?
- Does the evidence meet industry and competency standards?
A “no” to any of these means two things: you’ll need to gap-train and/or reassess the student, and you’ll need to up-skill your new trainer. In the beginning, do it often, until you are comfortable with the outcomes.
When the new standards were released in 2015, there was a real focus on validation. Regulators wanted RTOs to be checking themselves to ensure their trainers were doing what’s required. Take completed assessment tools and evidence of the assessment decision, and ask the same questions as above. The team that does this (and I say team because solo validations are rarely as valuable) needs both industry and VET competencies and currencies, and must not have been involved in the assessment decision.
Moderation and validation are the mechanisms by which you confirm your staff are competent and current; they are also how the regulator confirms the same thing. So once again, do it often until you are comfortable with the outcomes, then do it regularly to maintain that confidence.
You don’t need the world’s most complicated system, but you do need a little bit of planning (you can build this into your SMS and workflows), some regular training (yes, this should be included in all trainer and RTO calendars and PD plans), and a lot of healthy communication. Get this combination right and you’re not only future-proofing your RTO, but also on track to deliver quality training and assessment that is compliant and meaningful for your staff and students.
About the author: Lauren Hollows is founder and CEO of Understand TAE, the comprehensive resource site dedicated to giving VET practitioners access to industry knowledge and products. You can read more of Lauren’s VET insights on her blog, or connect with her on Twitter and LinkedIn.