



Smart Surveys

November 3, 2016

How being smart about surveys can help with your ASQA auditing processes.

It is a well-known business adage that keeping customers is less expensive than acquiring new ones. The same concept can also apply to training organisations. If certain practices drive students away, a training organisation has to repeatedly spend time and money on advertising and other efforts to recruit more students.

The obvious answer to this conundrum is to keep students happy enough so that they’ll keep returning to your organisation. However keeping students happy is easier said than done. How do you determine if a student is satisfied with your service? How do you identify the point where they became dissatisfied with your organisation or the training they’ve received?

Considering that ASQA's audit process will primarily revolve around the student and their experiences in 2017, implementing methods to measure student satisfaction levels is going to be increasingly important for RTOs.

While there are some obvious indicators of customer satisfaction beyond survey data, such as changes in sales volumes, customer complaints or anecdotal feedback, surveys are the easiest and most effective way to gain insightful qualitative and quantitative feedback (Hague & Hague, 2016).

The survey tool-kit for measuring customer satisfaction boils down to three options, each with its advantages and disadvantages. These tools are not mutually exclusive and are often used alongside each other. The three options are postal/electronic surveys, face-to-face interviews and telephone interviews. All of them may revolve around the same list of questions; however, the delivery of each is different and suited to different situations. Electronic surveys are the cheapest and least time-intensive option, but it is harder to gain as much in-depth insight into a student's experience through an electronic survey as it would be through a face-to-face interview, for example.

Smart Survey, a popular digital survey solution, encourages businesses to incorporate all three methods in order to get an accurate satisfaction measurement. Thankfully, there are many digital survey tools that can assist in setting up these different methods and reporting on the results received. There are also student management systems, including aXcelerate, that can send out pre- and post-program surveys automatically for each course an RTO delivers. The results can then be easily tracked, illustrated through graphs and reported on, giving an RTO effective statistics to show during an audit.

However, survey results are only as good as the questions asked. So what questions should you include in order to receive accurate satisfaction data?

Market research specialists B2B International recommend initially basing the questions around key interactions with a customer: for example, questions covering the initial sales process (first hearing about the service, getting in touch with the organisation) through to purchasing the service and using it.

Luckily for RTOs, ASQA have identified the key phases of the student experience and thus the areas RTOs need to gather data from. These are: 

  • Marketing and Recruitment
  • Enrolment
  • Support and Progression
  • Training and Assessment
  • Completion

When developing questions based on these key areas, Smart Survey also recommends using rating scales so that answers are as measurable as possible. An example of this is asking a student to rate the effectiveness of their trainer from one to five. The organisation can then determine the parameters of this rating scale in order to get an accurate measurement of overall effectiveness: a five could equate to highly effective, a four or three to somewhat effective, and a two or one to ineffective.

If forty percent of a class gave the trainer a rating below a three for example, the RTO can identify the particular area of the student experience which is causing client dissatisfaction.
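The bucketing described above is straightforward to automate. As a minimal sketch (the band labels and the one-to-five mapping are the example bands from this article, not a fixed standard):

```python
from collections import Counter

def effectiveness_summary(ratings):
    """Summarise one-to-five trainer ratings into the bands described
    above: 5 = highly effective, 4-3 = somewhat effective,
    2-1 = ineffective. Returns the percentage share of each band."""
    bands = Counter()
    for r in ratings:
        if r == 5:
            bands["highly effective"] += 1
        elif r in (3, 4):
            bands["somewhat effective"] += 1
        else:
            bands["ineffective"] += 1
    total = len(ratings)
    return {band: round(100 * n / total) for band, n in bands.items()}

# A class of ten students where four rated the trainer below a three:
summary = effectiveness_summary([5, 5, 4, 3, 4, 3, 2, 1, 2, 2])
print(summary)  # the 40% "ineffective" share flags the problem area
```

A student management system or survey platform would typically produce this breakdown for you; the point is that once ratings are numeric, a dissatisfaction threshold like the forty-percent example above can be checked automatically for every course.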

Smart Survey then recommends following a rating-scale question with an option to provide qualitative feedback, for example: "Why did you give this rating for trainer effectiveness?" This qualitative information provides the trainer with greater depth and helps pinpoint the moment dissatisfaction occurred during the training delivery. This is crucial, as it is from this point that an RTO can develop effective steps to resolve the issue in future.

Another important indicator of customer satisfaction which can be applied to surveys is the Net Promoter Score (NPS). This score is based on the fundamental perspective that every company's customers can be divided into three categories: promoters, passives and detractors.

Promoters are loyal enthusiasts who keep buying from a company and urge their friends to do the same. Passives are satisfied but unenthusiastic customers who can be easily wooed by the competition, and detractors are unhappy customers trapped in a bad relationship.

The thought process behind the NPS concept, according to management consulting firm Bain & Company (2016), is that a customer (or a student in this context) is likely to be highly satisfied with the service they've received if they are willing to recommend it to friends.

Bain & Company also claim that the NPS score is a key way to gauge the efficiency of a company’s growth and overall performance; the more promoters a company has the greater the likelihood of leads (or student inquiries in this context).

Constructing an NPS survey is very simple and boils down to one essential question: How likely are you to recommend our company/training course to another student?

This question uses a scale from zero to ten. Students who answer from zero to six are classed as detractors, students who answer seven or eight are classed as passives, and students who answer nine or ten are classed as promoters. Once these answers are gathered, the organisation calculates the percentage of respondents in each category and subtracts the percentage of detractors from the percentage of promoters to get the overall NPS. This is an easily trackable score that an organisation can apply to its overall performance, or to a particular course, assessment piece or training team.
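The calculation itself is a one-liner once responses are collected. A minimal sketch, using the standard zero-to-ten NPS bands:

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 ratings.

    Detractors score 0-6, passives 7-8, promoters 9-10;
    NPS = % promoters minus % detractors (a value from -100 to 100).
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Ten responses: five promoters, three passives, two detractors
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # → 30
```

Because the score is a single number, it is easy to track per course or per trainer over time, which is what makes it useful as an audit-ready performance indicator.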

Bain & Company believe that incorporating an NPS element into surveys, alongside standard rating scales, will ensure an accurate insight into the satisfaction levels of customers or students. These types of questions can be easily built into many survey platforms and student management systems, so RTOs can better manage their student experiences and be well prepared for the upcoming changes to the ASQA audit process.

For more information on these upcoming ASQA changes, you can visit their website.

For more information on NPS and how you can apply it to your survey process, you can visit Bain & Company’s website.
