Posted by Jim Farrell

Informal Learning. Those two words are everywhere. You might see them trending on Twitter during a #lrnchat, dominating the agenda at a learning conference or gracing the pages of a training digest. We all know that informal learning is important, but measuring it can often be difficult. However, difficult does not mean impossible.

Remember that in the 70-20-10 model, 70 percent of learning results from on-the-job experiences, 20 percent comes from feedback and the examples set by the people around us, and the final 10 percent comes from formal training. No matter how much money an organization spends on its corporate university, 90 percent of learning happens outside a classroom or formal training program.

So how do we measure the 90 percent of learning that is occurring to make sure we positively affect the bottom line?

First is performance support. Eons ago, when I was an instructional designer, the courseware and formal learning received most of the attention. Looking back, we missed the mark: although the projects were deemed successful, we likely did not have the impact we could have had. Performance support is the informal learning tool that saves workers time and leads to better productivity. Simple web analytics can tell you which performance support resources are searched for and used most on a daily basis.
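As a rough sketch of what those analytics might look like, the example below assumes a hypothetical export of search queries from a performance-support site's logs (the queries and log format are invented for illustration). Counting query frequency is often enough to surface which resources workers actually lean on:

```python
from collections import Counter

# Hypothetical export of search queries pulled from a
# performance-support site's web analytics logs.
search_log = [
    "reset customer password",
    "expense report form",
    "reset customer password",
    "vpn setup",
    "expense report form",
    "reset customer password",
]

# The most frequent queries point to the support content
# employees rely on in their day-to-day work.
top_topics = Counter(search_log).most_common(2)
print(top_topics)  # -> [('reset customer password', 3), ('expense report form', 2)]
```

In practice the same tallying can be done directly in most analytics dashboards; the point is simply that usage data for performance support is cheap to collect and easy to rank.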

But on to what I think Questionmark does best: that 20 percent that occurs through feedback and the examples around us. Many organizations have turned to coaching and mentoring to give employees good examples and to define the competencies necessary to be a great employee.

I think most organizations are missing the boat when it comes to collecting data on this 20 percent. While coaching and mentoring are a step in the right direction, they probably aren't yielding good analytics. Yes, organizations may use surveys and interviews to measure how mentoring closes performance gaps, but how do we get employees to the next level? I propose the use of observational assessments. By definition, observational assessments enable measurement of participants' behavior, skills and abilities in ways not possible via traditional assessment.

By having a mentor observe someone perform a task while applying a rubric to that performance, you gain not only performance analytics but also the ability to compare individuals against one another or against agreed benchmarks for the task. Feedback collected during the assessment can also be displayed in a coaching report for later debriefing and learning. And to me, that is just the beginning.
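To make the benchmark comparison concrete, here is a minimal sketch (the criteria, scores, and benchmark values are invented for illustration, not taken from any real rubric) of how a mentor's rubric scores could be compared against an agreed standard:

```python
# Hypothetical rubric: each criterion scored 1-5 by the observing mentor.
rubric_scores = {"greets customer": 4, "diagnoses issue": 3, "explains fix": 5}

# Agreed benchmark for a competent performance of the same task.
benchmark = {"greets customer": 4, "diagnoses issue": 4, "explains fix": 4}

# Gap per criterion: negative values flag coaching opportunities,
# positive values flag strengths worth highlighting in the debrief.
gaps = {c: rubric_scores[c] - benchmark[c] for c in benchmark}

# A simple average gives one number for comparing individuals.
overall = sum(rubric_scores.values()) / len(rubric_scores)

print(gaps)     # per-criterion comparison for the coaching report
print(overall)  # overall score for benchmarking against peers
```

Even this simple structure yields both the per-criterion feedback a coaching report needs and an aggregate score that can be tracked over time or compared across the team.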

Developing an observational assessment should go beyond the tasks someone performs in their day-to-day work. It should embody the competencies necessary to solve business problems. Observational assessments allow organizations to capture performance data and measure the competencies necessary to push the organization to be successful.

If you would like more information about observational assessments, click here.