Latest Resources

Bridge the Gap from Training to Application with Predictive Learning Analytics

by Ken Phillips, CEO, Phillips Associates

In my previous blog post, I discussed the concept of scrap learning and how it is arguably the number one issue confronting the L&D profession today. I also provided a formula you could use to estimate the cost of scrap learning associated with your training programs.

In this post, I’ll share with you a revolutionary methodology I’ve been working on for the past several years called Predictive Learning Analytics™ (PLA). The method enables you to pinpoint the underlying causes of scrap learning associated with a training program. It consists of three phases and nine steps that provide you with the data you need to take targeted corrective actions to maximize training transfer (see figure below).

While the specific questions and formulae for the scores are proprietary, I hope you can apply the concepts in your organization using your own survey questions and your own weighting for the indexes. Even if you adopt a simpler process, the concepts will guide you and the article will give you an idea of what is possible.

Unlike other training transfer approaches which focus mostly on the design and delivery of training, PLA offers a holistic approach to increasing training transfer. Built on a foundation of three research-based training transfer components and 12 research-based training transfer factors (see chart below), PLA targets the critical connection among all these elements. In short, PLA provides L&D professionals with a systematic, credible and repeatable process for optimizing the value of corporate learning and development investments by measuring, monitoring, and managing the amount of scrap learning associated with those investments.

Training Transfer Components & Training Transfer Factors

Phase 1: Data Collection & Analysis

The objective of phase one, Data Collection & Analysis, is to pinpoint the underlying causes of scrap learning associated with a training program using predictive analytics and data. Five metrics are produced and provide L&D professionals with both direction and insight as to where corrective actions should be targeted to maximize training transfer. The five measures are:

  • Learner Application Index™ (LAI) scores
  • Manager Training Support Index™ (MTSI) scores
  • Training Transfer Component Index™ (TTCI) scores
  • A scrap learning percentage score
  • Obstacles preventing training transfer

Data for calculating the first three measures (LAI, MTSI, and TTCI scores) is collected from program participants immediately following a learning program using a survey. The survey consists of 12 questions based on the 12 training transfer factors mentioned earlier. Data for calculating the final two measures is collected from participants 30 days post program, using either a survey or focus groups, and consists of the following three questions:

  1. What percent of the program material are you applying back on the job?
  2. How confident are you that your estimate is accurate?
  3. If you’re not applying 100 percent, what obstacles prevented you from applying all that you learned?

Waiting 30 days post program is critical because it allows for the “forgetting curve” effect—the decline of memory retention over time—to take place and provides more accurate data.
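
The specific survey items and index weightings are proprietary, so here is only a minimal sketch of how you might compute your own weighted index score (an LAI-style score, for example) from a 12-question post-program survey. The 1-to-5 rating scale, equal weights, and 0-100 scaling are assumptions for illustration, not the PLA formulae.

```python
# A minimal sketch, not the proprietary PLA formulae: compute a weighted index score
# (scaled 0-100) from post-program survey responses rated on a 1-to-5 agreement scale.

def index_score(responses, weights):
    """responses: {question_id: rating 1-5}; weights: {question_id: relative weight}."""
    total_weight = sum(weights[q] for q in responses)
    weighted = sum(weights[q] * (responses[q] - 1) / 4 for q in responses)  # map 1-5 onto 0-1
    return 100 * weighted / total_weight

# Hypothetical example: one participant's answers to a 12-question transfer survey,
# equally weighted (replace with your own questions and weighting scheme).
answers = {f"q{i}": r for i, r in enumerate([4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 3, 4], start=1)}
weights = {q: 1.0 for q in answers}
print(f"Learner Application Index (illustrative): {index_score(answers, weights):.0f}")
```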

LAI Scores

LAI scores predict which participants attending a training program are most likely to apply, at risk of not applying, and least likely to apply what they learned in the program back on the job. Participants who fall into the at-risk and least-likely-to-apply categories are prime candidates for follow-up and reinforcement activities. Examples include email reminders, micro-learning or review modules, and coaching or mentoring to try to move them into the most-likely-to-apply category.

MTSI Scores

MTSI scores predict which managers of the program participants are likely to do a good or poor job of supporting the training they directed their employees to attend. Managers identified as likely to do a poor job of supporting the training are prime candidates for help and support in improving their approach. This help might take the form of one-on-one coaching; a job aid explaining what a manager should do before, during, and after sending an employee to training; or creating a training program which teaches managers how to conduct pre- and post-training discussions with employees.

TTCI Scores

TTCI scores identify which of the three training transfer components and the 12 training transfer factors affiliated with them are contributing the most and least to training transfer. Any components or factors identified as impeding or not contributing to training transfer are prime candidates for corrective action.

Scrap Learning Percentage

The scrap learning percentage score identifies the amount of scrap learning associated with a training program. It provides a baseline against which follow-up scrap learning scores can be compared to determine the effect targeted corrective actions had on increasing training transfer.
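
If you build your own version, a simple way to turn the two 30-day survey questions above into a program-level scrap learning percentage is to discount each participant’s application estimate by their stated confidence and average the results. The adjustment below is an assumption for illustration, not the proprietary PLA calculation.

```python
# Illustrative only: derive a program-level scrap learning percentage from the
# 30-day survey responses (percent applied, percent confident) of each participant.

def scrap_learning_pct(responses):
    """responses: list of (pct_applied 0-100, pct_confident 0-100) per participant."""
    adjusted = [applied * confident / 100 for applied, confident in responses]
    avg_applied = sum(adjusted) / len(adjusted)
    return 100 - avg_applied

# Hypothetical data from five participants.
survey = [(70, 80), (50, 90), (40, 75), (80, 60), (60, 85)]
print(f"Estimated scrap learning: {scrap_learning_pct(survey):.0f}%")
```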

Obstacles Preventing Training Transfer

The obstacles data identifies barriers participants encountered in the 30 days since attending the training program that prevented them from applying what they learned back on the job. Waiting 30 days to collect the data allows the full range of training transfer obstacles to emerge. For example, some are likely to occur almost immediately—I forgot the things I learned—while others are likely to occur later—I never had an opportunity to apply what I learned. Frequently mentioned obstacles are prime candidates for corrective actions to mitigate or eliminate them.

Phase 2: Solution Implementation

The objective of phase two, Solution Implementation, is to identify, implement, and monitor the effectiveness of corrective actions taken to mitigate or eliminate the underlying causes of scrap learning identified during phase one. Here is where the “rubber meets the road,” and you have an opportunity to demonstrate your creative problem-solving skills and your ability to manage a critical business issue to a successful conclusion. Following the implementation of the corrective actions, it is time to recalculate the amount of scrap learning associated with the training program. You can then compare the results to the baseline scrap learning percentage calculated during phase one.

Phase 3: Report Your Results

The objective of phase three, Report Your Results, is to share your results with senior executives. Using the data you collected during phases one and two, it is time to show that you know how to manage the scrap learning problem to a successful conclusion.

In Sum

Scrap learning has been around forever; what is different today is that there are now ways to measure, monitor, and manage it. One of those ways is through Predictive Learning Analytics™. Alternatively, you might employ the concepts to build your own simpler model. Either way, we have an opportunity to reduce scrap learning.

If you would like more information about the Predictive Learning Analytics™ methodology, email me at: ken@phillipsassociates.com. I have an ebook that covers the method and a case study illustrating how a client used the process to improve the training transfer of a leadership development program.

Business Alignment: A Critical Success Factor for L&D Organizations

by Peggy Parskey, Associate Executive Director, Center for Talent Reporting


If you have attended a Learning and Development (L&D) industry conference within the past three years or listened to a panel of senior L&D leaders, it’s likely that someone has raised the topic of alignment. Numerous blogs from ATD, SHRM or even technology vendors confirm that alignment is clearly top of mind.

Given the attention paid to the topic of alignment, you might think that L&D has nailed down the principles, process, and outputs of business alignment. Unfortunately, you would be wrong. We haven’t nailed down this process by a long shot. And therein lies the problem: The L&D industry does not have a consistent definition of what alignment is, let alone how to achieve it.

Do a quick Google search on L&D business alignment and you will get thousands of articles on the topic. Click some of the links and you will find as many suggested approaches to alignment as search results. Some authors suggest that business alignment is about getting the right KPIs to support business goals. Others suggest that alignment is about doing a gap analysis between the current and future state to identify the needed training programs. Some bloggers recommend conducting interviews with business leaders as input to an L&D strategy. And a few organizations with whom we have worked view alignment as a simple mapping exercise: “We need to grow sales this year, we have a bunch of sales programs. Voila, Alignment!”

With all of these possible approaches, how should an L&D leader navigate the alignment process?

Principles of Effective Alignment

At least four principles should guide leaders who are grappling with achieving business alignment:

  1. Alignment isn’t a nice to have, it’s a must have
  2. Alignment requires engagement and commitment from both L&D and business leaders
  3. Alignment has both strategic and tactical components
  4. Alignment isn’t simply about what L&D offers but also how it is offered

1. Alignment Isn’t a Nice to Have

Strategic and tactical alignment is critical to ensure that L&D is investing its scarce resources in the right place at the right time on the right programs. When L&D doesn’t execute this process or doesn’t manage it effectively, the consequences can be significant. If L&D delivers the wrong programs to the wrong audiences, the organization will lack the needed capability to achieve its goals. Furthermore, other organizations now have to pick up the slack created by L&D. Non-L&D functions may develop shadow learning organizations, or hire external resources to fill the gap left by L&D. If L&D wants to fulfill its purpose to develop the needed capability for the current and future workforce and improve performance, then effective alignment is the critical success factor.

2. Alignment Requires Mutual Engagement

Discussions about L&D alignment often imply that the business has a peripheral role to play. L&D leaders get the strategic goals from above or they interview a few business leaders to understand priorities. At that point, the business seems to disappear from view.

If L&D leaders want to achieve effective alignment, the business can’t be a bystander. Senior business leaders must engage with L&D not only to communicate their priorities but also to ensure L&D has a role to play in achieving those priorities. Moreover, the business is an influential voice to ensure that L&D has the appropriate resources and support to deliver on its promise. Furthermore, the business will have the primary responsibility for reinforcement, without which there is likely to be little application to the job. If the business doesn’t understand its role, then it is incumbent on L&D leadership to spell it out and create shared accountability for success.

3. Alignment Has Both Strategic and Tactical Components

The alignment process must start at the strategic level. Business leaders establish strategic priorities, set goals, and then allocate resources to achieve them. Based on these priorities, L&D then determines if and where it plays a role. If the organization plans to launch a new product line, L&D and the business must agree that L&D should own the process to train employees on these new products. If the organization wants to capture a new demographic for its products, the business and L&D may agree that L&D has a minimal role to play. Regardless of the objective, business and L&D leaders need to clarify if L&D has a role as well as the importance and urgency of that role.

Having reached agreement on the strategic priorities, L&D must also ensure it aligns tactically. As an example, imagine that $500K is allocated to develop customer experience competencies. L&D practitioners then dive into the details to assess specific organizational needs. During this process, performance consultants discover that a primary root cause of underdeveloped customer experience capability is the lack of quality resources for call center personnel. At this point, L&D discovers that its requirements have changed. Training, while necessary, is not at the heart of the underperformance. L&D may then simply turn over the requirement to the business to develop the needed content for call center employees and invest its scarce resources elsewhere.

The point of this example is that what appears to be a development need at the strategic level may not translate into a development requirement when practitioners study the requirements more fully. It is incumbent on L&D practitioners to adjust their approach to ensure they stay aligned.

4. Alignment Isn’t Just About What L&D Offers

Alignment isn’t just about what programs L&D offers, but increasingly, how it offers them. In a fast-paced world, instructor-led training (ILT) has a long development cycle, is expensive to produce, and requires a large time investment for learners. Yet, according to ATD’s 2018 State of the Industry Report, ILT methods still comprise 67% of all training hours with self-paced at 29% and mobile learning a mere 2%.

Learners need content at their fingertips. They need a rich reservoir of material in different forms (white papers, how-to guides, videos, checklists, case studies) that are easy to find and easy to consume. Content must be high quality and useful on the job.

Alignment requires not simply that L&D builds capability and improves performance, but also that it does so in the most efficient and effective manner appropriate to the need.

Conclusion

Alignment is critical to ensure the L&D function meets the needs of the business by building the necessary skills to run the business and achieve its strategic objectives. This post focused on the key principles at the heart of alignment. In subsequent posts we will explore the importance of treating alignment as a continuous process and building skills to manage the end-to-end process effectively.

I recommend you learn more about alignment from respected industry leaders who can provide guidance on how to achieve meaningful alignment with the business. Below are four resources you should check out:

The Business of Learning by Dave Vance. See Chapter 4 which discusses strategic alignment in depth.

Attend the CTR Measurement and Reporting Workshop, which addresses the importance of alignment and how to engage business partners in the process.

Read the recently published IDC PlanScape document on the importance of L&D alignment to maximize the impact of training investment.

Read the upcoming 2nd Edition of “Learning Analytics,” which will be published in February 2020. (The current edition may be found here. Look for the second edition at www.explorance.com in February 2020. The second edition has a chapter devoted to the topic of strategic and tactical alignment.)

We’d like to hear from you. If you’re having business alignment challenges, don’t hesitate to email Dave Vance or Peggy Parskey.

The Greatest Issue Facing L&D Today

Scrap Learning

by Ken Phillips

What is arguably the top issue facing the L&D profession today?

The answer is scrap learning: the gap between training that is delivered and training that is actually applied back on the job. It’s the flip side of training transfer and is a critical issue for both the L&D profession and the organizations L&D supports because it wastes money and time—two precious organizational resources.

Now, you might be wondering, “How big is the problem?”

Two empirical studies, one by KnowledgeAdvisors in 2014 and one by Rob Brinkerhoff and Timothy Mooney in 2008, found scrap learning to be 45 percent and 85 percent respectively in the average organization. To add further credibility to these percentages, I’ve conducted three scrap learning studies over the past few years and found the scrap learning percentages associated with three different training programs in three separate organizations to be 64 percent, 48 percent, and 54 percent respectively. Averaged together, these five research studies yield a scrap learning figure of approximately 60 percent for the average organization.

To further highlight the magnitude of the scrap learning problem, consider its effect on the amount of wasted organizational dollars and time. According to the 2018 ATD State of the Industry report, the average training expenditure per employee in 2017 was $1,296, and the average number of training hours consumed per employee was 34.1. Using the KnowledgeAdvisors and Brinkerhoff scrap learning percentages mentioned above, you can see in the table below just how much scrap learning costs the average organization in wasted dollars and time.

Cost of Scrap Learning in Wasted Dollars & Time

  • Average per-employee training expenditure: $1,296 × 45% scrap learning = $583 wasted
  • Average per-employee training expenditure: $1,296 × 85% scrap learning = $1,102 wasted
  • Average training hours consumed per employee: 34.1 × 45% scrap learning = 15 wasted hours
  • Average training hours consumed per employee: 34.1 × 85% scrap learning = 29 wasted hours

Taking all of this data into account reminds me of James Lovell’s famous quote during his Apollo 13 mission to the moon, when an oxygen tank aboard the space capsule exploded, putting both the flight and crew in great peril: “Houston, we have a problem!”

If you would like to take a crack at estimating the cost of scrap learning associated with one of your training programs, you can use the Estimating the Cost of Scrap Learning Formula below. To gain the most useful insight, you should make every effort to collect the most accurate data possible for each of the input variables. Also, when selecting an estimated percentage of scrap learning associated with the program (variable 4 in the formula), you should get input from several people familiar with the program, such as other L&D colleagues, participants who previously attended the program, or perhaps even the managers of program participants, and then compute an average of their estimates. Gaining the input of others will increase both the accuracy and credibility of the estimate and remove any concerns that the scrap learning percentage is merely your opinion.

Estimating the Cost of Scrap Learning Formula

Wasted Participant Dollars

The length of a learning program in hours _____
X the number of programs delivered over 12 months _____
X the average number of participants attending one program _____
X the estimated percent of scrap learning (45-85%) associated with the program _____
= the cost of scrap learning in wasted time (participant hours) _____
X the average hourly participant salary + benefits _____
= the cost of wasted participant dollars _____ (A)

Wasted L&D Department Dollars

Administrative expenditures (e.g., materials, travel, facility, facilitator, delivery platform, food, etc.) for one program _____
X the number of programs delivered over a 12-month period _____
X the estimated percent of scrap learning (45-85%) associated with the program _____
= the cost of wasted L&D department dollars _____ (B)

Total Cost of Scrap Learning

Cost of wasted participant dollars (A) _____
+ cost of wasted L&D department dollars (B) _____
= total cost of scrap learning _____
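
If you prefer to run the numbers programmatically, here is a small calculator that mirrors the worksheet above. The example inputs are hypothetical placeholders; substitute your own program data and scrap learning estimate.

```python
# A sketch of the Estimating the Cost of Scrap Learning Formula; all inputs are hypothetical.

def scrap_learning_cost(program_hours, programs_per_year, avg_participants,
                        hourly_cost, admin_cost_per_program, scrap_pct):
    """Return (wasted participant dollars, wasted L&D dollars, total cost of scrap learning)."""
    wasted_hours = program_hours * programs_per_year * avg_participants * scrap_pct
    wasted_participant_dollars = wasted_hours * hourly_cost                      # (A)
    wasted_ld_dollars = admin_cost_per_program * programs_per_year * scrap_pct   # (B)
    return wasted_participant_dollars, wasted_ld_dollars, wasted_participant_dollars + wasted_ld_dollars

# Example: an 8-hour program run 10 times a year with 20 participants per session,
# $60/hour fully loaded salary plus benefits, $5,000 of administrative cost per delivery,
# and an estimated 60% scrap learning rate.
a, b, total = scrap_learning_cost(8, 10, 20, 60, 5_000, 0.60)
print(f"Wasted participant dollars: ${a:,.0f}; wasted L&D dollars: ${b:,.0f}; total: ${total:,.0f}")
```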

With this information in hand, you are now ready to pinpoint the underlying causes of scrap learning and take targeted corrective actions to mitigate or eliminate these causes and maximize training transfer. How to do this will be part two of this blog article.

Change the Conversation about Funding Measurement

The L&D profession continues to struggle to get budget and attention for more robust measurement strategies. Learning professionals, particularly those with measurement or analytics responsibilities, appreciate the value of measurement and know that further budget would be well spent but often cannot convince senior leaders in learning to make the investment, let alone senior leaders outside learning. So, how do we make the case for more resources?

My advice is that we take a stealth approach. While some senior leaders will readily see the reason for more robust measurement, in my experience most will not. Senior leaders always have many more requests for budget and staff than they can grant, and typically measurement by itself doesn’t rise to the top of the priority list. And, since you’re already providing learning, measurement seems like an add-on or a ‘nice to have,’ but not something that is essential. This is especially true if most of the measurement currently being done is simply to report activity or historical results.

The alternative is to change the conversation. Instead of talking about measurement, talk about what will be required to deliver the planned results from the learning initiative. In other words—talk about management rather than measurement. Of course, you cannot manage without measures, so management will be the Trojan horse that brings measurement in. This approach makes measurement the means to the end. It also focuses on the most important purpose or use of measurement, which is to manage.

How would this work? Start with programs aligned to your organization’s key goals or needs. For these initiatives you need to partner closely with the goal owner, like the head of sales or manufacturing. Both parties need to agree on program specifics like learning objectives, target audience, and completion date. Most importantly, they need to agree on mutual expectations for the impact of the learning, which may be the isolated impact of the Phillips approach or the more subjective expectation of the Kirkpatrick approach.

In either case, both parties will need to agree on all the relevant measures and targets required to deliver the planned impact. These measures would include efficiency measures like number of participants, completion rate, completion date, and cost, as well as effectiveness measures like participant reaction, learning, and application rate. These measures will have to be managed to plan through the development, delivery, reinforcement, and impact stages in order to deliver the ultimate measure, which is impact. (For example: the program will have to be completed by all participants by April 30 with a 90 percent initial pass rate and an 85 percent application rate at 90 days.)
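
As a rough sketch of what managing these measures to plan can look like, the snippet below compares year-to-date results against agreed targets and flags anything that is off plan. The targets echo the example above; the year-to-date figures and the tolerance are invented for illustration.

```python
# Illustrative only: compare year-to-date results to the targets agreed with the goal owner
# and flag any measure that trails plan so corrective action can be taken early.

plan = {"completion_rate": 1.00, "initial_pass_rate": 0.90, "application_rate_90d": 0.85}
ytd  = {"completion_rate": 0.72, "initial_pass_rate": 0.93, "application_rate_90d": 0.78}

def off_plan(plan, ytd, tolerance=0.05):
    """Return measures whose year-to-date result trails plan by more than the tolerance."""
    return {m: (ytd[m], plan[m]) for m in plan if plan[m] - ytd[m] > tolerance}

for measure, (actual, target) in off_plan(plan, ytd).items():
    print(f"{measure}: {actual:.0%} vs plan {target:.0%} -- corrective action needed")
```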

Notice that you now have a robust measurement strategy that is an integral part of the management of this initiative. In fact, you will not be able to meet expectations without it, and consequently you should refuse to do the learning if a senior leader suggests stripping out the measurement. However, since measurement was not presented separately (no budget or staff identified for it, simply part of the plan), the whole issue of stripping it out is not likely to come up. You can follow the same approach for other learning initiatives, including those not directly aligned to the goals of the organization. In every case, identify a measure of success along with the relevant measures and targets required to deliver it.

This is the stealth approach—treat measurement as a means to the end—embed measurement in all key initiatives by getting agreement with goal owners and senior leaders upfront on the planned outcome and on targets for all the relevant efficiency and effectiveness measures. In addition to measures to ensure expectations are met, you may need to employ measurement and analytics upfront to better understand the issue, optimum learning solution, or modality. Again, measurement will be integrated into the management of the initiative. It will not be a stand-alone item and will not be presented as such.

You may already have a dedicated measurement group to serve as an integral support staff; however, every program manager needs to work with goal owners and leaders to agree on a measure of success and the relevant efficiency and effectiveness measures and their targets. They need to be comfortable with measurement. If you want to expand your measurement efforts, look for ways to do this as part of key programs and initiatives (the end) rather than just by expanding your measurement group (the means).

In conclusion, try changing the conversation from measurement (which senior leaders generally do not understand or value) to management. Don’t expect senior leaders to fund measurement or to help you with your measurement strategy; they really don’t care. Instead, engage them to manage their program to deliver planned results, which they do care about and which, by necessity, will include measures. Try this approach and see how it works for you.

 

2019 CTR Conference a Great Success

We are just weeks back from our Sixth Annual CTR Conference in Dallas. We weren’t sure it could top last year’s but I think it did. We had 117 participants and they were very enthusiastic and engaged. Lots of great sessions and discussions. And of course some delicious food in Grapevine!

Patti Phillips kicked it off with a talk on ROI, emphasizing how important impact and ROI are and how they can be calculated. She included some very interesting history and the story of how she met Jack. When we came back together later in the morning after breakouts, Peggy facilitated a panel on scrap reporting and measurement literacy, and then Brenda Sugrue, 2018 CLO of the Year, shared some thoughts from her career. Six more breakout sessions followed in the afternoon leading up to the awards ceremony at the end of the day.

CTR recognized Jack and Patti Phillips as the first recipients of the Distinguished Contributor Award, acknowledging their truly significant contributions to our field including the 1983 Handbook of Training Evaluation, the concepts of isolated impact and ROI, and the successful implementation of ROI in over 70 countries. Then we recognized the winners of the TDRp Excellence Awards: SunTrust Bank for L&D Governance, and Texas Health Resources and Heartland Dental for L&D Measurement. Jack and Patti recognized three winners for ROI studies as well.

Day two began with Steven Maxwell of the Human Capital Management Institute sharing his perspective on what CEOs and CFOs want from learning as well as his thoughts on the newly released Human Capital Reporting standards, both of which got people thinking. After six more breakout sessions, Justin Taylor, General Manager and EVP of MTM, led a panel on the impact of learning to wrap up the conference. With a focus on just the measurement, reporting, and management of L&D, everyone could concentrate on the issues of most concern to them and share ideas and questions with each other in detail. If you didn’t make this event, you missed hearing from industry thought leaders and leading practitioners.

Next year we are moving the conference to the fall, so mark your calendars for October 28-29, 2020. We will continue to begin CTR Week with a pre-conference workshop on measurement and reporting (October 26-27) and end the week with post-conference workshops. Next year will be the 10th anniversary of TDRp and we hope you can join us for the celebration.

2019 TDRp Award Winners

The Center for Talent Reporting is pleased to announce the following 2019 TDRp Award winners:

Distinguished Contributor Award

Congratulations to Jack and Patti Phillips, CTR’s first Distinguished Contributor Award winners. This award recognizes outstanding and significant contributions to the measurement and management of human capital. Particularly noteworthy are Jack’s groundbreaking book, Handbook of Training Evaluation, and his concepts on isolated impact and ROI. Jack and Patti’s ROI methodology has been implemented in hundreds of organizations in over 70 countries. They have forever changed the standard by which investments in human capital are measured.

2019 TDRp Award Winner for Learning Governance

Congratulations to SunTrust, CTR’s 2019 TDRp Award winner for Learning Governance. SunTrust earned this award for their best practices in running L&D like a business. In particular, SunTrust provided evidence that L&D participates in a continuous, robust, enterprise-wide governance process that ensures L&D has a voice and input into enterprise priorities and funding. The governance council, moreover, has clear guidelines for funding initiatives and exercises ongoing oversight through disciplined financial budgeting, reporting, and forecasting.

2019 TDRp Award Winners for Learning Measurement

Congratulations to Texas Health and Heartland Dental, who have been awarded the 2019 TDRp Award for Learning Measurement.

Texas Health has demonstrated a depth and breadth of commitment to measurement through a Learning and Education Cabinet which directs, guides, monitors, evaluates, and reports quarterly on learning according to strategic business goals. In addition, they have established learning-oriented measures not only at the program level but also at the CEO level. They have well-defined roles for measurement and strategic responsibilities. They leverage Metrics that Matter™ technology to organize programs into business-aligned portfolios that drive measurement planning and reporting.

Daniel Gandarilla, VP and Chief Learning Officer, said this about the importance of measurement to Texas Health: “Measurement is one of the most critical aspects of the learning process. It allows us to understand what has happened, the effect, and to tell a more compelling story of how we play a critical role in the success of the organization. The best stories can tell both quantitative results and qualitative experiences. This method of storytelling makes a compelling case and lets everyone know that learning is a part of the business.”

Heartland Dental has demonstrated a commitment to measurement through well-defined measurement roles, the use of a portfolio model to segment their offerings and drive consistent decisions about what, when and how to measure their solutions. They focus on linking learning to business outcomes and demonstrating the “so what” to their business partners.  In addition, they have established specific goals for their measures and incorporate their goals into their regular reporting. Finally, they leverage Metrics that Matter (MTM) to gather and report data to their stakeholders.

Meredith Fulton, Heartland Dental’s Director of Education, described their measurement journey: “Prior to establishing our robust measurement approach, we collected satisfaction and ad-hoc feedback from learners through simple “Smile Sheets”. We felt great about what we were doing, but we did not have a measurement strategy in place for collecting learner or manager feedback that could drive decision making for any course or program. Shifts in our organizational leadership and focus on company growth have helped pave the way to develop an evidence-based strategy.”

After working with MTM to establish a strong measurement process, Heartland has been able to see dramatic changes and increased savings in Learning & Development. “We have also been able to create efficiencies by automating data collection and reporting. By incorporating MTM data and insights into company metrics, we have been able to tell a deeper story of the impact of Learning & Development. Additional benefits include being able to socialize best-practice metrics from the programs across various stakeholders and business leaders. Our compelling insights, shown in custom reports, infographics, and dashboards, have allowed us to elevate the conversation and our involvement in company decisions,” said Fulton.

Human Capital Reporting Standards

The International Organization for Standardization (ISO) released its first standard for human capital reporting in December. It is titled “Human Resource Management – Guidelines for Internal and External Human Capital Reporting.” The document is 35 pages long with guidance for internal and external reporting by both large and small organizations. The standard is the culmination of a great deal of work by an ISO working group representing experts from numerous countries over the last three years, led by Stefanie Becker from Germany.

Although a number of measures still need to be defined in greater detail, the standard is a major achievement and an important first step towards bringing standardization of measures and reporting to the HR field. Accountants have long had definitions of measures and standard reports as well as guidance about how the measures should be used. This has served their profession well, and a similar discipline would serve us well. Imagine that in the not-too-distant future everyone in our field might be using the same language for measures and would share a common understanding of how to use the measures in reports to better manage their operations. Imagine the rich benchmarking and best practice sharing that this standardization and reporting would allow. And imagine the potential for university students in our profession to graduate with this common knowledge just as accountants do today.

Of course, there are compelling business reasons to move in this direction as well. Today about 84% of the value of S&P 500 firms is attributable to intangible assets which have little visibility on the balance sheet. Just forty years ago the percentage was roughly reversed, with 17% of the value in intangibles and 83% of the value in tangible assets. So, in the “old days” an investor could look at a balance sheet and get a good feel for the underlying value of a company; namely, its physical assets like buildings, equipment, land, and investments. Today, human capital drives the value of the intangible assets which make up most of the value of many companies. Human capital, however, does not appear on the balance sheet and appears on the income statement only as an expense to be minimized. This is why it is so important to provide visibility and transparency to human capital. Investors, customers, and employees need to know more about the human capital in an organization.

The standards are completely voluntary although the hope is that leading organizations will adopt them to provide this greater transparency for their investors and employees. The working group recognized that measurement and reporting can be burdensome for small organizations so only the most important and basic (and easy to calculate) measures are recommended for reporting. The working group also recognized that some measures, while important to properly manage human capital internally, may not be appropriate for public sharing and thus are recommended for internal use only.

There are 54 measures in all with 23 recommended for external reporting by large organizations. Many large organizations are already measuring most of these but not reporting them publicly. It is hoped they will begin using the ISO definitions as they become available and that they will begin to publicly report some of the measures. In some countries the government may mandate adoption of the ISO standard but that is not the case for the United States where organizations will be free to decide which, if any, of the recommended measures they report internally or externally.

Of the 23 measures recommended for public reporting by large organizations, there are five for recruitment and turnover, five for diversity, three for compliance, and three for health, safety and well-being. Productivity and workforce availability each have two, and cost, leadership and skills & capability each have one. The one for L&D (skills & capability) is total training cost which is recommended for external reporting for both large and small organizations. While not an ideal measure since it focuses on inputs rather than outcomes, it is at least a beginning and gives some indication of how much an organization values its employees.

Four other L&D measures are recommended for internal reporting by large organizations: percentage of employees who participate in training, average formal training hours per employee, percentage of employees who participated in training by category, and workforce competency rate. The first two are also recommended for small organizations.

I recommend that everyone in our profession become familiar with the standard and, if your organization is not already reporting these measures internally today, that you consider them for your measurement and reporting strategy going forward. In the future, investors and employees will begin asking for these, and you really should be reporting on them internally to best manage your human capital. You can read more about them here in Workforce magazine, and the standard is available for purchase from ISO or ANSI.

Making an Impact Through Learning

I hope you can join us in Dallas for our Annual Conference in five weeks. It will definitely be worth your time and investment. You will find a community of like-minded learning professionals who are passionate about the measurement, reporting and management of L&D. Leading industry thought leaders and practitioners will be there to share their thoughts and experiences with you. And, of course, you will learn a lot about what works and what doesn’t from fellow participants.

Our theme for this year’s conference is Making an Impact through Learning. We have selected speakers and topics to ensure that you will have the latest thinking from the best people in our profession on this always important topic. Like last year, we have scheduled plenty of time for networking so you can get your questions answered and learn from your colleagues.

Our keynoters this year are Patti Phillips and Jeff Higgins. I am sure you are familiar with Patti and Jack Phillips and their work on isolating the impact of learning and calculating ROI. (They will each also be conducting a breakout session.) You may not know Jeff, who has a very interesting background as both an accountant and CFO and as a senior HR executive, so he brings a unique perspective to our field. He has also been a contributor to the International Organization for Standardization’s (ISO) work on Human Capital Reporting standards, which are just now being implemented around the world. You will learn a lot just from these two speakers!

You will also have the opportunity to hear from the current CLO of the Year, Brenda Sugrue, who will share her thoughts on her personal journey and on EY’s transformation. Other industry thought leaders include Jack Phillips on ROI, Ken Phillips on measuring application, Roy Pollock on ensuring results, John Mattox on the future of measurement, Laurent Balague on the best use of surveys, Peggy Parskey, Adri Moralis, and Justin Taylor on reducing scrap measurement and reporting, Jeff Carpenter on data analytics and adaptive learning, Cushing Anderson on adding value, and Kevin Yates on impact measurement.

In addition, we have some great practitioners lined up to share with you including Gary Whitney (formerly with IHG), Toni DeTuncq and Teddy Lynch (NSA), Laura Riskus (Caveo), Mark Lewis (Andeavor), and Dean Kothia (HP).

Last, please do consider joining us for our pre- and post-conference workshops, which round out CTR Week. We offer a two-day workshop on measurement and reporting and one on the Six Disciplines of Breakthrough Learning. And after the conference we offer four half-day workshops. So, come early or stay after the conference to invest in your own development.

If you have come to one of our conferences or workshops in the past, welcome back. If this will be your first, we look forward to meeting you. Either way, you are an important part of this community and everyone will benefit from your questions and contributions. As you know, the Center for Talent Reporting is a nonprofit and our sole mission is to advance the profession. Each year’s CTR Week is an opportunity to come together, learn from each other, and grow. You will leave energized! Find out more about the conference and workshops at ctrconference.com. Hope to see you in Dallas!

A Look Back (and Ahead) on the Measurement and Management of L&D

The end of December is always a good time of year to take a look back, and a look ahead, at the L&D profession. Looking back, I think we are blessed to have some great thought leaders who have provided a terrific foundation for both the measurement and management of L&D.

On the measurement side, I am particularly thinking of Don Kirkpatrick, who gave us the Four Levels, and Jack Phillips, who gave us isolated impact for Level 4 and ROI for Level 5. I am also thinking about all of the work done by the Association for Talent Development (ATD) to promote measurement and to benchmark key measures through their annual industry survey.

On the management side, I’m grateful again for the contributions from Don Kirkpatrick and Jack Phillips and now, Jim and Wendy Kirkpatrick and Patti Phillips; as well as for their guidance in how to manage—particularly with respect to partnering closely with goal owners and focusing on what it takes to achieve Level 3 application. In addition, I appreciate the work by Roy Pollock and his associates in giving us The Six Disciplines of Breakthrough Learning, which is a must-read book for anyone focusing on the measurement and management of L&D.

And, there are many more—like Ken Phillips and John Mattox and others too. And beyond L&D, the HR profession, in general, has benefited tremendously from thought leaders like Jac Fitz-enz, Jeff Higgins, John Boudreau, and Wayne Cascio. Like Kirkpatrick and Phillips did for L&D, these thought leaders basically invented measurement for the rest of HR.

We are very fortunate to have this strong foundation built over the last 30+ years. Looking ahead, the question is, where do we go from here? As a profession, we now have well over 170 measures for L&D and over 700 for HR (in general). I don’t think we need more measures. What we do need, however, is a better way to utilize some of the measures we have—especially Levels 3 (application), 4 (results or impact), and 5 (ROI) for L&D. Level 3 is the starting point and should be measured for all key programs. Research by Phillips clearly indicates that CEOs want to see impact and ROI more than any other measures, which will become increasingly urgent as the next recession draws closer (pencil in 2020 or 2021 for planning purposes). While some progress has been made over the last 10 years, it is not enough, so this remains a high priority for the profession moving ahead.

Another priority for us is to do a much better job managing learning. By this I mean running learning with business discipline, which starts by partnering closely with goal owners and agreeing on specific, measurable goals or targets for the learning (assuming of course that learning has a constructive role to play in achieving the business goal). And, once specific plans have been made, to execute those plans with the same discipline your colleagues in other departments use. This requires monthly reports and comparing results to plan so that corrective action can be taken as soon as possible, to get back on plan and deliver promised results.

Managing learning this way is hard for many and some simply do not want accountability. But, it is an area where the payoff of better performance and greater value delivered per dollar is huge. In fact, I would contend that it has a bigger payoff than even measuring at Levels 3-5.

To summarize, I think there is an opportunity to structure our departments differently to enable better management overall. To develop a close partnership with goal owners (like the head of sales, for example) and to really run learning like a business, there needs to be one L&D professional in charge of the program(s) identified to meet the business need. This person would:

  1. Meet with the goal owner initially and oversee the needs analysis
  2. Get agreement up-front with the goal owner on specific measurable plans for the learning program as well as roles and responsibilities for both parties
  3. Supervise the design, development, and delivery of the program
  4. Meet regularly with the goal owner to manage the successful deployment of the learning, including reinforcement by leaders in the goal owner’s organization

I know this may be a challenge in some organizations, but I think it is indispensable for a successful partnership and for accountability within L&D.

I truly enjoy being a part of this great profession and the opportunity to work with all of you. We have come a long way in a relatively short period of time, and I believe the future is very bright if we continue to build on the foundation that has been laid to take learning measurement, reporting, and management to the next level.

I look forward to what we can accomplish working together! Happy New Year and Best Wishes for the coming year!

New Sample Reports for L&D Now Available

CTR publishes sample L&D reports to provide guidance and ideas to practitioners. We are happy to announce that we’ve completely updated and revised these reports to reflect our latest thinking and the evolution of Talent Development Reporting principles (TDRp).

Get the latest reports here.

If you’re a CTR member, you have access to the editable, Excel versions, which may be used as a starting point or template to customize for your own purposes. Non-members may download a PDF version of the reports to use as a reference.

What’s New

The revised resources include samples for both private and government sectors. Samples are also provided for qualitative, quantitative and mixed outcome measures, including:

  • Lists for effectiveness, efficiency and outcome measures
  • Statements for effectiveness, efficiency and outcome measures
  • Four different Summary Reports, each using different types of outcome measures
  • Four different Program Reports which vary in complexity
  • One Operations Report

Some Tips for Use

We recommend practitioners start by compiling lists of the measures they anticipate using and then create one list for each type of measure. If the measure will be managed to a target or plan during the year, then one of the three report types is recommended. This will be the case for most of your important measures.

If the measure is to answer a question, determine whether a trend exists, or provide basic information, then a statement or scorecard is appropriate. These contain actual results, usually by month or quarter.

In contrast, reports will include a plan or target for each measure as well as year-to-date results and, ideally, a forecast of where the measure is expected to end the year.
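
To illustrate the distinction, here is a minimal example of one hypothetical measure shown first as a statement (actuals only) and then as a report (plan, year-to-date, and forecast). The measure name and all numbers are invented.

```python
# Illustrative only: a statement holds actuals by period; a report adds plan, year-to-date
# results, and a forecast of where the measure is expected to end the year.

statement = {
    "measure": "Participants trained",
    "actuals_by_quarter": {"Q1": 410, "Q2": 385, "Q3": 430},   # history only, no target
}

report = {
    "measure": "Participants trained",
    "plan": 1_800,         # target agreed for the full year
    "ytd_actual": 1_225,   # results through Q3
    "forecast": 1_750,     # expected year-end result
}

print(f"{report['measure']}: plan {report['plan']}, YTD {report['ytd_actual']}, "
      f"forecast {report['forecast']} ({report['forecast'] - report['plan']:+} vs plan)")
```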

More Updates To Come

Work is underway to update the lists, statements, and reports for the other HR disciplines, like leadership development and talent acquisition. If you have any questions or would like more information about these reports, please contact David Vance, Executive Director, at DVance@CenterforTalentReporting.org.

 

Impact and ROI…The Discussion Continues

This month, I continue the discussion on ROI (Level 5) and impact (Level 4). After publishing last month’s article, some expressed that ROI was unnecessary, too confusing, or just too complicated. I argued that it is a simple, straightforward calculation once you have isolated the impact of learning, which can always be done using the industry-standard participant estimation methodology.

So, let’s take a look at why impact and ROI are so important. L&D exists for many reasons, but I would suggest the most important is to help our organizations accomplish their business goals—like higher sales, increased productivity, reduced costs, and greater customer or patient satisfaction. Of course, L&D is positioned to help achieve HR goals as well, like higher employee engagement, which in turn contributes indirectly to realizing all the business goals. Most L&D departments are also responsible for leadership development, which is sometimes itself a business goal.

These are all very important, high-level business or HR goals. Organizations invest considerable time and money in learning to achieve these goals, so it follows that CEOs would want to know what they are getting for their investment. What difference does learning make? Is it worth doing? Jack Phillips asked CEOs what they most wanted to see from L&D and over 90% said impact and over 80% said ROI. Less than 10% were receiving any impact data at the time and only slightly more were getting any ROI information. The best practice here is to carefully align your learning to these goals, plan it in partnership with the goal owner, set targets for impact and ROI, and then jointly manage the program to ensure the targets are achieved, including the level of application required to deliver the planned impact and ROI.

From both a planning and a follow-up point of view, some measure of isolated impact and ROI are the only ways to ensure learning (not L&D, but learning, which must be co-managed by the goal owner and L&D in partnership) delivered on its promise. The CEO, head of L&D, program manager, and goal owner will want to know how much difference learning made and what opportunities can be identified for improvement or optimization. The upfront agreement on impact will drive a discussion on what is possible and what level of effort by all parties will be required. The upfront agreement on ROI is nothing other than the business case for the investment. Whether it is ROI or net benefits, all parties will want to be sure that benefits are expected to exceed the costs. If not, why would you proceed? Afterwards, everyone will be interested in what impact and ROI were achieved. Actuals will never exactly match plan, and that is okay, but hopefully they will be close to plan. If not, there is a learning opportunity since either the plan was off or execution of the plan was off.

So, impact and ROI are important for high-level goals. How about lower-level goals and day-to-day business needs? Here, I think the answer depends. Most organizations have limited resources, so those resources should be focused first on the most important organizational goals. Plus, lower-level goals may not have easily identifiable outcomes, and thus it becomes harder to identify impact. Some learning, like compliance training, on-boarding, and basic skills training, is simply essential and there is no need for a business case or ROI. For this essential learning the goal should be to conduct it as effectively and efficiently as possible. So, employ Levels 1-3 to measure effectiveness and focus on minimizing the time and cost to be efficient.

In conclusion, impact and ROI are important for learning professionals, organizational goal owners, and senior leaders like the CEO. These measures are important upfront to help plan the level of effort required and to reach agreement with the goal owner on roles and responsibilities. The higher the desired impact and ROI, the greater the effort will have to be. Once the planning is completed and the program is underway, the targets for impact and ROI can be compared to year-to-date results to determine if mid-course corrections are necessary. In other words, this is not a set-it-and-forget-it exercise. Programs must be managed monthly to achieve the planned impact and ROI. Last, at the end of the program once the impact and ROI have been estimated, there will be an opportunity to learn from any variances to plan and improve.

ROI: Still Evoking Confusion and Controversy

The Return on Investment (ROI) concept for learning has been around since Jack Phillips introduced it about forty years ago. You might think by now that at least the confusion over its use would have diminished even if the controversy had not. Not even close. I continue to see the concept abused and misused on at least a monthly basis.

Here is an example from a recent blog: “Anyone who is involved in the business world is familiar with the concept of ROI. It’s punchy, with its relatively simple calculation, and can make or break a purchasing decision. But when it comes to learning initiatives, gathering the necessary data to calculate ROI is difficult, to put it mildly.” The author goes on to say that learning initiatives implemented as an integral part of business strategy can be measured by the success of that strategy.

There are several issues with the blog. First, the author appears to be confusing ROI used in the business world with ROI used in the learning world. Typically, a financial ROI is calculated to make the business case for an investment, like a new product line or facility. The definition of this financial ROI is not the same as the learning ROI. The numerator of the financial ROI is the net present value (NPV) of the increase in profit due to the investment. The denominator is the cost of the asset required for the project, such as the cost of a new facility. This will be capitalized as an asset on the balance sheet and depreciated each year.

Contrast this with the learning ROI which has (usually) one year of contribution to profit in the numerator (so no need for NPV) and no asset whatever in the denominator. Instead, the denominator is the cost of the learning initiative, which is an expense item from the income statement. So, there are two different definitions, and the calculation for the financial ROI is actually more complicated than that for the learning ROI. Interestingly, it was exactly this difference in formulas which led my colleagues in accounting at Caterpillar to tell me that I could not keep referring to learning ROI as “ROI” since it was confusing our leaders. So, I renamed it Return on Learning, or ROL, in 2002. The takeaway here is to remember that the two are not the same and to let those in your organization know that learning ROI is calculated differently.
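
To make the contrast concrete, here are the two definitions side by side, sketched from the descriptions above (the discount rate and time horizon behind the NPV are whatever your finance organization uses, and the learning ROI typically counts one year of benefit):

```latex
\text{Financial ROI} = \frac{\mathrm{NPV}(\text{increase in profit over the life of the asset})}{\text{cost of the capitalized asset}}
\qquad
\text{Learning ROI} = \frac{\text{gross benefit} - \text{total program cost}}{\text{total program cost}}
```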

The next point by the author is that learning ROI is difficult to calculate. The ROI calculation itself is very easy: simply, the net benefit of the initiative divided by the total cost. Net benefit is the gross benefit minus total cost. Generally, total cost is easy to calculate, but the sticky part is the gross benefit which is the increase in profit before subtracting the cost of the initiative. The gross benefit, in turn, depends on the isolated impact of the learning initiative, like a 2% increase in sales due just to the learning. Likely, this is what the author had in mind when complaining about the difficulty of calculating ROI.

However, isolation need not be that difficult. The learning team can work with senior leaders to agree on a reasonable impact for planning purposes. And, once the program is completed, there are several methods available to estimate actual impact which do not require special training or hiring a consultant. Often, it will suffice to simply ask the participants and their supervisors what they believe the impact was and then reduce the estimate somewhat to allow for the inherent error in any estimate. While not precise enough for an academic journal article, this is usually all we need in the business world to make appropriate decisions (like go/no-go after a pilot) or identify opportunities for improvement. It will also be close enough to determine if the investment in learning more than covered the total costs.
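
Here is a minimal sketch of that estimation chain in code. The attribution and confidence adjustment follow the ask-and-discount approach described above; all of the numbers are hypothetical.

```python
# Illustrative only: isolate the impact via participant estimates, then compute learning ROI.

def learning_roi(total_benefit, attribution_pct, confidence_pct, program_cost):
    """Return (gross benefit credited to learning, net benefit, ROI as a percent)."""
    gross = total_benefit * attribution_pct * confidence_pct  # discounted for estimation error
    net = gross - program_cost
    return gross, net, 100.0 * net / program_cost

# Example: sales rose $1,000,000; participants credit 40% of the increase to the training
# and are 75% confident in that estimate; the program cost $150,000 to deliver.
gross, net, roi = learning_roi(1_000_000, 0.40, 0.75, 150_000)
print(f"Gross benefit: ${gross:,.0f}, net benefit: ${net:,.0f}, ROI: {roi:.0f}%")
```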

Last, the author suggests that if the learning is integrated into the business strategy, its success can be measured by the success of the business goal. I strongly agree that we should always start with the end in mind (the business goal) and design the learning so it directly supports the business goal. Further, we need to work in close partnership with the business goal owner to ensure that the learning is provided to the right target audience at the right time and then reinforced to ensure the planned impact is realized. While this does provide a compelling chain of evidence that learning probably had the intended impact, it does not tell us how much impact the learning had or whether the investment in learning was worthwhile. Instead of a measurement, then, we are simply left with a “mission accomplished” statement.

The question remains how much sales would have risen without any training. If sales would have increased by 10 percent without training, the training clearly was not worth doing. How about if sales would have increased 9% without training? Was it worth doing? We still need to isolate the impact of training and calculate net benefit or ROI to answer this ultimate question of value – not to “prove” the value of the training but to enable us to make better decisions about programs going forward and avoid investing when the return does not justify the cost.

So, the debate goes on. A friend asked me recently if I still believe in ROI. Yes, I do, but we need to use it wisely. It should not be used defensively to try to prove the value of an initiative or an entire department. In other words, it is not an exercise to deploy at the end of a program or fiscal year when you are under fire. Rather, it should be used to help plan programs so they deliver value, to decide which programs to continue, and to identify opportunities for improvement and optimization going forward. ROI will never take the place of careful planning, building relationships with goal owners, or smart execution. You will always have to get the basics right first. Once you have done that, ROI can be a powerful tool to make you a better manager.

The Portfolio Approach to Managing Learning

I just listened to a great webinar by Cristina Hall, Director of Product Strategy for Metrics That Matter (MTM), an organization that specializes in helping companies gather, analyze and report data on learning. They have developed a very helpful approach for thinking about and managing the mix of courses an organization offers.

At CTR we spend most of our time focusing on the key programs in support of key organization goals as well as the key initiatives of the L&D department head to improve internal efficiency and effectiveness. Consequently, we advocate the use of program reports by the program manager and the goal owner which contain detailed data on the programs in support of key company goals like increasing sales. We also recommend summary reports for sharing with the CEO and other senior leaders to show alignment of learning to their goals and the expected impact from that learning. Last, we have operations reports for the L&D department head to use in managing improvement of a few select efficiency and effectiveness measures. So, the focus is primarily on targeted programs which are actively managed on a monthly basis to deliver planned results.

But what about all of the other courses an organization offers? Some companies have hundreds of courses in their learning management system (LMS). Most are not strategically aligned to the top goals of the company and are not actively managed, although they may be reviewed periodically. While not strategic, they can still be very important, and organizations want to be sure they offer the right mix of courses and that the offered courses add value. So, the question is, “How should all of these programs be managed?” This is where the MTM approach offers a very valuable contribution to the field.

MTM recommends assigning each course to one of four portfolios or categories. The four portfolios are: Drive Growth, Operational Efficiency, Mitigate Risk, and Foundational Skills. The portfolios reflect the reason for the learning and force the L&D group to answer the question of why the course is being offered. Is it to improve revenue? If so, it falls in the Drive Growth portfolio. Is it to reduce cost or improve productivity? If so, it falls in the Operational Efficiency portfolio. Is it to ensure compliance with regulations or standards or reduce the organization’s exposure to potential harm? If so, it falls in the Mitigate Risk portfolio. All other courses are assigned to the Foundational Skills category which would include all types of basic and advanced work skills, from very specific job training to communications skills and team building.

The beauty of this approach is in its simplicity. We could imagine more portfolios, and we could argue some courses may fall in more than one portfolio, but assigning each course to just one of four portfolios forces the question of what the course is primarily designed to accomplish. Once all the courses have been assigned, imagine a grid with four quadrants—one for each portfolio. Now populate the box for each quadrant with measures that will help you manage your portfolios. First, show the percentage of courses and hours as well as the amount of L&D budget dedicated to the portfolio to determine if your mix is right. For example, it may be that 5% of the L&D budget is being spent on driving growth, 35% on improving efficiencies, 10% on risk and 50% on foundational skills. This may be the right mix, especially for an L&D department with responsibility for a lot of basic skills training. On the other hand, a CEO might look at the mix and be surprised that more effort isn’t being allocated to driving growth and improving efficiency. Just sharing this data can lead to a great discussion about the purpose and priorities of L&D.
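
As a simple illustration, here is a hedged sketch in Python with a hypothetical course catalog showing how the budget mix across the four portfolios might be tallied; the course names and spend figures are made up.

```python
from collections import defaultdict

# Hypothetical course catalog: (course name, portfolio, annual spend in $)
courses = [
    ("Consultative Selling", "Drive Growth", 120_000),
    ("Lean Fundamentals", "Operational Efficiency", 300_000),
    ("Data Privacy Basics", "Mitigate Risk", 90_000),
    ("New Manager Essentials", "Foundational Skills", 250_000),
    ("Time Management", "Foundational Skills", 140_000),
]

# Sum spend by portfolio and show each portfolio's share of the L&D budget
spend_by_portfolio = defaultdict(float)
for _, portfolio, spend in courses:
    spend_by_portfolio[portfolio] += spend

total_spend = sum(spend_by_portfolio.values())
for portfolio, spend in sorted(spend_by_portfolio.items()):
    print(f"{portfolio:25s} {spend / total_spend:6.1%} of L&D budget")
```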

Measures could be added for each portfolio to show the number of participants, level 1 reaction, level 3 application, and some indicators of level 4 impact or outcome. The data could also be color coded to show a comparison to last year or, better yet, to plan (or target) for this year. This portfolio approach can also be incorporated directly into TDRp’s operations report, where the four portfolios serve as headings to organize the measures.

In conclusion, I strongly recommend an approach like MTM’s to better understand the existing mix of courses and to ensure alignment with the priorities of the CEO in terms of budget and staffing. I also believe the portfolio approach will be helpful in monitoring the courses throughout the year for replacement or revision.

Great Advice From the Kirkpatricks

by Dave Vance

In their July 25 newsletter the Kirkpatricks ask if you are a Training Fossil. I love the imagery and their message. Here is what they say:

“The role of training professionals has changed as technology has advanced. Fifteen years ago, it may have been enough to design or deliver a great training class. If that’s all you do these days, however, you are in danger of being replaced by a smartphone app or other technology that can do the same things quickly and for free, or nearly free.

In a similar way, training professionals need to design and build “roads and bridges” into the job environment before, during and after training, and then monitor them regularly. Instructional designers need to do more than design formal classroom or virtual instruction. Trainers need to do more than deliver it.”

Their concern is a good one and I think in the near future we will see aggregators who can provide generic content on any platform so efficiently that internally developed content will not be able to compete. So, if a training department is focused solely on providing generic content, often in an effort to boost employee engagement, it risks becoming extinct just like the dinosaurs.

Unlike the dinosaurs, however, we can avoid extinction by making sure we are adding true value to our organizations. As Jim and Wendy explain, this goes far beyond providing generic content. It starts with understanding your organization’s goals and priorities and includes establishing a very close, strategic relationship with senior business leaders. Put simply, start with the “business” end in mind, which the Phillipses and all leading authors in our field recommend. Next, design the entire learning experience and define the roles that the business goal owner and L&D must play in order for the learning to be appreciated, applied and impactful. This starts by engaging the business goal owner to agree on reasonable expectations for the learning if both of you fulfill your roles. Neither party can make the learning a success on their own. While the learning department will play the biggest role in designing and delivering the learning, the business goal owner and supervisors will need to play the biggest role in communicating the need for the learning and then reinforcing the learning to ensure application.

Roy Pollock describes what is necessary for successful learning in The Six Disciplines of Breakthrough Learning, which I highly recommend for understanding learning as a process and not an event. In a similar fashion, Jack and Patti Phillips emphasize the importance of design thinking in their latest books. All agree that learning cannot be just an “event”. For true impact it must be carefully planned and managed in close partnership with the business, and the follow-up after the “event” to ensure application and impact is just as important as the design and delivery of the “event” itself.

This process-oriented approach, coupled with a close strategic partnership with the business, is what will allow the profession to add true value. It also will prevent us from becoming extinct, since we add value at every stage of the process, from consulting with the business up front to helping the business reinforce the learning with their employees on the back end. While we may purchase generic content where it makes sense, our value add goes far beyond the design and delivery of content. We will not be in danger of becoming a training fossil as long as we focus on partnering with the business to meet their needs by delivering the entire learning experience required to make a difference in results.

Impact and ROI of Learning: Worth Pursuing or Not?

Several articles in the last month have suggested the profession should step back from trying to isolate the impact of learning and the resulting return on investment (ROI). The authors argue that it is difficult or impossible to isolate the impact of learning from other factors, and that no one will believe the results anyway, so why bother. Instead, they call for showing the alignment of learning to business goals, focusing on easier-to-measure levels 1-3 (participant reaction, amount learned, and application), and finally focusing on employee engagement with learning (consumption of learning and completion rates). Aligning learning to business goals and measuring levels 1-3 are always good ideas, so no disagreement there. And, depending on your goals for learning and the type of learning, measuring average consumption and completion rates may also make sense. However, for certain types of learning there is still a need to measure impact and ROI (levels 4 and 5).

The primary reason is that senior corporate leaders (like the CEO and CFO) want to see it, and so should the department head and program director. Research by Jack Phillips in 2010 showed that CEOs most want to see impact and ROI but instead are often provided with only participation data (number of learners, number of courses, etc.), participant reaction (level 1) and cost. While these measures are helpful, they don’t answer the CEO’s question of what they are getting for their investment. CEOs and CFOs want to know what difference the training made and whether it was worth the time and effort. The program director and CLO should also be curious about this, not from a defensive point of view (like proving the value of training), but from a continuous improvement perspective where they are always asking what we learned from this project and how we can improve next time.

It is true that level 4, the isolated impact of learning on a goal, is harder to determine than levels 1-3. Sometimes there will be a naturally occurring control group which did not receive the training. In this case, any difference in performance between the two groups must be due to the training. In other cases, statistical analysis like regression may be used to estimate the isolated impact of the training. The most common approach, however, is participant and leader estimation, which is generally good enough to roughly determine the impact of learning and definitely good enough to learn from the project and to identify opportunities for improvement. In a nutshell, the methodology calls for asking participants to estimate the impact from just the training and to also share their confidence in that estimate. The two are multiplied together to provide a confidence-adjusted estimate of the isolated impact of learning. For example, one participant may say that the training led to a 40% increase in performance (like higher sales) but is only 50% confident in the 40%. The confidence-adjusted impact would be 40% x 50% = 20% increase in performance. Repeat for others and average. Best practice would be to ask supervisors what they believe as well. Then share with the initiative’s sponsor and other stakeholders and modify as necessary. Once the level 4 isolated impact is determined, ROI is very straightforward.
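
Here is a minimal sketch in Python of the confidence-adjusted estimation just described, using hypothetical survey responses.

```python
# Each tuple is one participant's survey response:
# (estimated % performance improvement due to the training, confidence in that estimate)
responses = [
    (0.40, 0.50),   # 40% impact estimate, 50% confident -> 20% adjusted
    (0.25, 0.80),
    (0.10, 0.90),
    (0.30, 0.60),
]

# Multiply each impact estimate by the participant's confidence, then average
adjusted = [impact * confidence for impact, confidence in responses]
average_isolated_impact = sum(adjusted) / len(adjusted)

print(f"Average confidence-adjusted impact: {average_isolated_impact:.1%}")
# Best practice: repeat with supervisors' estimates and review the result with the sponsor.
```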

The Existential Question: Why Do We Measure?

There are several excellent answers to the question: Why Do We Measure? And, the answers will provide much needed direction to your measurement strategy. The most common answers are to:

  1. Answer questions
  2. Show results
  3. Demonstrate value
  4. Justify our budget (or existence)
  5. Identify opportunities for improvement
  6. Manage results

We will examine each in turn and comment on the implications of the answer for your strategy.

The most basic reason for measuring is to answer questions about programs and initiatives. For example, someone wants to know how many courses were offered in the year, how many participated in a particular course, or what the participants thought about the course. Assuming this information is already being collected and stored in a database or captured in an Excel spreadsheet, simply provide the answer to the person who asked for it. If it is a one-time-only (OTO) request, there is no need to create a scorecard or dashboard. If someone wants to see the same information every month, then it does make sense to create a scorecard to show the data by month.

The second most common reason is to show results. L&D departments produce a lot of learning each year and usually want to share their accomplishments with senior management. In this case, results generally translate into activity, with departments sharing their results in dashboards or scorecards which show measures by month (or quarter) and year-to-date. The scorecard might also show results for the previous year to let senior management know they are producing more learning or improving.

The third reason is to demonstrate value. It is also very common and is just an extension of the second. Some believe that simply showing results demonstrates value while others believe that demonstrating value requires a comparison of activity or benefit to cost. For example, a department might demonstrate value by calculating cost per participant or cost per hour of development and showing that their ratios are lower than industry benchmarks or perhaps last year’s ratios.  Some adopt a higher standard for value and show the net benefit or ROI of a program. Net benefit is simply the dollar value of the impact less the total cost of the program. Any value above zero indicates that the program has more than paid for itself. Return on investment (ROI) is simply net benefit divided by total cost expressed as a percentage, and any positive percentage indicates the program more than paid for itself. Measures to demonstrate value are usually shared at the end of a program or annually rather than monthly.
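
As a hedged illustration, here is a small sketch in Python with made-up figures showing the kind of cost-ratio comparison described above; the benchmark values are placeholders you would replace with industry or prior-year data.

```python
# Hypothetical annual figures for an L&D department
total_cost = 1_800_000            # fully loaded L&D spend ($)
participants = 6_000
hours_delivered = 48_000          # total learner hours

cost_per_participant = total_cost / participants
cost_per_hour = total_cost / hours_delivered

# Hypothetical benchmarks (industry figures or last year's ratios)
benchmark_per_participant = 350.0
benchmark_per_hour = 45.0

print(f"Cost per participant: ${cost_per_participant:,.0f} (benchmark ${benchmark_per_participant:,.0f})")
print(f"Cost per hour:        ${cost_per_hour:,.2f} (benchmark ${benchmark_per_hour:,.2f})")
```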

The fourth reason is to justify the L&D budget or the department’s existence. This is an extension of the third reason where justification is the reason behind demonstration of value. In my experience this is almost always a poor reason for measuring. Typically, a department measuring to justify their budget is a department which is not well aligned to the goals of the business, lacks strong partnerships with the business, and has poor or nonexistent governing bodies. Not only is it a poor reason for measuring, but the effort in most cases is doomed to fail even with high ROIs. In this situation, energy would be better spent addressing the underlying problems.

The fifth reason is to identify opportunities for improvement. This is a great reason for measuring and indicates a focus on continuous improvement. In this case scorecards may be generated to show measures across units, courses and instructors with the goal of discovering the best performers so that the lessons learned from them can be broadly shared. There may also be a comparison to best-in-class benchmarks, again with an eye toward identifying areas for internal improvement. Another approach would be to create a scorecard, graph or bar chart with monthly data to determine if a measure is improving or deteriorating through time.

The last reason to measure is to manage. This is perhaps the most powerful, and least appreciated, reason to measure. A well-run L&D department will have specific, measurable plans or targets for its key measures. These plans will be set at the start of the fiscal year, and the L&D leaders will be committed to delivering the planned results. By definition, this approach requires the use of measures, and time will be spent selecting the appropriate measures to manage and then setting realistic, achievable plans for each. Once the plans are done, reports need to be generated each month comparing year-to-date results to plan in order to answer two fundamental questions: 1) Are we on plan, and 2) Are we likely to end the year on plan? If the answer is “no” to either question, the leaders need to take appropriate action to end the year as close to plan as possible. In this case, reports must be generated each month showing plan, year-to-date results, and ideally a forecast of where each measure is likely to end the year.
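
A minimal sketch in Python, with hypothetical numbers, of the two monthly questions; the straight-line forecast is a simplifying assumption, and a real forecast would reflect seasonality and known plans.

```python
# Hypothetical key measure: participants trained in a program supporting a sales goal
annual_plan = 1_200
ytd_plan = 700          # planned participants through, say, July
ytd_actual = 640
months_elapsed, months_in_year = 7, 12

on_plan_ytd = ytd_actual >= ytd_plan
run_rate_forecast = ytd_actual / months_elapsed * months_in_year  # simple straight-line forecast

print(f"On plan year to date? {on_plan_ytd}")
print(f"Forecast for year end: {run_rate_forecast:,.0f} vs plan of {annual_plan:,}")
# If either answer is off plan, leaders would agree on actions to close the gap.
```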

A good measurement and reporting strategy will address all the reasons above except number four (justify existence). Reports, dashboards or scorecards are required in some cases but not in others, just as monthly reporting is required in some cases but not in others. If there are limited resources, it is best to generate regular reports only for those measures to be actively managed (reason six) or those measures used to show results (reason two). Reports can be generated on an as-needed basis in other cases, and most measures can simply be left in the database until they are needed to answer a specific question.

What Finance and Accounting Should Expect from HR

Finance and accounting often do not hold HR to the same standards as other departments because it is believed that HR is different and people initiatives cannot be measured or managed like other business initiatives. However, having different standards for HR is a disservice to both HR and the company, resulting in lower expectations and less return from HR expenditures.  HR professionals are quite capable of playing by the same rules as their colleagues in other departments and can deliver tremendous business value, but finance and accounting professionals need to have a realistic sense of what may be expected from HR.

I believe that HR, including L&D, should be held to the same business standards as other departments. This means that finance and accounting should expect the same from HR as any other department – not more, not less – just the same. At a minimum, finance and accounting should have five expectations for HR.

First, since HR is a support function, it should align some of its initiatives to the organization’s top business goals. This means that HR, through its many departments like L&D, talent acquisition and total rewards, should have programs or initiatives that directly support business goals like increasing sales, reducing defects, improving patient care, etc. Of course, HR will also have programs in support of HR goals like increasing employee engagement, improving leadership and reducing regrettable turnover. While these programs are very important and will indirectly contribute to achieving all the business goals, the expectation should be for HR to directly contribute to the business goals as well, which it can easily do.

The second expectation is that HR will prepare a business case for major programs or initiatives, just like other departments are required to do. The business case will bring together the planned benefits and costs of the initiative, making explicit all the key assumptions and risks. The business case should include the HR ROI (return on investment), commonly used in L&D, which is simply the net benefit divided by the total cost. This will help finance and accounting make better decisions about funding and will allow comparisons among different HR requests, although it will not allow a direct comparison to the financial ROIs from other areas since the formulas for HR and financial return on investment are different.

The third expectation is that HR will create a business plan for the coming fiscal year, just like other departments do. The plan may be a written document or a PowerPoint deck, depending on the organization’s culture, and should include at least an executive summary, a review of last year’s accomplishments, the business case for the coming year for at least the major initiatives, and the budget, staffing, and other resources required to deliver the plan. The plan will include specific, measurable goals for all key initiatives. The process of creating a plan is just as important as the finished product. A good business planning process will ensure the right questions have been raised and the right people have been involved to yield the organization’s best thinking about what can be accomplished in the coming year.

The fourth expectation is that HR will execute its approved business plan with discipline. Now that specific, measurable goals (targets, plans, KPIs – whatever you prefer to call them) have been set in the business plan, HR needs a process to ensure these planned results are delivered. Disciplined execution requires monthly reports comparing year-to-date (YTD) results to plan and ideally a forecast for how the year is likely to end compared to plan as well. The reports should include all the key measures identified in the business plan and should be used in a monthly meeting of senior leaders dedicated to actively managing results to come as close to plan as possible by the end of the year. Disciplined execution requires that two questions be answered every month: 1) Are we on plan year to date? and 2) Are we going to end the year on plan? Answers to these two questions will drive management efforts for the rest of the year.

The fifth expectation is that HR will be accountable for results. The approved business plan contained the specific, measurable goals for the year. Now HR needs to execute with discipline and be willing to be held accountable for achieving its plans, just the same as any other department.

These are the five most important expectations that finance and accounting should have for HR. Each is a realistic expectation that is being met by some HR departments today. Some in and outside HR have argued that HR is different and cannot meet these expectations. They claim that since HR works with people, specific and measurable goals cannot be set and HR initiatives cannot be managed the same way as in other departments. Consequently, they don’t believe in creating business cases or business plans or reports comparing progress to plan. As an HR insider, I believe they are wrong. HR can meet these expectations. HR is no more difficult to manage than sales, manufacturing, quality or other departments, and in some ways it may be easier since we often have more control over the outcome.

Meeting these expectations will strengthen HR’s seat at the table and vastly improve our credibility in the eyes of our colleagues. Let’s show them we can play by the same rules and realize our full potential to contribute to our organization’s success.

The Future of Measurement and Reporting for Learning and Development

2018 CTR Conference Summary

We had our best conference yet February 21-22 in Dallas. We had more than 90 participants despite widespread flight cancellations and delays due to terrible weather. The energy level, sharing, and participation more than made up for the weather.

Roy Pollock, co-author of The Six Disciplines of Breakthrough Learning, was our first keynoter and really set the tone for the conference by reminding people that learning must be connected to the business.  Our panel session before lunch gave Kimo Kippen, Adri Morales, Terrence Donahue, and Peggy Parskey an opportunity to share their thoughts on the greatest opportunities ahead of us. All agreed we live in a very exciting time for L&D and that advances in technology and learning platforms will allow us to reach more people in more impactful ways than ever before. Adri Morales, 2016 CLO of the Year, then stayed on the stage to share her personal journey and thoughts with the group.

The afternoon of Day One offered six more breakout sessions for a total of nine the first day. Predictive analytics, measuring informal learning, ROI Goal Setting, xAPI, spending levels, and the application of the 6D’s at Emerson were just a few of the topics that generated a lot of interest. Participants came back together late in the afternoon to learn about  a new maturity model for learning developed by Peggy Parskey, Cushing Anderson and Kevin Yates which was in turn used to help judge the First Annual TDRp Excellence Awards. The 2018 winner was USAA. We closed out Day One by having speakers meet with participants in the café for wine and lots of good discussion.

Day Two began with a keynote by Marianne Parry and Lee Webster on the brand new ISO standards for human capital reporting. The audience was very interested to hear about the recommended measures and next steps in the process to adopt these. More breakout sessions followed to engage participants on measurement strategy, isolating impact, level 3 design, alignment and the future of the corporate university. Leaders from Humana, Siemens and USAA also shared their measurement strategies and journeys. We wrapped up with a panel hosted by Patti Phillips to learn what Lorrie Lykins, Paul Leone and John Mattox saw for the future of measurement. Much like Wednesday’s panel, the group predicted a bright future for measurement enabled by technology, xAPI, and advances in the field, especially in predictive analytics.

Everyone left feeling recharged and excited after a day and a half of great speakers and provocative, informative sessions. And it just feels good to be around like-minded people who understand your challenges and who all want to improve.

Planning for next year’s conference is already underway. It will be February 20-21 in Dallas at the same venue. Patti Phillips, CEO of the ROI Institute, and Jeff Higgins, CEO of the Human Capital Management Institute (HCMI) will keynote and Lee Webster will return to give us an update on the ISO standards for human capital reporting. CTR is poised to help drive an ISO effort to establish TDRp as the framework for the types of measures and reports for our field so we will have an update on that as well.

Hope to see you in Dallas in 2019!

Alignment Revisited

Tim Harnett, in a recent Human Capital Media Industry Insights article (undated), reminds us of the importance of aligning L&D initiatives to organizational goals. He shares some sobering research indicating that “only 8% of L&D professionals believe their mission is aligned to the company strategy” and “only 27% of HR professionals believe there is a connection between existing [learning] KPIs and business goals”. So, despite our perennial intentions as a profession to do a better job of alignment, we still have a long way to go.

I am afraid, though, that he does not go far enough in recommending the actions we need to take. He suggests that “identifying and tracking KPIs related to L&D initiatives is the best way to align L&D to organizational goals and make the business case for development programs”. For KPIs (key performance indicators) he is thinking of measures like level 1 participant satisfaction, learning hours, level 3 application and employee satisfaction. While these are all important measures and can indeed help make the case for learning, they actually have nothing to do with alignment.

Here is why. Alignment is the proactive, strategic process of planning learning to directly support the important goals and needs of the organization. Alignment requires L&D leaders to discover the goals and needs of the organization and then go talk to the goal owners to determine if learning has a role to play. If it does, the two parties need to agree on the specifics of the learning initiative including target audience, timing, type of learning, objectives, cost, and measures of success (ideally the outcome or impact of the initiative on the goal or need). They must also agree on the mutual roles and responsibilities required from each party for success including communication before the program and reinforcement afterward.

Measures or KPIs will come out of this process, but the measures are NOT the process. It is entirely conceivable to have a learning program with great effectiveness and efficiency measures indicating many employees took it, liked it, learned it, and applied it, but the program was NOT aligned to the goals of the organization and should never have been funded. This is true even if it had a high ROI. Great numbers do not take the place of a great process, and alignment cannot be established after the fact by looking back at measures or KPIs.

Conversely, you can easily imagine a program that is definitely aligned to one of the organization’s top goals but was executed so poorly that its effectiveness numbers came in very low. So, alignment is about the process of working with senior organizational leaders to plan learning initiatives which directly address their goals and needs. It must start with the organization’s goals and not with existing learning initiatives.

Last, there is much discussion these days about using employee engagement as an indicator of alignment. It is not, for all the reasons discussed above. It is simply another measure and not a process. For engagement to be an indicator of alignment, you would have to assume that employees know the organization’s goals as well as the senior leaders do and that learning tied to those goals is the primary driver of engagement. Both of these assumptions are likely to be false. A focus on employee engagement would be appropriate only if engagement is the highest priority goal of the organization. In most organizations, business goals like higher revenue, lower costs, and greater productivity are more important, although higher engagement is always a good thing and will contribute indirectly to achieving the business goals.

In conclusion, I am happy to see a focus on this important topic of alignment, but success will require us to work the process of alignment with senior leaders. At the end of this process, every L&D department should have a one-pager listing the organizational goals in the CEO’s priority order and, whenever it is agreed that learning has a role to play, a list under each goal of the key learning programs that are planned. This is how you demonstrate alignment, not through KPIs or measures.

Attend Corporate Learning Week

Demonstrating value to the business is old news, but demonstrating value to your learners is just as challenging. How are you meeting the expectations of the modern learner? Don’t get left behind!

Corporate Learning Week Silicon Valley returns to the heartland of innovation for three days of discovering what it takes to curate a continuous, personalized learning experience, and in turn, a high-functioning workplace.

Join us March 26-28 at the Argonaut Hotel in San Francisco to experience an unmatched opportunity to meet your learning & talent development peers representing the best in innovation: hear from Uber, Amazon, Adobe, Airbnb, Facebook, Google, and more!

Download the agenda

Why Attend Corporate Learning Week in Silicon Valley…

  • Take advantage of our unique site tour to Lucasfilm Studios, which will help you approach your own learning design through fresh eyes
  • Navigate the expo hall as you please, meeting and hearing from visionary solution providers bringing your big ideas to life
  • Gain insights at our Technology & Strategy breakouts: we’re committed to your ROI, and your focus. Bring a colleague to divvy up the sessions & share your notes!
  • Collaborate with your peers in the Interactive Discussion Groups: make your most valuable connections through sharing & finding solutions to pressing challenges in our rotating, practitioner-only roundtables

In partnership with Corporate Learning Week, Silicon Valley, the Center for Talent Reporting has secured a 20% discount off standard registration pricing! Use the code: CLWSV_CTR when registering.

Register Here!

Portfolio Evaluation: Aligning L&D’s Metrics to Business Objectives

by Cristina Hall
Director of Advisory Services – CEB, now Gartner

Using data built on standard metrics has become table stakes for L&D organizations looking to transform and thrive in a rapidly-changing business environment.

A critical challenge that remains for many organizations, however, is how to prioritize and structure their metrics so that the numbers reinforce and showcase the value L&D is contributing to the business.  It’s important to select measurement criteria which reflect L&D’s performance and contextualize them with outcome measures used by the business.

Applying a Portfolio Evaluation approach to Learning and Development provides the linkage needed to address this challenge. It is a clear, outcome-centered framework that can be used to position L&D’s contributions in business-focused terms, at the right level of detail for executive reporting.

How Does L&D Deliver Value?

Delivering training does not, in itself, deliver value.  Training is a tool, a method, to develop employees’ knowledge and skills so they will deliver more value to the business.  The value that training programs deliver aligns to four strategic business objectives.

Driving Growth

Courses aligned to the Drive Growth objective are designed to increase top-line growth, thus growing revenue and market share. The organization tracks metrics related to sales, renewals, upsells, customer loyalty and satisfaction, etc., while L&D can track leading indicators in each of these areas based on employees’ assessments of how much each training program they attended will improve these areas. These courses are intended to drive competitive or strategic advantage by focusing on organization-specific processes, systems, products, or skillsets. Examples include courses that are designed to increase sales, customer retention or repeat business, new product innovation, or help managers best position their teams for business growth.

Increasing Operational Efficiency

Courses aligned to Operational Efficiency increase bottom-line profitability. The business tracks metrics related to productivity, quality, cost, etc., while L&D can track leading indicators in each of these areas based on employees’ assessments of how much each training program they attended will improve these areas. These courses are intended to drive competitive or strategic advantage by focusing on organization-specific processes, systems, or skillsets. Examples include courses that are designed to increase productivity, decrease costs, increase process innovation, or help managers maximize bottom-line performance.

Building Foundational Skills

Courses aligned to the Foundational Skills value driver are designed both to ensure that gaps in employee skills can be addressed and to demonstrate that employees can grow and develop to provide even more value to the business; it’s frequently less expensive to fill a minor skill gap than to replace an employee who is already on-boarded and semi-productive. The business tracks metrics related to bench strength, employee engagement, turnover, promotion rates, etc., while L&D can track leading indicators in each of these areas based on employees’ assessments of how much each training program they attended will improve these areas. These courses tend to be off-the-shelf content rather than custom content designed specifically for the business. Examples include time management, MS Office, and introductory/generalized coaching or sales courses.

Mitigating Risk

Courses aligned to the Mitigate Risk value driver are designed to shield the business from financial or reputational risk by ensuring employee compliance with specific policies or maintenance of specific industry certifications. The business tracks metrics related to safety, legal costs, etc., while L&D can track leading indicators in each of these areas based on employees’ assessments of how much each training program they attended will improve these areas. These courses tend to be focused on compliance, regulatory, and safety training, and tend to incorporate content similar to that of other courses in the organization’s industry.

Become a Portfolio Manager

Every learning asset, whether informal or formal, can be tied back to one of the four drivers of value. The variety and depth of metric collection and the performance expectations associated with those metrics differ across each of these value drivers, which is why grouping courses or learning assets into Portfolios is helpful. L&D leaders become investment managers, monitoring and managing assets that are expected to produce comparable results to affect the performance of people, who in turn affect the performance of the business.

Getting Started

  1. Align metrics to Portfolios: what is most important? What data is needed?
  2. Align learning assets to Portfolios: this ensures that the right metrics are collected (see the sketch after this list).
  3. Gather the data: gather training effectiveness data from learners and their managers and combine it with efficiency data from the LMS and Finance and outcome data from the business.
  4. Review, interpret, and share: use the metrics to communicate L&D’s contribution to business goals, confirm alignment, and inform strategic decision-making.
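
To make steps 1 and 2 concrete, here is a minimal sketch in Python with hypothetical portfolio-to-metric mappings and course assignments; the specific metrics and assets shown are illustrative assumptions, not MTM’s prescribed set.

```python
# Hypothetical mapping of portfolios to the metrics judged most important (step 1)
portfolio_metrics = {
    "Drive Growth":           ["sales lift indicator", "customer satisfaction"],
    "Operational Efficiency": ["productivity indicator", "cost per unit"],
    "Mitigate Risk":          ["completion rate", "incident rate"],
    "Foundational Skills":    ["engagement", "internal promotion rate"],
}

# Hypothetical assignment of learning assets to portfolios (step 2)
asset_portfolio = {
    "Negotiation Skills": "Drive Growth",
    "Safety Refresher": "Mitigate Risk",
    "Excel Basics": "Foundational Skills",
}

# Steps 3 and 4 would then gather and review the metrics each asset should carry
for asset, portfolio in asset_portfolio.items():
    print(f"{asset}: collect {', '.join(portfolio_metrics[portfolio])}")
```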

For additional detail regarding the Portfolio Evaluation approach, download our white paper, Aligning L&D’s Value with the C-Suite.

About CEB, Now Gartner

Leading organizations worldwide rely on CEB services to harness their untapped potential and grow. Now offered by Gartner, CEB best practices and technology solutions equip clients with the intelligence to effectively manage talent, customers, and operations. More information is available at gartner.com/ceb.

 

Personalized Learning: Means to an End or the End Itself?

The learning field is currently focused on personalized learning which might be defined as providing learners with individualized, custom content in the way each prefers to learn. Advances in digital learning and platforms combined with an explosion in learning content make this advancement not only possible but highly desirable. It has the potential to contribute significantly to better learning experiences and higher application rates leading to better outcomes. This said, there is a danger that some will consider personalized learning not just a strategy to improve learning but as the goal itself, the reason or mission for the learning department. This brings us to a discussion of means versus ends and the importance of keeping the two straight.

I would suggest that personalized learning is best considered a means to an end and that it will almost never be an end in itself. Over the past several years, some in our profession have advocated that it is the end. They have redefined their mission as a learning department or corporate university to provide learners with whatever they want in whatever form they want it, which is an extension of our definition of personalized learning above. At its heart, this issue of means versus ends is far from a question of semantics; rather, it is a fundamental question about the reason for the existence of corporate training.

Imagine a discussion with your CFO or CEO. They ask what your strategy is for next year. You say it is personalized learning and that the majority of your resources will be dedicated to providing more and better personalized learning. They ask why. You tell them learners will be more engaged, will learn more, and will retain more. I guarantee you that in their mind you never really answered the question. Your answer is good as far as it goes but doesn’t get to the business reason for learning. You described a process improvement for them, one that will deliver learning more effectively and efficiently, and that is good but not enough. Basically, you are improving the means to an end by personalizing the learning, but they want to know what the end is. In their mind, the end may be higher sales, greater productivity or quality, fewer accidents, lower operating costs, or higher employee engagement, but you didn’t connect the dots for them. By not appreciating the difference between means and ends, you focused just on the means when you needed to also focus on the end. Better to tell them that you will improve learning in order to drive higher sales, lower costs, or whatever their goals are. These are the ends they care about, and once they know that you are working toward the same ends, they will be more receptive to your request for resources to improve the means (personalized learning).

As a profession, we must continue to make great strides in process improvement and personalized learning is one such process improvement. But it is not and never will be an end in itself any more than e-learning, blended learning or mobile learning are ends in themselves. We don’t provide learning just to provide learning. The learning must serve a higher need. It must serve an end and that end should be one of your organization’s high-level goals or needs. With this understanding we also can see that personalized learning is not the opposite of company learning which has been defined as learning directed by the company (not the employee) to meet company needs. Instead, personalized learning should support the company goals and needs even if it is directed or mandated by the company. If at the discretion of the employee it is most likely to improve employee engagement which is a company goal in almost all organizations. If directed by the company, the personalized learning will support one of the other company goals like higher sales. So, personalized learning may be at the discretion of the employee or at the discretion of the company, but in either case, it is a means to an end.

Informal Learning Evaluation: A Three-Step Approach

by John R. Mattox, II, PhD
Managing Consultant
CEB, now Gartner

You may recall that roughly 20 years ago, eLearning was seen as the next new thing. Learning leaders were keen to try out new technologies, while business leaders were happy to cut costs associated with travel and lodging. The eLearning cognoscenti claimed that this new learning method would deliver the same results as instructor-led training. They passionately believed that eLearning would become so prevalent that in-person classrooms would disappear like floppy discs, typewriters, and rotary telephones. The learning pendulum was ready to swing from the in-person classroom experience to the digital self-paced environment.

In time, the fervor surrounding eLearning waned and practical experience helped shape a new learning world where the pendulum was not aligned to one extreme or the other. Effective formal learning programs now employ a blended approach comprised of multiple components, including in-person instructor-led classes, virtual instructor-led events, self-paced web-based modules and maybe, just maybe, an archaic but valuable resource like a book.

Informal learning is the new hot topic among leaders in the L&D field. Three things appear to be driving the conversation: the 70/20/10 model, technology, and learners themselves. While the 70/20/10 model is by no means new—it was developed in the 1980s at the Center for Creative Leadership—it has become a prominent part of the conversation lately because it highlights a controversial thought: all of the money and effort invested to create formal learning accounts for only 10% of the learning that occurs among employees. Only 10%! Seventy percent of the learning comes from on-the-job experience and 20% comes from coaching and mentoring.

These proportions make business leaders ask tough questions like, “Should I continue to invest so much in so little?” and “Will formal learning actually achieve our business goals or should I rely on informal learning?” L&D practitioners are also wondering, “Will my role become irrelevant if informal learning displaces formal learning?” or “How can L&D manage and curate informal learning as a way of maintaining relevance?”

The second influencer—technology—drives informal learning to a large extent by making content and experts easy to access. Google and other search engines make fact finding instantaneous. SharePoint and knowledge portals provide valuable templates and process documents. Content curators like Degreed and Pathgather provide a one-stop shop for eLearning content from multiple vendors like SkillSoft, Udemy, Udacity, and Lynda.

Employees are driving the change as well because they are empowered, networked, and impatient when it comes to learning:

  • 75% of employees report that they will do what they need to do to learn effectively
  • 69% of employees regularly seek out new ways of doing their work from their co-workers
  • 66% of employees expect to learn new information “just-in-time”

As informal learning becomes more prominent, the question that both L&D and business leaders should be asking is simple: “How do we know if informal learning is effective?” The new generation of learners might respond, “Duh! If the information is not effective, we go learn more until we get what we need.” A better way to uncover the effectiveness of informal learning is to measure it.

Here’s a three-step measurement process that should provide insight about the effectiveness of most types of informal learning.

1. Determine what content you need to evaluate

This is actually the most difficult step if you intend to measure the impact of informal learning systematically across an organization. If you intend only to measure one aspect of informal learning (say, a mentoring program), then the work is substantially less. When undertaking a systematic approach, the universe of all possible learning options needs to be defined. Rather than give up now, take one simple step: create categories based on the types of learning provided.

For example, group the following types of learning as:

  • Technology-Enabled Content: eLearning modules, videos, podcasts, online simulations or games
  • Documents: SharePoint resources, standard operating procedures and process documents, group webpages, wikis, and blogs
  • Internet Knowledge Portal

Create as many categories as needed to capture the variety of informal learning occurring in your organization.
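
A minimal sketch in Python, with hypothetical assets and the categories above, of how the grouping might look.

```python
# Hypothetical informal learning assets tagged by category (step 1)
informal_assets = [
    ("Onboarding video series", "Technology-Enabled Content"),
    ("SOP: claims processing", "Documents"),
    ("Engineering wiki", "Documents"),
    ("Internal knowledge portal", "Internet Knowledge Portal"),
]

# Group assets by category so a measurement tool can be chosen per category (step 2)
by_category = {}
for asset, category in informal_assets:
    by_category.setdefault(category, []).append(asset)

for category, assets in by_category.items():
    print(f"{category}: {len(assets)} asset(s) -> {', '.join(assets)}")
```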

2. Determine what measurement tool is best suited for each learning type

Common tools include surveys, focus groups, interviews, web analytics, and business systems that already gather critical operational data like widgets produced or products sold. Web analytics, business systems, and surveys tend to be highly scalable, low-effort methods for gathering large amounts of data. Focus groups and interviews take more time and effort, but often provide information rich details.

3. Determine when to deploy the measurement tool to gather information

For an eLearning module, it seems appropriate to include a web-based survey link on the last page of content. Learners can launch the survey and provide feedback immediately after the module is complete. If the content is curated by a vendor (preventing the insertion of a link on the final page of materials), then the completion of the module, once registered in the LMS, should trigger the distribution of an email with a link to the evaluation survey. Regardless of the type of learning (instructor led, virtual, self-paced, etc.), the timing and the tool will vary according to the content.
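
As an illustration of that timing logic, here is a hedged sketch in Python; the completion record and the email helper are hypothetical stand-ins, not a real LMS or survey API.

```python
# Hypothetical completion record from the LMS and a stand-in notification helper.

def send_survey_email(email: str, survey_url: str) -> None:
    # Stand-in for your organization's email service
    print(f"Emailing {email} a link to {survey_url}")

def on_module_completion(record: dict) -> None:
    """Trigger the evaluation survey when the LMS registers a completion."""
    if record.get("status") != "completed":
        return
    if record.get("survey_embedded"):      # content with the survey link on its last page
        return                             # learner already saw the survey in the module
    send_survey_email(record["learner_email"], record["survey_url"])

on_module_completion({
    "status": "completed",
    "survey_embedded": False,
    "learner_email": "learner@example.com",
    "survey_url": "https://surveys.example.com/elearning-eval",
})
```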

Is it easy to implement this measurement approach to evaluate the impact of informal learning? For some organizations, maybe. For others, not at all. However, measurement is a journey and it begins by taking the first step.

For More Information…

For guidance about measuring informal learning, contact John Mattox at CEB, now Gartner: john.mattoxii@gartner.com. To learn more about how to improve L&D measurement initiatives, download the Increasing Response Rates white paper.

About CEB, now Gartner

Leading organizations worldwide rely on CEB services to harness their untapped potential and grow. Now offered by Gartner, CEB best practices and technology solutions equip clients with the intelligence to effectively manage talent, customers, and operations. More information is available at gartner.com/ceb

Telling a Story With Data

by Christine Lawther, PhD
Senior Advisor—CEB (now Gartner)

Data is everywhere. In our personal lives, we are continually exposed to metrics. The number of “likes” on social media, usage metrics on every bill, and the caloric breakdown of burgers at the most popular fast food chains are all examples of common metrics that society is exposed to on a regular basis.

Looking at data within a business context, data insight is in high demand. More organizations are focusing on doing more with less, so data often becomes the key element that determines decisions on goals, resources, and performance. This increase in data exposure is an opportunity for learning and development (L&D) professionals to showcase their efforts and to shift the perception of L&D from a cost center to an essential contributor to the organization’s goals.

One common challenge is that L&D teams are often not staffed with team members who have a rich background in analytics. When instructional designers, facilitators, program managers, and learning leaders hold the responsibility of sharing data, it can be rather challenging to translate stereotypical L&D metrics into a compelling story that resonates with external stakeholders. That is why it is worth tapping into some foundational best practices for telling a story with data.

Structure Your Story: The Funnel Approach

If you visualize a funnel, imagine a broad opening where contents are poured in and a stem that becomes increasingly narrow. Apply this visualization as the framework to craft your story: start with broad, generic insights, and then funnel down to specifics. Doing this enables the recipient of the story to understand the natural flow of moving through diverse metrics, but still understand the overarching picture of L&D performance as a comprehensive whole. For example, it may be helpful to start by outlining overall satisfaction or utilization metrics, and then transition into something slightly more specific such as breaking out scores of key courses within your portfolio that are the biggest contributors to those overall metrics. From there, you can move into more detailed metrics by delving into components such as highest/lowest rated items within that course, time to apply training, barriers to training application, and insightful qualitative sentiments. At the very end of the story, one can conclude with specific actions that the team plans to take. Following this approach not only paints a comprehensive picture, but it also creates momentum for next steps.

Speak Their Language

Metrics that L&D often focuses on (e.g., activity, cost per learner) may not easily translate into insights that resonate with external stakeholders. Each department within an organization may have its own custom metrics. However, it is imperative that a function can demonstrate the linkage back to the broader organization. Doing this shows that the function is being a good steward of the resources granted to it and reveals how its day-to-day efforts align with the broader organization.

So, how can you demonstrate that leadership should be confident with your decisions? Communicate your impact with metrics that resonate with decision makers. If there are any core metrics that the company tracks, identify ways to directly demonstrate L&D’s linkage to them. If you are unsure, look for organizational metrics that are announced at company-wide meetings or shared on a regular basis. For example, if Net Promoter Score is something that your organization tracks, establish a Net Promoter Score for L&D and include it in your story. If increasing sales is a priority, identify how sales training is contributing to that effort.

Strike a Balance

It can be tempting to share only successes; however, it is vital to also include opportunities for improvement. Why? Because demonstrating transparency is the key to establishing trust. A strong approach is to share an opportunity for improvement along with a few specific actions the department is planning to take to address it. Doing this will provide a two-fold benefit. First, it will demonstrate that you are aware of the opportunities to work on. Second, it shows that you have proactively mapped out a plan to address those areas.

If you are finding that your story is entirely positive, consider looking for differences within the population you support. For example, does every region/department/tenure bracket report the same level of impact? Often a team may find that on a holistic level they are doing well; however, when you dig into varied demographics, there may be an area that can drive greater value. By transparently sharing your data to outline both successes and opportunities, the learning organization can become the best at getting better.

CEB Metrics that Matter is committed to helping organizations achieve more precision in strategic talent decisions, moving beyond big data to optimizing workforce learning investments against the most business-critical skills and competencies. To learn more about how we help bridge the gap between L&D and business leaders, download a copy of our white paper, Aligning L&D’s Value with the C-Suite.

About CEB (now Gartner)

Leading organizations worldwide rely on CEB services to harness their untapped potential and grow. Now offered by Gartner, CEB best practices and technology solutions equip clients with the intelligence to effectively manage talent, customers, and operations. More information is available at gartner.com/ceb.

TDRp Excellence Awards

Demonstrate and Celebrate Your Contributions to L&D Through TDRp.

The Center for Talent Reporting is pleased to announce the first annual Talent Development Reporting principles (TDRp) Excellence Award. This unique award focuses on the forward-thinking, transformational work of L&D teams to show impact and value through management, measurement, evaluation and reporting of L&D programs and the function overall.

There are several components to TDRp. This award recognizes excellence in five areas:

  • Relationship with Business Units
  • Business Planning and Goal Setting
  • L&D Governance
  • L&D Measurement
  • Reporting Methodology

You may apply for a single award category or for multiple categories. For each award category to which you apply, you will be asked to provide documentation showing examples of current practices in your organization. You will also be asked to answer several supporting questions. Each application will be considered for each category where information is provided. You do not need to provide information for all categories to be considered for a specific award.

Applications are due by midnight EST on Friday, December 1, 2017.

The Center for Talent Reporting will recognize finalists in each of the five categories and in overall excellence. Finalists will be notified by Monday, January 15, 2018 of their selection and the winners will be announced at the CTR Annual Conference on Wednesday, February 21, 2018.

Have questions? Please email us at excellenceaward@centerfortalentreporting.org

Apply Now