CTR Exchange and Blog

Can you Justify your Learning and Development Projects?

By Jack J. Phillips Ph.D., Chairman, ROI Institute

Daily headlines in the business press focus on the economy. While it is booming in some areas, other areas are slowing, and economic uncertainty exists everywhere. During uncertainty, executives must take steps to make sure the organization can weather the storm—whenever or wherever it occurs.

One way executives meet this challenge is to ensure that expenditures represent investments and not just costs. If an expenditure is considered a cost, it will be frozen, reduced, or in some cases eliminated. If it is considered an investment that is producing a return, it will be protected and possibly even enhanced during difficult times. For example, many learning and development budgets are now being frozen or reduced, even with record profits. While this seems illogical, it happens. Now is the time to reflect on your budget and your programs. Can you withstand top executive scrutiny? Are you ready for ROI?

The most widely used evaluation system in the world is the ROI Methodology®, which measures impact and ROI for a few major programs. The ROI Certification® is the process to develop this valuable capability, providing participants with the skills needed to analyze return on investment in practical financial terms. The results are CEO- and CFO-friendly. Over 15,000 managers and professionals have participated in this certification since it was launched in 1995, underscoring the user-friendly nature of this system.

Don’t have the time or budget? Several approaches are available to reduce the amount of time and cost needed to develop this capability. For more information on ROI Certification, contact Brady Nelson at brady@roiinstitute.net.

ROI Institute is the global leader in measurement and evaluation including the use of return on investment (ROI) in non-traditional applications. This methodology has been used successfully by over 5,000 organizations and in over 70 countries.

Bridge the Gap from Training to Application with Predictive Learning Analytics

by Ken Phillips, CEO, Phillips Associates

In my previous blog post, I discussed the concept of scrap learning and how it is arguably the number one issue confronting the L&D profession today. I also provided a formula you could use to estimate the cost of scrap learning associated with your training programs.

In this post, I’ll share with you a revolutionary methodology I’ve been working on for the past several years called Predictive Learning Analytics™ (PLA). The method enables you to pinpoint the underlying causes of scrap learning associated with a training program. It consists of three phases and nine steps to provide you with the data you need to take targeted corrective actions to maximize training transfer (see figure below).

While the specific questions and formulae for the scores are proprietary, I hope you can apply the concepts in your organization using your own survey questions and your own weighting for the indexes. Even if you adopt a simpler process, the concepts will guide you and the article will give you an idea of what is possible.

Unlike other training transfer approaches, which focus mostly on the design and delivery of training, PLA offers a holistic approach to increasing training transfer. Built on a foundation of three research-based training transfer components and 12 research-based training transfer factors (see chart below), PLA targets the critical connection among all these elements. In short, PLA provides L&D professionals with a systematic, credible, and repeatable process for optimizing the value of corporate learning and development investments by measuring, monitoring, and managing the amount of scrap learning associated with those investments.

Training Transfer Components & Training Transfer Factors

Phase 1: Data Collection & Analysis

The objective of phase one, Data Collection & Analysis, is to pinpoint the underlying causes of scrap learning associated with a training program using predictive analytics and data. Five metrics are produced that provide L&D professionals with both direction and insight as to where corrective actions should be targeted to maximize training transfer. The five measures are:

  • Learner Application Index™ (LAI) scores
  • Manager Training Support Index™ (MTSI) scores
  • Training Transfer Component Index™ (TTCI) scores
  • A scrap learning percentage score
  • Obstacles preventing training transfer

Data for calculating the first three measures (LAI, MTSI, and TTCI scores) is collected from program participants immediately following a learning program using a survey. The survey consists of 12 questions based on the 12 training transfer factors mentioned earlier. Data for calculating the final two measures is collected from participants 30 days post-program using either a survey or focus groups and consists of the following three questions:

  1. What percent of the program material are you applying back on the job?
  2. How confident are you that your estimate is accurate?
  3. What obstacles prevented you from utilizing all that you learned if you’re not applying 100%?

Waiting 30 days post program is critical because it allows for the “forgetting curve” effect—the decline of memory retention over time—to take place and provides more accurate data.

LAI Scores

LAI scores predict which participants attending a training program are most likely to apply, at risk of not applying, or least likely to apply what they learned in the program back on the job. Participants who fall into the at-risk and least-likely-to-apply categories are prime candidates for follow-up and reinforcement activities. Examples include email reminders, micro-learning or review modules, and coaching or mentoring to try to move them into the most-likely-to-apply category.
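Since the actual survey questions, weightings, and cut-off scores are proprietary, here is a minimal, hypothetical Python sketch of how you might build a comparable index from your own 12-question transfer survey. The equal weights, 1-5 scale, and category thresholds are illustrative assumptions, not the PLA formula.

```python
# Hypothetical learner-application index; NOT the proprietary LAI formula.
# Assumes a 12-question post-program survey scored on a 1-5 scale and
# your own weights (summing to 1.0) across the 12 transfer factors.

WEIGHTS = [1 / 12] * 12  # illustrative: equal weighting of all 12 factors

def learner_index(responses, weights=WEIGHTS):
    """Return a 0-100 index from 1-5 survey responses."""
    if len(responses) != len(weights):
        raise ValueError("expected one response per transfer factor")
    weighted_avg = sum(w * r for w, r in zip(weights, responses))
    return (weighted_avg - 1) / 4 * 100  # rescale the 1-5 average to 0-100

def application_category(score):
    """Bucket a learner by risk of not applying (illustrative cut-offs)."""
    if score >= 75:
        return "most likely to apply"
    if score >= 50:
        return "at risk of not applying"
    return "least likely to apply"

score = learner_index([4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4])
print(round(score, 1), application_category(score))  # -> 72.9 at risk of not applying
```

Participants landing in the lower two buckets are the ones to target with the reinforcement activities described above.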

MTSI Scores

MTSI scores predict which managers of the program participants are likely to do a good or poor job of supporting the training they directed their employees to attend. Managers identified as likely to do a poor job of supporting the training are prime candidates for help and support in improving their approach. This help might take the form of one-on-one coaching; a job aid explaining what a manager should do before, during, and after sending an employee to training; or creating a training program which teaches managers how to conduct pre- and post-training discussions with employees.

TTCI Scores

TTCI scores identify which of the three training transfer components and the 12 training transfer factors affiliated with them are contributing the most and least to training transfer. Any components or factors identified as impeding or not contributing to training transfer are prime candidates for corrective action.

Scrap Learning Percentage

The scrap learning percentage score identifies the amount of scrap learning associated with a training program. It provides a baseline against which follow-up scrap learning scores can be compared to determine the effect targeted corrective actions had on increasing training transfer.
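While the exact PLA weighting is proprietary, a common way to do this arithmetic is to discount each participant's self-reported application percentage by their stated confidence (questions 1 and 2 above) and treat the remainder as scrap. The sketch below illustrates that approach with made-up responses; it is not the PLA formula.

```python
# Illustrative scrap-learning percentage from the 30-day survey questions.
# Assumes application and confidence are both reported on a 0-100% scale;
# the confidence discount is a common adjustment, not the PLA formula.

def scrap_percentage(responses):
    """responses: list of (application_pct, confidence_pct) per participant."""
    adjusted = [app * conf / 100 for app, conf in responses]  # discounted application
    avg_applied = sum(adjusted) / len(adjusted)
    return 100 - avg_applied  # delivered but not applied

# Example: three participants surveyed 30 days post-program
print(round(scrap_percentage([(70, 80), (50, 90), (40, 75)]), 1))  # -> 56.3
```

The first such score becomes the baseline; recomputing it after your corrective actions shows whether the scrap percentage is actually falling.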

Obstacles Preventing Training Transfer

The obstacles data identifies barriers participants encountered in the 30 days since attending the training program that prevented them from applying what they learned back on the job. Waiting 30 days to collect the data allows the full range of training transfer obstacles to emerge. For example, some are likely to occur almost immediately—I forgot the things I learned—while others are likely to occur later—I never had an opportunity to apply what I learned. Frequently mentioned obstacles are prime candidates for corrective actions to mitigate or eliminate them.

Phase 2: Solution Implementation

The objective of phase two, Solution Implementation, is to identify, implement, and monitor the effectiveness of corrective actions taken to mitigate or eliminate the underlying causes of scrap learning identified during phase one. Here is where the “rubber meets the road,” and you have an opportunity to demonstrate your creative problem-solving skills and your ability to manage a critical business issue to a successful conclusion. Following the implementation of the corrective actions, recalculate the amount of scrap learning associated with the training program. You can then compare the results to the baseline scrap learning percentage calculated during phase one.

Phase 3: Report Your Results

The objective of phase three, Report Your Results, is to share your results with senior executives. Using the data you collected during phases one and two, show that you know how to manage the scrap learning problem to a successful conclusion.

In Sum

Scrap learning has been around forever; what is different today is that there are now ways to measure, monitor, and manage it. One of those ways is Predictive Learning Analytics™. Alternatively, you might employ the concepts to build your own simpler model. Either way, we have an opportunity to reduce scrap learning.

If you would like more information about the Predictive Learning Analytics™ methodology, email me at: ken@phillipsassociates.com. I have an ebook that covers the method and a case study illustrating how a client used the process to improve the training transfer of a leadership development program.

Business Alignment: A Critical Success Factor for L&D Organizations

by Peggy Parskey, Associate Executive Director, Center for Talent Reporting


If you have attended a Learning and Development (L&D) industry conference within the past three years or listened to a panel of senior L&D leaders, it’s likely that someone has raised the topic of alignment. Numerous blogs from ATD, SHRM or even technology vendors confirm that alignment is clearly top of mind.

Given the attention paid to the topic of alignment, you might think that L&D has nailed down the principles, process, and outputs of business alignment. Unfortunately, you would be wrong. We haven’t nailed down this process by a long shot. And therein lies the problem: The L&D industry does not have a consistent definition of what alignment is, let alone how to achieve it.

Do a quick Google search on L&D business alignment and you will get thousands of articles on the topic. Click some of the links and you will find as many suggested approaches to alignment as search results. Some authors suggest that business alignment is about getting the right KPIs to support business goals. Others suggest that alignment is about doing a gap analysis between the current and future state to identify the needed training programs. Some bloggers recommend conducting interviews with business leaders as input to an L&D strategy. And a few organizations with whom we have worked view alignment as a simple mapping exercise: “We need to grow sales this year, we have a bunch of sales programs. Voila, Alignment!”

With all of these possible approaches, how should an L&D leader navigate the alignment process?

Principles of Effective Alignment

At least four principles should guide leaders who are grappling with achieving business alignment:

  1. Alignment isn’t a nice to have, it’s a must have
  2. Alignment requires engagement and commitment from both L&D and business leaders
  3. Alignment has both strategic and tactical components
  4. Alignment isn’t simply about what programs L&D offers but also how it offers them

1. Alignment Isn’t a Nice to Have

Strategic and tactical alignment is critical to ensure that L&D is investing its scarce resources in the right place at the right time on the right programs. When L&D doesn’t execute this process or doesn’t manage it effectively, the consequences can be significant. If L&D delivers the wrong programs to the wrong audiences, the organization will lack the needed capability to achieve its goals. Furthermore, other organizations now have to pick up the slack created by L&D. Non-L&D functions may develop shadow learning organizations or hire external resources to fill the gap left by L&D. If L&D wants to fulfill its purpose to develop the needed capability for the current and future workforce and improve L&D performance, then effective alignment is the critical success factor.

2. Alignment Requires Mutual Engagement

Discussions about L&D alignment often imply that the business has a peripheral role to play. L&D leaders get the strategic goals from above or they interview a few business leaders to understand priorities. At that point, the business seems to disappear from view.

If L&D leaders want to achieve effective alignment, the business can’t be a bystander. Senior business leaders must engage with L&D not only to communicate their priorities but also to ensure L&D has a role to play in achieving those priorities. Moreover, the business is an influential voice to ensure that L&D has the appropriate resources and support to deliver on its promise. Furthermore, the business will have the primary responsibility for reinforcement, without which there is likely to be little application to the job. If the business doesn’t understand its role, then it is incumbent on L&D leadership to spell it out and create shared accountability for success.

3. Alignment Has Both Strategic and Tactical Components

The alignment process must start at the strategic level. Business leaders establish strategic priorities, set goals and then allocate resources to achieve them. Based on these priorities, L&D then determines if and where it plays a role. If the organization plans to launch a new product line, L&D and the business must agree that L&D should own the process to train employees on these new products. If the organization wants to capture a new demographic for its products, the business and L&D may agree that L&D has a minimal role to play. Regardless of the objective, business and L&D leaders need to clarify whether L&D has a role as well as the importance and urgency of that role.

Having reached agreement on the strategic priorities, L&D must also ensure it aligns tactically. As an example, imagine that $500K is allocated to develop customer experience competencies. L&D practitioners then dive into the details to assess specific organizational needs. During this process, performance consultants discover that a primary root cause of underdeveloped customer experience capability is the lack of quality resources for call center personnel. At this point, L&D discovers that its requirements have changed. Training, while necessary, is not at the heart of the underperformance. In that case, L&D may simply turn over the requirement to the business to develop the needed content for call center employees and invest its scarce resources elsewhere.

The point of this example is that what appears to be a development need at the strategic level may not translate into a development requirement when practitioners study the requirements more fully. It is incumbent on L&D practitioners to adjust their approach to ensure they stay aligned.

4. Alignment Isn’t Just About What L&D Offers

Alignment isn’t just about what programs L&D offers, but increasingly, how it offers them. In a fast-paced world, instructor-led training (ILT) has a long development cycle, is expensive to produce, and requires a large time investment for learners. Yet, according to ATD’s 2018 State of the Industry Report, ILT methods still comprise 67% of all training hours with self-paced at 29% and mobile learning a mere 2%.

Learners need content at their fingertips. They need a rich reservoir of material in different forms (white papers, how-to guides, videos, checklists, case studies) that are easy to find and easy to consume. Content must be high quality and useful on the job.

Alignment requires not simply that L&D builds capability and improves performance, but also that it does so in the most efficient and effective manner appropriate to the need.

Conclusion

Alignment is critical to ensure the L&D function meets the needs of the business by building the necessary skills to run the business and achieve its strategic objectives. This post focused on the key principles at the heart of alignment. In subsequent posts we will explore the importance of treating alignment as a continuous process and building skills to manage the end-to-end process effectively.

I recommend you learn more about alignment from respected industry leaders who can provide guidance on how to achieve meaningful alignment with the business. Below are four resources you should check out:

The Business of Learning by Dave Vance. See Chapter 4 which discusses strategic alignment in depth.

Attend the CTR Measurement and Reporting Workshop, which addresses the importance of alignment and how to engage business partners in the process.

Read the recently published IDC PlanScape document on the importance of L&D alignment to maximize the impact of training investment.

Read the upcoming second edition of “Learning Analytics,” which will be published in February 2020 and includes a chapter devoted to strategic and tactical alignment. (The current edition may be found here; look for the second edition at www.explorance.com.)

We’d like to hear from you. If you’re having business alignment challenges, don’t hesitate to email Dave Vance or Peggy Parskey.

The Greatest Issue Facing L&D Today

Scrap Learning

by Ken Phillips

What is arguably the top issue facing the L&D profession today?

The answer is scrap learning: the gap between training that is delivered and training that is actually applied back on the job. It’s the flip side of training transfer and is a critical issue for both the L&D profession and the organizations L&D supports because it wastes money and time—two precious organizational resources.

Now, you might be wondering, “How big is the problem?”

Two empirical studies, one by KnowledgeAdvisors in 2014 and one by Rob Brinkerhoff and Timothy Mooney in 2008, found scrap learning to be 45 percent and 85 percent, respectively, in the average organization. To add further credibility to these percentages, I’ve conducted three scrap learning studies over the past few years and found the scrap learning percentages associated with three different training programs in three separate organizations to be 64 percent, 48 percent, and 54 percent. Averaged together, these five research studies yield a scrap learning figure of approximately 60 percent for the average organization.

To further highlight the magnitude of the scrap learning problem, consider its effect on the amount of wasted organizational dollars and time. According to the 2018 ATD State of the Industry research report, the average per-employee training expenditure in 2017 was $1,296 and the average number of training hours consumed per employee was 34.1. Using the KnowledgeAdvisors and Brinkerhoff scrap learning percentages mentioned above, you can see in the table below just how much scrap learning costs the average organization in wasted dollars and time.

Cost of Scrap Learning in Wasted Dollars & Time

  • Average per-employee training expenditure: $1,296 × 45% scrap learning = $583 wasted
  • Average per-employee training expenditure: $1,296 × 85% scrap learning = $1,102 wasted
  • Average training hours consumed per employee: 34.1 × 45% scrap learning = 15 wasted hours
  • Average training hours consumed per employee: 34.1 × 85% scrap learning = 29 wasted hours

Taking all of this data into account reminds me of James Lovell’s famous quote during his Apollo 13 mission to the moon, when an oxygen tank aboard the space capsule exploded, putting both the flight and crew in great peril: “Houston, we have a problem!”

If you would like to take a crack at estimating the cost of scrap learning associated with one of your training programs, you can use the Estimating the Cost of Scrap Learning Formula below. To gain the most useful insight, you should make every effort to collect the most accurate data possible for each of the input variables. Also, when selecting an estimated percentage of scrap learning associated with the program (variable 4 in the formula), you should get input from several people familiar with the program, such as other L&D colleagues, participants who previously attended the program, or perhaps even the managers of program participants, and then compute an average of their estimates. Gaining the input of others will increase both the accuracy and credibility of the estimate and remove any concerns that the scrap learning percentage is merely your opinion.

Estimating the Cost of Scrap Learning Formula

Wasted Participant Dollars

The length of a learning program in hours _____
X the number of programs delivered over 12 months _____
X the average number of participants attending one program _____
X the estimated percent of scrap learning (45-85%) associated with the program _____
= the cost of scrap learning in wasted time (hours) _____
X the average hourly participant salary + benefits _____
= the cost of wasted participant dollars _____ (A)

Wasted L&D Department Dollars

Administrative expenditures (e.g., materials, travel, facility, facilitator, delivery platform, food, etc.) for one program _____
X the number of programs delivered over a 12-month period _____
X the estimated percent of scrap learning (45-85%) associated with the program _____
= the cost of wasted L&D department dollars _____ (B)

Total Cost of Scrap Learning

Cost of wasted participant dollars (A) _____
+ cost of wasted L&D department dollars (B) _____
= total cost of scrap learning _____
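If you would rather script the worksheet than fill in blanks, here is a short Python sketch of the same calculation. The inputs in the example are placeholders to be replaced with your own program data.

```python
# Sketch of the scrap-learning cost worksheet above (placeholder inputs).

def scrap_learning_cost(hours_per_program, programs_per_year, avg_participants,
                        scrap_pct, hourly_salary_benefits, admin_cost_per_program):
    # (A) Wasted participant dollars
    wasted_hours = (hours_per_program * programs_per_year
                    * avg_participants * scrap_pct)
    wasted_participant_dollars = wasted_hours * hourly_salary_benefits
    # (B) Wasted L&D department dollars
    wasted_dept_dollars = admin_cost_per_program * programs_per_year * scrap_pct
    return wasted_participant_dollars + wasted_dept_dollars

# Example: an 8-hour program delivered 10 times a year to 20 participants,
# 60% estimated scrap, $50/hour loaded salary, $5,000 admin cost per delivery
total = scrap_learning_cost(8, 10, 20, 0.60, 50, 5_000)
print(f"Total cost of scrap learning: ${total:,.0f}")  # -> $78,000
```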

With this information in hand, you are now ready to pinpoint the underlying causes of scrap learning and take targeted corrective actions to mitigate or eliminate these causes and maximize training transfer. How to do this will be part two of this blog article.

Change the Conversation about Funding Measurement

The L&D profession continues to struggle to get budget and attention for more robust measurement strategies. Learning professionals, particularly those with measurement or analytics responsibilities, appreciate the value of measurement and know that further budget would be well spent but often cannot convince senior leaders in learning to make the investment, let alone senior leaders outside learning. So, how do we make the case for more resources?

My advice is that we take a stealth approach. While some senior leaders will readily see the reason for more robust measurement, in my experience most will not. Senior leaders always have many more requests for budget and staff than they can grant, and typically measurement by itself doesn’t rise to the top of the priority list. And, since you’re already providing learning, measurement seems like an add-on or a ‘nice to have,’ but not something that is essential. This is especially true if most of the measurement currently being done is simply to report activity or historical results.

The alternative is to change the conversation. Instead of talking about measurement, talk about what will be required to deliver the planned results from the learning initiative. In other words—talk about management rather than measurement. Of course, you cannot manage without measures, so management will be the Trojan horse that brings measurement in. This approach makes measurement the means to the end. It also focuses on the most important purpose or use of measurement, which is to manage.

How would this work? Start with programs aligned to your organization’s key goals or needs. For these initiatives you need to partner closely with the goal owner, like the head of sales or manufacturing. Both parties need to agree on program specifics like learning objectives, target audience, and completion date. Most importantly, they need to agree on mutual expectations for the impact of the learning, which may be the isolated impact of the Phillips approach or the more subjective expectation of the Kirkpatrick approach.

In either case, both parties will need to agree on all the relevant measures and their targets that are required to deliver the planned impact. These measures would include efficiency measures like number of participants, completion rate, completion date, and cost as well as effectiveness measures like participant reaction, learning, and application rate. These measures will have to be managed to plan through the development, delivery, reinforcement, and impact stages in order to deliver the ultimate measure, which is impact. (For example, the program will have to be completed by all participants by April 30 with a 90 percent initial pass rate and an 85 percent application rate at 90 days.)
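As a concrete illustration of managing these measures to plan, here is a minimal Python sketch that compares year-to-date actuals against the agreed targets and flags anything off plan. The measure names and target values are invented for the example, not a prescribed set.

```python
# Hypothetical plan-vs-actual check for an initiative's agreed measures.
# Measure names and targets below are illustrative assumptions.

targets = {"completion rate": 1.00, "initial pass rate": 0.90,
           "90-day application rate": 0.85}
actuals = {"completion rate": 0.96, "initial pass rate": 0.92,
           "90-day application rate": 0.78}

for measure, target in targets.items():
    actual = actuals[measure]
    status = "on plan" if actual >= target else "OFF PLAN: corrective action needed"
    print(f"{measure}: target {target:.0%}, actual {actual:.0%} -> {status}")
```

Run monthly, a simple report like this is what turns the measures into a management tool rather than a year-end scorecard.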

Notice that you now have a robust measurement strategy which is an integral part of the management for this initiative. In fact, you will not be able to meet expectations without it and consequently you should refuse to do the learning if a senior leader suggests stripping out the measurement. However, since measurement was not presented separately (no budget or staff identified for it, simply part of the plan), the whole issue of stripping it out is not likely to come up. You can follow the same approach for other learning initiatives including those not directly aligned to the goals of the organization. In every case, you should identify some measure of success and should identify the relevant measures and the targets required to deliver the success.

This is the stealth approach—treat measurement as a means to the end—embed measurement in all key initiatives by getting agreement with goal owners and senior leaders upfront on the planned outcome and on targets for all the relevant efficiency and effectiveness measures. In addition to measures to ensure expectations are met, you may need to employ measurement and analytics upfront to better understand the issue, optimum learning solution, or modality. Again, measurement will be integrated into the management of the initiative. It will not be a stand-alone item and will not be presented as such.

You may already have a dedicated measurement group to serve as an integral support staff; however, every program manager needs to work with goal owners and leaders to agree on a measure of success and the relevant efficiency and effectiveness measures and their targets. They need to be comfortable with measurement. If you want to expand your measurement efforts, look for ways to do this as part of key programs and initiatives (the end) rather than just as expanding your measurement group (the means).

In conclusion, try changing the conversation from measurement (which senior leaders generally do not understand or value) to management. Don’t ask senior leaders to fund measurement or to help you with your measurement strategy; they really don’t care. Instead, engage them to manage their program to deliver planned results, which they do care about and which, by necessity, will include measures. Try this approach and see how it works for you.


2019 CTR Conference a Great Success

We are just weeks back from our Sixth Annual CTR Conference in Dallas. We weren’t sure it could top last year’s, but I think it did. We had 117 participants, and they were very enthusiastic and engaged. Lots of great sessions and discussions. And of course some delicious food in Grapevine!

Patti Phillips kicked it off with a talk on ROI, emphasizing how important impact and ROI are and how they can be calculated. She included some very interesting history and the story of how she met Jack. When we came back together later in the morning after breakouts, Peggy facilitated a panel on scrap reporting and measurement literacy, and then Brenda Sugrue, 2018 CLO of the Year, shared some thoughts from her career. Six more breakout sessions followed in the afternoon, leading up to the awards ceremony at the end of the day.

CTR recognized Jack and Patti Phillips as the first recipients of the Distinguished Contributor Award, acknowledging their truly significant contributions to our field including the 1983 Handbook of Training Evaluation, the concepts of isolated impact and ROI, and the successful implementation of ROI in over 70 countries. Then we recognized the winners of the TDRp Excellence Awards: SunTrust Bank for L&D Governance, and Texas Health Resources and Heartland Dental for L&D Measurement. Jack and Patti recognized three winners for ROI studies as well.

Day two began with Steven Maxwell of the Human Capital Management Institute sharing his perspective on what CEOs and CFOs want from learning, as well as his thoughts on the newly released Human Capital Reporting standards, both of which got people thinking. After six more breakout sessions, Justin Taylor, General Manager and EVP of MTM, led a panel on the impact of learning to wrap up the conference. With a focus on just the measurement, reporting, and management of L&D, everyone could concentrate on the issues of most concern to them and share ideas and questions with each other in detail. If you didn’t make this event, you missed hearing from industry thought leaders and leading practitioners.

Next year we are moving the conference to the fall, so mark your calendars for October 28-29, 2020. We will continue to begin CTR Week with a pre-conference workshop on measurement and reporting (October 26-27) and end the week with post-conference workshops. Next year will be the 10th anniversary of TDRp, and we hope you can join us for the celebration.

Human Capital Reporting Standards

The International Organization for Standardization (ISO) released its first standard for human capital reporting in December. It is titled “Human Resource Management – Guidelines for Internal and External Human Capital Reporting.” The document is 35 pages long with guidance for internal and external reporting by both large and small organizations. The standard is the culmination of a great deal of work over the last three years by an ISO working group of experts from numerous countries, led by Stefanie Becker from Germany.

Although a number of measures still need to be defined in greater detail, the standard is a major achievement and an important first step toward bringing standardization of measures and reporting to the HR field. Accountants have long had definitions of measures and standard reports as well as guidance about how the measures should be used. This has served their profession well, and a similar discipline would serve us well. Imagine that in the not-too-distant future everyone in our field might be using the same language for measures and would share a common understanding of how to use the measures in reports to better manage their operations. Imagine the rich benchmarking and best practice sharing that this standardization and reporting would allow. And imagine the potential for university students in our profession to graduate with this common knowledge just as accountants do today.

Of course, there are compelling business reasons to move in this direction as well. Today about 84% of the value of S&P 500 firms is attributable to intangible assets, which have little visibility on the balance sheet. Just forty years ago the percentages were reversed, with 17% of the value in intangibles and 83% in tangible assets. So, in the “old days” an investor could look at a balance sheet and get a good feel for the underlying value of a company; namely, its physical assets like buildings, equipment, land, and investments. Today, human capital drives the value of the intangible assets which make up most of the value of many companies. Human capital, however, does not appear on the balance sheet and appears on the income statement only as an expense to be minimized. This is why it is so important to provide visibility and transparency to human capital. Investors, customers and employees need to know more about the human capital in an organization.

The standards are completely voluntary although the hope is that leading organizations will adopt them to provide this greater transparency for their investors and employees. The working group recognized that measurement and reporting can be burdensome for small organizations so only the most important and basic (and easy to calculate) measures are recommended for reporting. The working group also recognized that some measures, while important to properly manage human capital internally, may not be appropriate for public sharing and thus are recommended for internal use only.

There are 54 measures in all with 23 recommended for external reporting by large organizations. Many large organizations are already measuring most of these but not reporting them publicly. It is hoped they will begin using the ISO definitions as they become available and that they will begin to publicly report some of the measures. In some countries the government may mandate adoption of the ISO standard but that is not the case for the United States where organizations will be free to decide which, if any, of the recommended measures they report internally or externally.

Of the 23 measures recommended for public reporting by large organizations, there are five for recruitment and turnover, five for diversity, three for compliance, and three for health, safety and well-being. Productivity and workforce availability each have two, and cost, leadership and skills & capability each have one. The one for L&D (skills & capability) is total training cost which is recommended for external reporting for both large and small organizations. While not an ideal measure since it focuses on inputs rather than outcomes, it is at least a beginning and gives some indication of how much an organization values its employees.

Four other L&D measures are recommended for internal reporting by large organizations: percentage of employees who participate in training, average formal training hours per employee, percentage of employees who participated in training by category, and workforce competency rate. The first two are also recommended for small organizations.

I recommend that everyone in our profession become familiar with the standard and, if your organization is not already reporting these measures internally today, that you consider them for your measurement and reporting strategy going forward. In the future, investors and employees will begin asking for these measures, and you really should be reporting on them internally to best manage your human capital. You can read more about them here in Workforce magazine, and the standard is available for purchase from ISO or ANSI.

Making an Impact Through Learning

I hope you can join us in Dallas for our Annual Conference in five weeks. It will definitely be worth your time and investment. You will find a community of like-minded learning professionals who are passionate about the measurement, reporting and management of L&D. Leading industry thought leaders and practitioners will be there to share their thoughts and experiences with you. And, of course, you will learn a lot about what works and what doesn’t from fellow participants.

Our theme for this year’s conference is Making an Impact through Learning. We have selected speakers and topics to ensure that you will have the latest thinking from the best people in our profession on this always important topic. Like last year, we have scheduled plenty of time for networking so you can get your questions answered and learn from your colleagues.

Our keynoters this year are Patti Phillips and Jeff Higgins. I am sure you are familiar with Patti and Jack Phillips and their work on isolating the impact of learning and calculating ROI. (They will each also be conducting a breakout session.) You may not know Jeff, who has a very interesting background as both an accountant and CFO and as a senior HR executive, so he brings a unique perspective to our field. He has also been a contributor to the International Organization for Standardization’s (ISO) work on Human Capital Reporting standards, which are just now being implemented around the world. You will learn a lot just from these two speakers!

You will also have the opportunity to hear from the current CLO of the Year, Brenda Sugrue, who will share her thoughts on her personal journey and on EY’s transformation. Other industry thought leaders include Jack Phillips on ROI, Ken Phillips on measuring application, Roy Pollock on ensuring results, John Mattox on the future of measurement, Laurent Balague on the best use of surveys, Peggy Parskey, Adri Moralis, and Justin Taylor on reducing scrap learning through measurement and reporting, Jeff Carpenter on data analytics and adaptive learning, Cushing Anderson on adding value, and Kevin Yates on impact measurement.

In addition, we have some great practitioners lined up to share with you including Gary Whitney (formerly with IHG), Toni DeTuncq and Teddy Lynch (NSA), Laura Riskus (Caveo), Mark Lewis (Andeavor), and Dean Kothia (HP).

Last, please do consider joining us for our pre- and post-conference workshops, which round out CTR Week. We offer a two-day workshop on measurement and reporting and one on the Six Disciplines of Breakthrough Learning. And after the conference, we offer four half-day workshops. So, come early or stay after the conference to invest in your own development.

If you have come to one of our conferences or workshops in the past, welcome back. If this will be your first, we look forward to meeting you. Either way, you are an important part of this community and everyone will benefit from your questions and contributions. As you know, the Center for Talent Reporting is a nonprofit and our sole mission is to advance the profession. Each year’s CTR Week is an opportunity to come together, learn from each other, and grow. You will leave energized! Find out more about the conference and workshops at ctrconference.com. Hope to see you in Dallas!

A Look Back (and Ahead) on the Measurement and Management of L&D

The end of December is always a good time of the year to take a look back and ahead of the L&D profession. Looking back, I think we are blessed to have some great thought leaders who have provided a terrific foundation for both the measurement and management of L&D.

On the measurement side, I am particularly thinking of Don Kirkpatrick, who gave us the Four Levels, and Jack Phillips, who gave us isolated impact for Level 4 and ROI for Level 5. I am also thinking about all of the work done by the Association for Talent Development (ATD) to promote measurement and to benchmark key measures through their annual industry survey.

On the management side, I’m grateful again for the contributions from Don Kirkpatrick and Jack Phillips and now, Jim and Wendy Kirkpatrick and Patti Phillips; as well as for their guidance in how to manage—particularly with respect to partnering closely with goal owners and focusing on what it takes to achieve Level 3 application. In addition, I appreciate the work by Roy Pollock and his associates in giving us The Six Disciplines of Breakthrough Learning, which is a must-read book for anyone focusing on the measurement and management of L&D.

And, there are many more—like Ken Phillips and John Mattox and others too. And beyond L&D, the HR profession in general has benefited tremendously from thought leaders like Jac Fitz-enz, Jeff Higgins, John Boudreau, and Wayne Cascio. Like Kirkpatrick and Phillips did for L&D, these thought leaders basically invented measurement for the rest of HR.

We are very fortunate to have this strong foundation built over the last 30+ years. Looking ahead, the question is, where do we go from here? As a profession, we now have well over 170 measures for L&D and over 700 for HR in general. I don’t think we need more measures. What we do need, however, is a better way to utilize some of the measures we have—especially Levels 3 (application), 4 (results or impact), and 5 (ROI) for L&D. Level 3 is the starting point and should be measured for all key programs. Research by Phillips clearly indicates that CEOs want to see impact and ROI more than any other measures, which will become increasingly urgent as the next recession draws closer (pencil in 2020 or 2021 for planning purposes). While some progress has been made over the last 10 years, it is not enough, so this remains a high priority for the profession moving ahead.

Another priority for us is to do a much better job managing learning. By this I mean running learning with business discipline, which starts by partnering closely with goal owners and agreeing on specific, measurable goals or targets for the learning (assuming of course that learning has a constructive role to play in achieving the business goal). And, once specific plans have been made, to execute those plans with the same discipline your colleagues in other departments use. This requires monthly reports and comparing results to plan so that corrective action can be taken as soon as possible, to get back on plan and deliver promised results.

Managing learning this way is hard for many and some simply do not want accountability. But, it is an area where the payoff of better performance and greater value delivered per dollar is huge. In fact, I would contend that it has a bigger payoff than even measuring at Levels 3-5.

To summarize, I think there is an opportunity to structure our departments differently to enable better management overall. To develop a close partnership with goal owners (like the head of sales, for example) and to really run learning like a business, there needs to be one L&D professional in charge of the program(s) identified to meet the business need. This person would:

  1. Meet with the goal owner initially and oversee the needs analysis
  2. Get agreement up-front with the goal owner on specific measurable plans for the learning program as well as roles and responsibilities for both parties
  3. Supervise the design, development, and delivery of the program
  4. Meet regularly with the goal owner to manage the successful deployment of the learning, including reinforcement by leaders in the goal owner’s organization

I know this may be a challenge in some organizations, but I think it is indispensable for a successful partnership and for accountability within L&D.

I truly enjoy being a part of this great profession and the opportunity to work with all of you. We have come a long way in a relatively short period of time, and I believe the future is very bright if we continue to build on the foundation that has been laid to take learning measurement, reporting, and management to the next level.

I look forward to what we can accomplish working together! Happy New Year and Best Wishes for the coming year!

Impact and ROI…The Discussion Continues

This month, I continue the discussion on ROI (Level 5) and impact (Level 4). After publishing last month’s article, some readers expressed that ROI was unnecessary, too confusing, or just too complicated. I argued that it is a simple, straightforward calculation once you have isolated the impact of learning, which can always be done using the industry-standard participant estimation methodology.

So, let’s take a look at why impact and ROI are so important. L&D exists for many reasons, but I would suggest the most important is to help our organizations accomplish their business goals—like higher sales, increased productivity, reduced costs, and greater customer or patient satisfaction. Of course, L&D is positioned to help achieve HR goals as well, like higher employee engagement, which in turn contributes indirectly to realizing all the business goals. Most L&D departments are also responsible for leadership development, which is sometimes itself a business goal.

These are all very important, high-level business or HR goals. Organizations invest considerable time and money in learning to achieve these goals, so it follows that CEOs would want to know what they are getting for their investment. What difference does learning make? Is it worth doing? Jack Phillips asked CEOs what they most wanted to see from L&D and over 90% said impact and over 80% said ROI. Less than 10% were receiving any impact data at the time and only slightly more were getting any ROI information. The best practice here is to carefully align your learning to these goals, plan it in partnership with the goal owner, set targets for impact and ROI, and then jointly manage the program to ensure the targets are achieved, including the level of application required to deliver the planned impact and ROI.

From both a planning and a follow-up point of view, some measure of isolated impact and ROI are the only ways to ensure learning (not L&D, but learning, which must be co-managed by the goal owner and L&D in partnership) delivered on its promise. The CEO, head of L&D, program manager, and goal owner will want to know how much difference learning made and what opportunities can be identified for improvement or optimization. The upfront agreement on impact will drive a discussion on what is possible and what level of effort by all parties will be required. The upfront agreement on ROI is nothing other than the business case for the investment. Whether it is ROI or net benefits, all parties will want to be sure that benefits are expected to exceed the costs. If not, why would you proceed? Afterwards, everyone will be interested in what impact and ROI were achieved. Actuals will never exactly match plan, and that is okay, but hopefully they will be close to plan. If not, there is a learning opportunity since either the plan was off or execution of the plan was off.

So, impact and ROI are important for high-level goals. How about lower-level goals and day-to-day business needs? Here, I think the answer depends. Most organizations have limited resources, so those resources should be focused first on the most important organizational goals. Plus, lower-level goals may not have easily identifiable outcomes, and thus it becomes harder to identify impact. Some learning, like compliance training, onboarding, and basic skills training, is simply essential, and there is no need for a business case or ROI. For this essential learning the goal should be to conduct it as effectively and efficiently as possible. So, employ Levels 1-3 to measure effectiveness and focus on minimizing the time and cost to be efficient.

In conclusion, impact and ROI are important for learning professionals, organizational goal owners, and senior leaders like the CEO. These measures are important upfront to help plan the level of effort required and to reach agreement with the goal owner on roles and responsibilities. The higher the desired impact and ROI, the greater the effort will have to be. Once the planning is completed and the program is underway, the targets for impact and ROI can be compared to year-to-date results to determine if mid-course corrections are necessary. In other words, this is not a set-it-and-forget-it exercise. Programs must be managed monthly to achieve the planned impact and ROI. Last, at the end of the program, once the impact and ROI have been estimated, there will be an opportunity to learn from any variances to plan and improve.

ROI: Still Evoking Confusion and Controversy

The Return on Investment (ROI) concept for learning has been around since Jack Phillips introduced it about forty years ago. You might think by now that at least the confusion over its use would have diminished even if the controversy had not. Not even close. I continue to see the concept abused and misused on at least a monthly basis.

Here is an example from a recent blog: “Anyone who is involved in the business world is familiar with the concept of ROI. It’s punchy, with its relatively simple calculation, and can make or break a purchasing decision. But when it comes to learning initiatives, gathering the necessary data to calculate ROI is difficult, to put it mildly.” The author goes on to say that learning initiatives implemented as an integral part of business strategy can be measured by the success of that strategy.

There are several issues with the blog. First, the author appears to be confusing ROI used in the business world with ROI used in the learning world. Typically, a financial ROI is calculated to make the business case for an investment, like a new product line or facility. The definition of this financial ROI is not the same as the learning ROI. The numerator of the financial ROI is the net present value (NPV) of the increase in profit due to the investment. The denominator is the cost of the asset required for the project, such as the cost of a new facility. This will be capitalized as an asset on the balance sheet and depreciated each year.

Contrast this with the learning ROI, which has (usually) one year of contribution to profit in the numerator (so no need for NPV) and no asset whatever in the denominator. Instead, the denominator is the cost of the learning initiative, which is an expense item from the income statement. So, there are two different definitions, and the calculation for the financial ROI is actually more complicated than that for the learning ROI. Interestingly, it was exactly this difference in formulas which led my colleagues in accounting at Caterpillar to tell me that I could not keep referring to learning ROI as “ROI” since it was confusing our leaders. So, I renamed it Return on Learning, or ROL, in 2002. The takeaway here is to remember that the two are not the same and to let those in your organization know that learning ROI is calculated differently.
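To make the difference concrete, here is a minimal sketch of the two calculations side by side. All of the numbers, along with the three-year horizon and 10% discount rate, are invented for illustration.

```python
# Financial ROI vs. learning ROI: two different formulas (illustrative numbers).

def npv(cash_flows, rate):
    """Net present value of a series of yearly incremental profits."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Financial ROI: NPV of the multi-year profit increase over the capitalized asset cost
asset_cost = 1_000_000                         # e.g., a new facility (balance sheet)
profit_increase = [300_000, 400_000, 500_000]  # incremental profit, years 1-3
financial_roi = npv(profit_increase, 0.10) / asset_cost

# Learning ROI (ROL): one year of net benefit over the program's expensed cost
gross_benefit = 250_000   # one year of isolated impact, in dollars
program_cost = 100_000    # expense from the income statement, not an asset
learning_roi = (gross_benefit - program_cost) / program_cost

print(f"financial ROI: {financial_roi:.0%}, learning ROI: {learning_roi:.0%}")
```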

The next point by the author is that learning ROI is difficult to calculate. The ROI calculation itself is very easy: simply, the net benefit of the initiative divided by the total cost. Net benefit is the gross benefit minus total cost. Generally, total cost is easy to calculate, but the sticky part is the gross benefit which is the increase in profit before subtracting the cost of the initiative. The gross benefit, in turn, depends on the isolated impact of the learning initiative, like a 2% increase in sales due just to the learning. Likely, this is what the author had in mind when complaining about the difficulty of calculating ROI.

However, isolation need not be that difficult. The learning team can work with senior leaders to agree on a reasonable impact for planning purposes. And, once the program is completed, there are several methods available to estimate actual impact which do not require special training or hiring a consultant. Often, it will suffice to simply ask the participants and their supervisors what they believe the impact was and then reduce the estimate somewhat to allow for the inherent error in any estimate. While not precise enough for an academic journal article, this is usually all we need in the business world to make appropriate decisions (like go/no-go after a pilot) or identify opportunities for improvement. It will also be close enough to determine if the investment in learning more than covered the total costs.
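A simple sketch of that estimation approach, with hypothetical numbers: ask participants and supervisors what share of the observed improvement they attribute to the training, discount each estimate by the estimator's confidence, and use the average as the isolated impact.

```python
# Illustrative participant-estimation isolation (all numbers hypothetical).

sales_increase = 200_000                     # total observed improvement, in dollars
attribution_estimates = [0.40, 0.55, 0.35]   # share each estimator credits to training
confidence_levels = [0.80, 0.70, 0.90]       # each estimator's confidence in that share

# Discount each attribution by the estimator's confidence, then average
adjusted = [a * c for a, c in zip(attribution_estimates, confidence_levels)]
isolated_share = sum(adjusted) / len(adjusted)

isolated_impact = sales_increase * isolated_share
program_cost = 40_000
roi = (isolated_impact - program_cost) / program_cost
print(f"isolated impact: ${isolated_impact:,.0f}, ROI: {roi:.0%}")  # -> $68,000, 70%
```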

Last, the author suggests that if the learning is integrated into the business strategy, its success can be measured by the success of the business goal. I strongly agree that we should always start with the end in mind (the business goal) and design the learning so it directly supports the business goal. Further, we need to work in close partnership with the business goal owner to ensure that the learning is provided to the right target audience at the right time and then reinforced to ensure the planned impact is realized. While this does provide a compelling chain of evidence that learning probably had the intended impact, it does not tell us how much impact the learning had or whether the investment in learning was worthwhile. Instead of a measurement, then, we are simply left with a “mission accomplished” statement.

The question remains how much sales would have risen without any training. If sales would have increased by 10 percent without training, the training clearly was not worth doing. How about if sales would have increased 9% without training? Was it worth doing? We still need to isolate the impact of training and calculate net benefit or ROI to answer this ultimate question of value – not to “prove” the value of the training but to enable us to make better decisions about programs going forward and avoid investing when the return does not justify the cost.

So, the debate goes on. A friend asked me recently if I still believe in ROI. Yes, I do, but we need to use it wisely. It should not be used defensively to try to prove the value of an initiative or an entire department. In other words, it is not an exercise to deploy at the end of a program or fiscal year when you are under fire. Rather it should be used to help plan programs to ensure they deliver value and to identify opportunities for improvement going forward. It should be used to help decide which programs to continue and to identify ways to optimize programs. ROI will never take the place of careful planning, building relationships with goal owners or smart execution. You will always have to get the basics right first. Once you have done that, then ROI can be a powerful tool to make you a better manager.

The Portfolio Approach to Managing Learning

I just listened to a great webinar by Cristina Hall, Director of Product Strategy for Metrics That Matter (MTM), an organization that specializes in helping companies gather, analyze, and report data on learning. MTM has developed a very helpful approach for thinking about and managing the mix of courses an organization offers.

At CTR we spend most of our time focusing on the key programs in support of key organization goals as well as the key initiatives of the L&D department head to improve internal efficiency and effectiveness. Consequently, we advocate the use of program reports by the program manager and the goal owner which contain detailed data on the programs in support of key company goals like increasing sales. We also recommend summary reports for sharing with the CEO and other senior leaders to show alignment of learning to their goals and the expected impact from that learning. Last, we have operations reports for the L&D department head to use in managing improvement of a few select efficiency and effectiveness measures. So, the focus is primarily on targeted programs which are actively managed on a monthly basis to deliver planned results.

But what about all of the other courses an organization offers? Some companies have hundreds of courses in their learning management system (LMS). Most are not strategically aligned to the top goals of the company and are not actively managed, although they may be reviewed periodically. While not strategic, they can still be very important, and organizations want to be sure they offer the right mix of courses and that the offered courses add value. So, the question is, “How should all of these programs be managed?” This is where the MTM approach offers a very valuable contribution to the field.

MTM recommends assigning each course to one of four portfolios or categories. The four portfolios are: Drive Growth, Operational Efficiency, Mitigate Risk, and Foundational Skills. The portfolios reflect the reason for the learning and force the L&D group to answer the question of why the course is being offered. Is it to improve revenue? If so, it falls in the Drive Growth portfolio. Is it to reduce cost or improve productivity? If so, it falls in the Operational Efficiency portfolio. Is it to ensure compliance with regulations or standards or reduce the organization’s exposure to potential harm? If so, it falls in the Mitigate Risk portfolio. All other courses are assigned to the Foundational Skills category which would include all types of basic and advanced work skills, from very specific job training to communications skills and team building.

The beauty of this approach is in its simplicity. We could imagine more portfolios, and we could argue some courses may fall in more than one portfolio, but assigning each course to just one of four portfolios forces the question of what the course is primarily designed to accomplish. Once all the courses have been assigned, imagine a grid with four quadrants—one for each portfolio. Now populate the box for each quadrant with measures that will help you manage your portfolios. First, show the percentage of courses and hours as well as the amount of L&D budget dedicated to the portfolio to determine if your mix is right. For example, it may be that 5% of the L&D budget is being spent on driving growth, 35% on improving efficiencies, 10% on risk, and 50% on foundational skills. This may be the right mix, especially for an L&D department with responsibility for a lot of basic skills training. On the other hand, a CEO might look at the mix and be surprised that more effort isn’t being allocated to driving growth and improving efficiency. Just sharing this data can lead to a great discussion about the purpose of L&D and on priorities.

Measures could be added for each portfolio to show the number of participants, level 1 reaction, level 3 application, and some indicators of level 4 impact or outcome. The data could also be color-coded to show a comparison to last year or, better yet, to plan (or target) for this year. This portfolio approach can also be incorporated directly into TDRp’s operations report, where the four portfolios serve as headings to organize the measures.

In conclusion, I strongly recommend an approach like MTM’s to better understand the existing mix of courses and to ensure alignment with the priorities of the CEO in terms of budget and staffing. I also believe the portfolio approach will be helpful in monitoring the courses throughout the year for replacement or revision.

Great Advice From the Kirkpatricks

By Dave Vance

In their July 25 newsletter, the Kirkpatricks ask if you are a Training Fossil. I love the imagery and their message. Here is what they say:

“The role of training professionals has changed as technology has advanced. Fifteen years ago, it may have been enough to design or deliver a great training class. If that’s all you do these days, however, you are in danger of being replaced by a smartphone app or other technology that can do the same things quickly and for free, or nearly free.

“In a similar way, training professionals need to design and build “roads and bridges” into the job environment before, during and after training, and then monitor them regularly. Instructional designers need to do more than design formal classroom or virtual instruction. Trainers need to do more than deliver it.”

Their concern is a good one and I think in the near future we will see aggregators who can provide generic content on any platform so efficiently that internally developed content will not be able to compete. So, if a training department is focused solely on providing generic content, often in an effort to boost employee engagement, it risks becoming extinct just like the dinosaurs.

Unlike the dinosaurs, however, we can avoid extinction by making sure we are adding true value to our organizations. As Jim and Wendy explain, this goes far beyond providing generic content. It starts with understanding your organization’s goals and priorities and includes establishing a very close, strategic relationship with senior business leaders. Put simply, start with the “business” end in mind, which the Phillipses and all leading authors in our field recommend. Next, design the entire learning experience and define the roles that the business goal owner and L&D must play in order for the learning to be appreciated, applied and impactful. This starts by engaging the business goal owner to agree on reasonable expectations for the learning if both of you fulfill your roles. Neither party can make the learning a success on their own. While the learning department will play the biggest role in designing and delivering the learning, the business goal owner and supervisors will need to play the biggest role in communicating the need for the learning and then reinforcing the learning to ensure application.

Roy Pollock describes what is necessary for successful learning in The Six Disciplines of Breakthrough Learning, which I highly recommend for understanding learning as a process and not an event. In a similar fashion, Jack and Patti Phillips emphasize the importance of design thinking in their latest books. All agree that learning cannot be just an “event”. For true impact it must be carefully planned and managed in close partnership with the business, and the follow-up after the “event” to ensure application and impact is just as important as the design and delivery of the “event” itself.

This process-oriented approach, coupled with a close strategic partnership with the business, is what will allow the profession to add true value. It will also prevent us from becoming extinct, since we add value at every stage of the process, from consulting with the business upfront to helping the business reinforce the learning with their employees on the backend. While we may purchase generic content where it makes sense, our value add goes far beyond the design and delivery of content. We will not be in danger of becoming a training fossil as long as we focus on partnering with the business to meet their needs by delivering the entire learning experience required to make a difference in results.

Impact and ROI of Learning: Worth Pursuing or Not?

Several articles in the last month have suggested the profession should step back from trying to isolate the impact of learning and the resulting return on investment (ROI). The authors argue that it is difficult or impossible to isolate the impact of learning from other factors and that no one will believe the results anyway, so why bother. Instead, they call for showing the alignment of learning to business goals, focusing on the easier-to-measure levels 1-3 (participant reaction, amount learned, and application), and finally focusing on employee engagement with learning (consumption of learning and completion rates). Aligning learning to business goals and measuring levels 1-3 are always good ideas, so no disagreement there. And, depending on your goals for learning and the type of learning, measuring average consumption and completion rates may also make sense. However, for certain types of learning there is still a need to measure impact and ROI (levels 4 and 5).

The primary reason is that senior corporate leaders (like the CEO and CFO) want to see it, and so should the department head and program director. Research by Jack Phillips in 2010 showed that CEOs most want to see impact and ROI but instead are often provided with only participation data (number of learners, number of courses, etc.), participant reaction (level 1) and cost. While these measures are helpful, they don’t answer the CEO’s question of what they are getting for their investment. CEOs and CFOs want to know what difference the training made and whether it was worth the time and effort. The program director and CLO should also be curious about this, not from a defensive point of view (like proving the value of training), but from a continuous improvement perspective where they are always asking what we learned from this project and how we can improve next time.

It is true that level 4, the isolated impact of learning on a goal, is harder to determine than levels 1-3. Sometimes there will be a naturally occurring control group which did not receive the training. In this case, any difference in performance between the two groups can be attributed to the training. In other cases, statistical analysis like regression may be used to estimate the isolated impact of the training. The most common approach, however, is participant and leader estimation, which is generally good enough to roughly determine the impact of learning and definitely good enough to learn from the project and to identify opportunities for improvement. In a nutshell, the methodology calls for asking participants to estimate the impact from just the training and to also share their confidence in that estimate. The two are multiplied together to provide a confidence-adjusted estimate of the isolated impact of learning. For example, one participant may say that the training led to a 40% increase in performance (like higher sales) but is only 50% confident in the 40%. The confidence-adjusted impact would be 40% x 50% = 20% increase in performance. Repeat for others and average. Best practice would be to ask supervisors what they believe as well. Then share with the initiative’s sponsor and other stakeholders and modify as necessary. Once the level 4 isolated impact is determined, ROI is very straightforward.
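For those who want to operationalize the estimation approach, here is a minimal sketch in Python of the confidence-adjustment arithmetic described above; the participant estimates are illustrative only.

```python
# Minimal sketch of confidence-adjusted impact estimation (illustrative data).
# Each tuple: (estimated performance increase attributed to training, confidence in that estimate).
estimates = [
    (0.40, 0.50),  # the participant from the example above: 40% impact x 50% confidence = 20%
    (0.25, 0.80),
    (0.30, 0.70),
]

adjusted = [impact * confidence for impact, confidence in estimates]
isolated_impact = sum(adjusted) / len(adjusted)
print(f"Confidence-adjusted isolated impact: {isolated_impact:.1%}")
# (20% + 20% + 21%) / 3 = 20.3%
```

Supervisor estimates, if collected, could be appended to the same list or averaged separately for comparison before sharing with the sponsor.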

The Existential Question: Why Do We Measure?

There are several excellent answers to the question: Why Do We Measure? And, the answers will provide much needed direction to your measurement strategy. The most common answers are to:

  1. Answer questions
  2. Show results
  3. Demonstrate value
  4. Justify our budget (or existence)
  5. Identify opportunities for improvement
  6. Manage results

We will examine each in turn and comment on the implications of the answer for your strategy.

The most basic reason for measuring is to answer questions about programs and initiatives. For example, someone wants to know how many courses were offered in the year, how many participated in a particular course, or what the participants thought about the course. Assuming this information is already being collected and stored in a database or captured in an Excel spreadsheet, simply provide the answer to the person who asked for it. If it is a one-time-only (OTO) request, there is no need to create a scorecard or dashboard. If someone wants to see the same information every month, then it does make sense to create a scorecard to show the data by month.

The second most common reason is to show results. L&D departments produce a lot of learning each year and usually want to share their accomplishments with senior management. In this case, “results” generally translates to activity, with departments sharing their results in dashboards or scorecards which show measures by month (or quarter) and year-to-date. The scorecard might also show results for the previous year to let senior management know they are producing more learning or improving.

The third reason is to demonstrate value. It is also very common and is just an extension of the second. Some believe that simply showing results demonstrates value, while others believe that demonstrating value requires a comparison of activity or benefit to cost. For example, a department might demonstrate value by calculating cost per participant or cost per hour of development and showing that their ratios are lower than industry benchmarks or perhaps last year’s ratios. Some adopt a higher standard for value and show the net benefit or ROI of a program. Net benefit is simply the dollar value of the impact less the total cost of the program. Any value above zero indicates that the program has more than paid for itself. Return on investment (ROI) is simply net benefit divided by total cost, expressed as a percentage, and any positive percentage indicates the program more than paid for itself. Measures to demonstrate value are usually shared at the end of a program or annually rather than monthly.
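A minimal sketch in Python, using assumed dollar figures, makes the two formulas concrete.

```python
# Minimal sketch of net benefit and ROI as defined above (assumed figures).
impact_value = 250_000  # dollar value of the program's isolated impact (assumed)
total_cost = 100_000    # total cost of the program (assumed)

net_benefit = impact_value - total_cost  # $150,000; above zero, so the program more than paid for itself
roi = net_benefit / total_cost * 100     # 150%
print(f"Net benefit: ${net_benefit:,}; ROI: {roi:.0f}%")
```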

The fourth reason is to justify the L&D budget or the department’s existence. This is an extension of the third reason, where justification is the motive behind the demonstration of value. In my experience this is almost always a poor reason for measuring. Typically, a department measuring to justify its budget is a department which is not well aligned to the goals of the business, lacks strong partnerships with the business, and has poor or nonexistent governing bodies. Not only is it a poor reason for measuring, but the effort in most cases is doomed to fail even with high ROIs. In this situation, energy would be better spent addressing the underlying problems.

The fifth reason is to identify opportunities for improvement. This is a great reason for measuring and indicates a focus on continuous improvement. In this case scorecards may be generated to show measures across units, courses and instructors with the goal of discovering the best performers so that the lessons learned from them can be broadly shared. There may also be a comparison to best-in-class benchmarks, again with an eye toward identifying areas for internal improvement. Another approach would be to create a scorecard, graph or bar chart with monthly data to determine if a measure is improving or deteriorating through time.

The last reason to measure is to manage. This is perhaps the most powerful, and least appreciated, reason to measure. A well-run L&D department will have specific, measurable plans or targets for its key measures. These plans will be set at the start of the fiscal year, and the L&D leaders will be committed to delivering the planned results. By definition, this approach requires the use of measures, and time will be spent selecting the appropriate measures to manage and then setting realistic, achievable plans for each. Once the plans are done, reports need to be generated each month comparing year-to-date results to plan in order to answer two fundamental questions: 1) Are we on plan, and 2) Are we likely to end the year on plan? If the answer is “no” to either question, the leaders need to take appropriate action to end the year as close to plan as possible. In this case, reports must be generated each month showing plan, year-to-date results, and ideally a forecast of how each measure is likely to end the year.
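As a hypothetical sketch, the monthly check against plan might look like the following; the measure names, plan values, and forecasts are illustrative assumptions, not TDRp definitions.

```python
# Hypothetical sketch: answer the two monthly questions for each key measure.
report = {
    "participants trained": {"plan_ytd": 1500, "actual_ytd": 1380, "forecast": 2900, "plan_year": 3000},
    "level 1 reaction":     {"plan_ytd": 4.5,  "actual_ytd": 4.6,  "forecast": 4.6,  "plan_year": 4.5},
}

for measure, m in report.items():
    # Comparisons assume higher is better; flip them for measures like cost per participant.
    on_plan_ytd = m["actual_ytd"] >= m["plan_ytd"]   # 1) Are we on plan year to date?
    on_plan_eoy = m["forecast"] >= m["plan_year"]    # 2) Are we likely to end the year on plan?
    print(f"{measure}: on plan YTD: {on_plan_ytd}; likely to end year on plan: {on_plan_eoy}")
```

Any measure showing “False” would then be flagged for management attention in the monthly review.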

A good measurement and reporting strategy will address all the reasons above except number four (justify existence). Reports, dashboards or scorecards are required in some cases but not in others, just as monthly reporting is required in some cases but not in others. If there are limited resources, it is best to generate regular reports only for those measures to be actively managed (reason six) or those measures used to show results (reason two). Reports can be generated on an as-needed basis in other cases, and most measures can simply be left in the database until they are needed to answer a specific question.

What Finance and Accounting Should Expect from HR

Finance and accounting often do not hold HR to the same standards as other departments because it is believed that HR is different and people initiatives cannot be measured or managed like other business initiatives. However, having different standards for HR is a disservice to both HR and the company, resulting in lower expectations and less return from HR expenditures.  HR professionals are quite capable of playing by the same rules as their colleagues in other departments and can deliver tremendous business value, but finance and accounting professionals need to have a realistic sense of what may be expected from HR.

I believe that HR, including L&D, should be held to the same business standards as other departments. This means that finance and accounting should expect the same from HR as any other department – not more, not less – just the same. At a minimum, finance and accounting should have five expectations for HR.

First, since HR is a support function, it should align some of its initiatives to the organization’s top business goals. This means that HR, through its many departments like L&D, talent acquisition and total rewards, should have programs or initiatives that directly support business goals like increasing sales, reducing defects, improving the quality of patient care, etc. Of course, HR will also have programs in support of HR goals like increasing employee engagement, improving leadership and reducing regrettable turnover. While these programs are very important and will indirectly contribute to achieving all the business goals, the expectation should be for HR to directly contribute to the business goals as well, which it can easily do.

The second expectation is that HR will prepare a business case for major programs or initiatives, just like other departments are required to do. The business case will bring together the planned benefits and costs of the initiative, making explicit all the key assumptions and risks. The business case should include the HR ROI (return on investment) which is simply the net benefit divided by the total cost, which is commonly used in L&D. This will help finance and accounting make better decisions about funding and will allow comparisons among different HR requests, although it will not allow a direct comparison to the financial ROIs from other areas since the formulas are different for HR and financial return on investment.

The third expectation is that HR will create a business plan for the coming fiscal year, just like other departments do. The plan may be a written document or a PowerPoint deck, depending on the organization’s culture, and should include at least an executive summary, a review of last year’s accomplishments, the business case for the coming year for at least the major initiatives, and the budget, staffing, and other resources required to deliver the plan. The plan will include specific, measurable goals for all key initiatives. The process of creating a plan is just as important as the finished product. A good business planning process will ensure the right questions have been raised and the right people have been involved to yield the organization’s best thinking about what can be accomplished in the coming year.

The fourth expectation is that HR will execute its approved business plan with discipline. Now that specific, measurable goals (targets, plans, KPIs – whatever you prefer to call them) have been set in the business plan, HR needs a process to ensure these planned results are delivered. Disciplined execution requires monthly reports comparing year-to-date (YTD) results to plan and ideally a forecast for how the year is likely to end compared to plan as well. The reports should include all the key measures identified in the business plan and should be used in a monthly meeting of senior leaders dedicated to actively managing results to come as close to plan as possible by the end of the year. Disciplined execution requires that two questions be answered every month: 1) Are we on plan year to date? and 2) Are we going to end the year on plan? Answers to these two questions will drive management efforts for the rest of the year.

The fifth expectation is that HR will be accountable for results. The approved business plan contained the specific, measurable goals for the year. Now HR needs to execute with discipline and be willing to be held accountable for achieving its plans, just the same as any other department.

These are the five most important expectations that finance and accounting should have for HR. Each is a realistic expectation that is being met by some HR departments today. Some in and outside HR have argued that HR is different and cannot meet these expectations. They claim that since HR works with people, specific and measurable goals cannot be set and HR initiatives cannot be managed the same way as in other departments. Consequently, they don’t believe in creating business cases or business plans or reports comparing progress to plan. As an HR insider, I believe they are wrong. HR can meet these expectations. HR is no more difficult to manage than sales, manufacturing, quality or other departments, and in some ways it may be easier since we often have more control over the outcome.

Meeting these expectations will strengthen HR’s seat at the table and vastly improve our credibility in the eyes of our colleagues. Let’s show them we can play by the same rules and realize our full potential to contribute to our organization’s success.

The Future of Measurement and Reporting for Learning and Development

2018 CTR Conference Summary

We had our best conference yet February 21-22 in Dallas. We had more than 90 participants despite widespread flight cancellations and delays due to terrible weather. The energy level, sharing, and participation more than made up for the weather.

Roy Pollock, co-author of The Six Disciplines of Breakthrough Learning, was our first keynoter and really set the tone for the conference by reminding people that learning must be connected to the business.  Our panel session before lunch gave Kimo Kippen, Adri Morales, Terrence Donahue, and Peggy Parskey an opportunity to share their thoughts on the greatest opportunities ahead of us. All agreed we live in a very exciting time for L&D and that advances in technology and learning platforms will allow us to reach more people in more impactful ways than ever before. Adri Morales, 2016 CLO of the Year, then stayed on the stage to share her personal journey and thoughts with the group.

The afternoon of Day One offered six more breakout sessions for a total of nine the first day. Predictive analytics, measuring informal learning, ROI goal setting, xAPI, spending levels, and the application of the 6D’s at Emerson were just a few of the topics that generated a lot of interest. Participants came back together late in the afternoon to learn about a new maturity model for learning developed by Peggy Parskey, Cushing Anderson and Kevin Yates, which was in turn used to help judge the First Annual TDRp Excellence Awards. The 2018 winner was USAA. We closed out Day One by having speakers meet with participants in the café for wine and lots of good discussion.

Day Two began with a keynote by Marianne Parry and Lee Webster on the brand-new ISO standards for human capital reporting. The audience was very interested to hear about the recommended measures and the next steps in the process to adopt them. More breakout sessions followed to engage participants on measurement strategy, isolating impact, level 3 design, alignment, and the future of the corporate university. Leaders from Humana, Siemens and USAA also shared their measurement strategies and journeys. We wrapped up with a panel hosted by Patti Phillips to learn what Lorrie Lykins, Paul Leone and John Mattox saw for the future of measurement. Much like Wednesday’s panel, the group predicted a bright future for measurement enabled by technology, xAPI, and advances in the field, especially in predictive analytics.

Everyone left feeling recharged and excited after a day and a half of great speakers and provocative, informative sessions. And it just feels good to be around like-minded people who understand your challenges and who all want to improve.

Planning for next year’s conference is already underway. It will be February 20-21 in Dallas at the same venue. Patti Phillips, CEO of the ROI Institute, and Jeff Higgins, CEO of the Human Capital Management Institute (HCMI), will keynote, and Lee Webster will return to give us an update on the ISO standards for human capital reporting. CTR is poised to help drive an ISO effort to establish TDRp as the framework for the types of measures and reports for our field, so we will have an update on that as well.

Hope to see you in Dallas in 2019!

Alignment Revisited

Tim Harnett, in a recent Human Capital Media Industry Insights article (undated), reminds us of the importance of aligning L&D initiatives to organizational goals. He shares some sobering research indicating that “only 8% of L&D professionals believe their mission is aligned to the company strategy” and “only 27% of HR professionals believe there is a connection between existing [learning] KPIs and business goals”. So, despite our perennial intentions as a profession to do a better job of alignment, we still have a long way to go.

I am afraid, though, that he does not go far enough in recommending the actions we need to take. He suggests that “identifying and tracking KPIs related to L&D initiatives is the best way to align L&D to organizational goals and make the business case for development programs”. For KPIs (key performance indicators) he is thinking of measures like level 1 participant satisfaction, learning hours, level 3 application and employee satisfaction. While these are all important measures and can indeed help make the case for learning, they actually have nothing to do with alignment.

Here is why. Alignment is the proactive, strategic process of planning learning to directly support the important goals and needs of the organization. Alignment requires L&D leaders to discover the goals and needs of the organization and then go talk to the goal owners to determine if learning has a role to play. If it does, the two parties need to agree on the specifics of the learning initiative including target audience, timing, type of learning, objectives, cost, and measures of success (ideally the outcome or impact of the initiative on the goal or need). They must also agree on the mutual roles and responsibilities required from each party for success including communication before the program and reinforcement afterward.

Measures or KPIs will come out of this process, but the measures are NOT the process. It is entirely conceivable to have a learning program with great effectiveness and efficiency measures indicating many employees took it, liked it, learned it, and applied it, but the program was NOT aligned to the goals of the organization and should never have been funded. This is true even if it had a high ROI. Great numbers do not take the place of a great process, and alignment cannot be established after the fact by looking back at measures or KPIs.

Conversely, you can easily imagine a program that is definitely aligned to one of the organization’s top goals but was executed so poorly that its effectiveness numbers came in very low. So, alignment is about the process of working with senior organizational leaders to plan learning initiatives which directly address their goals and needs. It must start with the organization’s goals and not with existing learning initiatives.

Last, there is much discussion these days about using employee engagement as an indicator of alignment. It is not, for all the reasons discussed above. It is simply another measure and not a process. For engagement to be an indicator of alignment, you would have to assume that employees know the organization’s goals as well as the senior leaders do, and that learning about those goals is the primary driver of engagement. Both of these assumptions are likely to be false. A focus on employee engagement would be appropriate only if engagement is the highest priority goal of the organization. In most organizations business goals like higher revenue, lower costs, and greater productivity are more important, although higher engagement is always a good thing and will contribute indirectly to achieving the business goals.

In conclusion, I am happy to see a focus on this important topic of alignment, but success will require us to work the process of alignment with senior leaders. At the end of this process, every L&D department should have a one-pager listing the organizational goals in the CEO’s priority order and, whenever it is agreed that learning has a role to play, a list under each goal of the key learning programs that are planned. This is how you demonstrate alignment, not through KPIs or measures.

Personalized Learning: Means to an End or the End Itself?

The learning field is currently focused on personalized learning which might be defined as providing learners with individualized, custom content in the way each prefers to learn. Advances in digital learning and platforms combined with an explosion in learning content make this advancement not only possible but highly desirable. It has the potential to contribute significantly to better learning experiences and higher application rates leading to better outcomes. This said, there is a danger that some will consider personalized learning not just a strategy to improve learning but as the goal itself, the reason or mission for the learning department. This brings us to a discussion of means versus ends and the importance of keeping the two straight.

I would suggest that personalized learning is best considered a means to an end and that it will almost never be an end in itself. Over the past several years some in our profession have advocated that it is the end. They have redefined their mission as a learning department or corporate university to provide learners with whatever they want in whatever form they want it, which is an extension of our definition of personalized learning above. At its heart this issue of means versus ends is far from a matter of semantics; rather, it is a fundamental question about the reason for the existence of corporate training.

Imagine a discussion with your CFO or CEO. They ask what your strategy is for next year. You say it is personalized learning and that the majority of your resources will be dedicated to providing more and better personalized learning. They ask why. You tell them learners will be more engaged, will learn more, and will retain more. I guarantee you that in their mind you never really answered the question. Your answer is good as far as it goes but doesn’t get to the business reason for learning. You described a process improvement for them, one that will deliver learning more effectively and efficiently, and that is good but not enough. Basically, you are improving the means to an end by personalizing the learning, but they want to know what the end is. In their mind, the end may be higher sales, greater productivity or quality, fewer accidents, lower operating costs, or higher employee engagement, but you didn’t connect the dots for them. By not appreciating the difference between means and ends, you focused just on the means when you needed to also focus on the end. Better to tell them that you will improve learning in order to drive higher sales, lower costs, or whatever their goals are. These are the ends they care about, and once they know that you are working toward the same ends, they will be more receptive to your request for resources to improve the means (personalized learning).

As a profession, we must continue to make great strides in process improvement, and personalized learning is one such process improvement. But it is not, and never will be, an end in itself any more than e-learning, blended learning or mobile learning are ends in themselves. We don’t provide learning just to provide learning. The learning must serve a higher need. It must serve an end, and that end should be one of your organization’s high-level goals or needs. With this understanding we can also see that personalized learning is not the opposite of company learning, which has been defined as learning directed by the company (not the employee) to meet company needs. Instead, personalized learning should support the company goals and needs even if it is directed or mandated by the company. If at the discretion of the employee, it is most likely to improve employee engagement, which is a company goal in almost all organizations. If directed by the company, the personalized learning will support one of the other company goals like higher sales. So, personalized learning may be at the discretion of the employee or at the discretion of the company, but in either case it is a means to an end.

Are You Ready for the Next Recession?

I know this seems like a silly question to ask, as the stock market continues to set new records and the unemployment rate is the lowest it has been in more than 10 years. However, speaking as a former economist, I know that this is precisely the time to ask the question, because good times never last forever and the next downturn is likely to come without warning. Economists have a poor track record of accurately predicting recessions, so by the time a consensus believes that a recession is coming, we will probably already be in one. What we do know, though, is that we are well into the current expansion and soon we will be living on borrowed time. The last recession ended in June 2009, so we are now more than 8 years into the current expansion. The longest expansion in modern economic history was the 10-year run from 1991 to 2001. The next longest was the 8-year run from 1982 to 1990. So, in another year or two we will hopefully be in record territory for the longest economic expansion in our history.

This means that a recession is due sooner rather than later. Are you ready for it? Have you done all that you can to demonstrate your value by carefully aligning your discretionary programs to the CEO’s most important goals and then showing the impact of learning on those key goals? Do you have strong relationships with the goal owners and senior leaders so they will speak up on your behalf? Are you running your basic skills and compliance training as efficiently and effectively as possible, and do you demonstrate that to your senior leaders? Are you setting specific, measurable goals and plans for key measures and then using monthly reports to compare year-to-date progress against plan? Do your senior leaders see these reports and know you are running learning like a business?

If you’re not doing these kinds of things now, why should your CEO continue your current level of funding and staffing when the next recession comes? Competition for funds and staff will be fierce, and every department may well see a reduction, but historically L&D takes a disproportionately large hit, which reflects the CEO’s lack of confidence in, or understanding of, the value L&D provides. So now is the time to put the business processes in place to demonstrate alignment, value and rigor so that your senior leaders will have a better appreciation for the value added by learning and greater confidence in you as a learning leader. You cannot prevent the next recession, but you can position your department to weather the storm.

Now is the time to prepare.

The Future of Learning

At the May ATD International Conference in Atlanta one speaker said the following about learning and development functions: “We don’t own content and aren’t managing learning. That power long ago shifted from the learning department to individual employees.” The speaker was making the point that since learning has shifted to employees there is an opportunity for CLOs to assume broader responsibility for other talent processes such as talent acquisition and organizational development. While I agree that CLOs are well positioned by their experience to expand their scope and probably the best positioned to lead an integrated talent acquisition and development function, the broad assertion about the end of L&D as we know it still troubles me.

The statement may be true for an L&D department which has responsibility only for general learning which is not aligned to any company goals other than perhaps employee engagement. In this case the L&D department would offer a catalog of courses, either internally developed or externally purchased, and employees would select what they want to improve or are interested in. Learning aligned with company goals other than employee engagement and learning for onboarding, basic skills training and compliance would be managed by their respective departments. For example, the sales department would manage all sales-related learning, quality would manage all quality-related learning, and HR would manage all compliance-related learning. In this model I would say that the L&D department never managed learning to begin with. They simply offered a catalog and tracked what employees took. The department did own the content, but I agree with the speaker that advances in digital learning offered outside the company will soon make this model obsolete. In the future, employees will be able to find all their general learning outside the company, and the L&D department will no longer own that content.

The statement, however, is not true now, nor should it be true in the future, for L&D departments which offer learning aligned to company goals and important business or HR needs. In many companies the L&D department is responsible for the content and management of learning to increase sales, improve quality, reduce injuries, improve leadership, achieve compliance targets, onboard employees, provide basic skills, and meet other company goals. Why would a company have its L&D department abandon this important role as a strategic business partner? Why would it be better to have employees try to figure out on their own what they need to be successful or in compliance, especially new hires or employees new to a position, when experienced leaders already know what employees will need to be successful in their jobs? Furthermore, some knowledge is proprietary and simply not available outside the company. And let’s face it, real impact on company goals requires more than access to content. It requires a well-conceived and executed program to target the particular need, convey the required knowledge or skills, and then reinforce the desired behavior. It actually takes a lot of effort to manage learning for results, none of which occurs in the employee self-directed model.

In conclusion, I agree that in the future we will not need to own general content which is unaligned to company goals and that employees will be able to find general content of interest to them outside the company. However, I do believe we should continue to own critical content and that there will always be an important role for L&D departments to manage learning aligned to company goals and needs.

Do We Need New Measurements and Reporting for the Digital Learning Revolution?

Digital learning is revolutionizing the corporate learning environment by making a vast amount of content available to learners. Some of this content can be accessed through internal learning management systems, but the fastest growth is likely to come from content available outside the organization’s firewalls and systems. This is content directly available to anyone with internet access. In a sense we have had digital learning since the first computer-based training (CBT) became available in the 1980s and even more so with the advent of e-learning (WBT) in the late 1990s, but the current revolution dwarfs these past initiatives in terms of breadth, reach, and sheer volume.

Most of our current measures and reports/dashboards were designed for traditional classroom learning and have been applied to e-learning as well, but some have always asked if we need special measures for e-learning. The answer has basically been no. Traditional efficiency measures covering the number of participants, classes, hours, etc., combined with traditional effectiveness measures for participant reaction, test scores, application, impact and ROI, can easily be applied to e-learning. The old post-event survey was delivered at the end of class; with e-learning, it is generally sent to participants immediately following the course. So, no revolution was required in measurement or the reporting of those measures for e-learning.

The question we face now at the start of this new digital revolution is the same as the one at the inception of e-learning. Will new measures and reporting be required? My initial take is that the answer remains the same as well. The standard efficiency, effectiveness and outcome measures will still serve us well, and the three standard reports recommended by Talent Development Reporting principles (TDRp) will still help us to better manage our programs and departments as well as demonstrate our alignment to and impact on key company goals. However, the measurement strategy itself may have to change dramatically because an increasing amount of content will be accessed outside the company and outside any learning management system (LMS). Think of employees taking free courses at the Khan Academy and universities like MIT, learning from YouTube or their Google searches, or taking advantage of aggregators like Coursera, Udemy, Udacity and others. Since this learning takes place outside the LMS, there is no way to automatically capture efficiency measures like the number of participants, courses, or hours, and there is no way to automatically send a survey to gauge satisfaction, amount learned, application, or other effectiveness measures, let alone any outcome measure.

Instead, much more thought is going to have to be given to what measures are really important for this new digital learning. Do we really need to know how many courses employees take outside the LMS? Do we really need to know how satisfied they were with each course or how much they learned? I don’t think so. So, let’s go back to basics and determine what we want to know, why we want to know it, and what we would do with the data if we had it. Learning and development is increasingly important to millennials, so an employer certainly has an interest in finding out whether employees are satisfied with their learning, but not necessarily with each micro instance of learning (like a Khan Academy course). Why not send a quarterly or semi-annual learning survey to employees (or expand your current employee engagement survey) to ask about their satisfaction with learning in general? You could ask how they have learned in the past three months (company offerings, MOOCs, free universities, Coursera-type providers, Khan Academy, etc.) and perhaps how many times per week they access different sources. A summary question would be something like, “How satisfied are you with all the opportunities you have to learn?” This should provide the information to determine whether employees are sufficiently engaged with learning.

In conclusion, the digital revolution is not likely to require new measures or reports, but is likely to require rethinking your measurement strategy with regard to data acquisition.

3 Principles of Effective Learning Metrics & Measurement

Contributed by Caveo Learning

As more and more talent development leaders take a serious look at implementing meaningful metrics and measurement across their learning and development organizations, the business relationships and conversations between L&D professionals and stakeholders are changing for the better.

There are times when talent development leaders need to root themselves in foundational principles of talent development metrics. It’s easy to get caught up in new thinking, models, and frameworks, and lose focus on the fundamentals of how to run a learning organization.

No matter what model, framework, system, tool, methodology, or new approach we want to adapt, adopt, and deploy, there are at least three fundamentals that we should never lose sight of—principles that should be applied to all learning metrics.

  1. Differentiate between metrics you actively manage and those you monitor.

The principle is so simple, yet so rarely considered. Just like in medicine, some “vital signs” are always monitored; the moment something changes in the wrong direction, an alarm sounds, allowing for the “metric” to be actively managed. Similarly, when selecting your metrics, determine which metrics you intend to monitor, and which you intend to actively manage. Further, know what the target thresholds are for the monitored metrics that will trigger the “alarm” for active management. For example, you may want to monitor L&D staff utilization, and only actively manage it should the metric fall out of the acceptable range.

  2. Align to industry formulas, where practical.

Another issue that often comes up is defining the specific formula for a metric. Frequently, the formula is not considered deeply enough to add the full value that it can. This can result in metrics with little credibility or validity, and which should not be informing any decisions. It’s true that each organization has its own characteristics, its own language, and specifics that need to be considered. However, using a standard formula defined by an existing industry benchmark can be very helpful when planning your metric strategy, goals, and even budget. Industry benchmarks are only valuable for planning and comparison if you are comparing apples with apples—to do that, the formulas need to match.

CTR does a fantastic job in its L&D metrics library of not only providing the recommended formula, but also noting which industry organization defined it, or even which other industry benchmark it is similar to. A good balance between very-company-specific and industry-aligned formulas will allow for at least some comparison, planning, and target setting against industry metrics. Whether it is CTR’s Talent Development Reporting Principles (TDRp), Bersin by Deloitte, Training magazine, ATD, SHRM, or any others, consider aligning at least some meaningful metrics to an industry definition and formula. With our above example of staff utilization, one TDRp formula we could use is “Learning staff, % of total hours used.”

  3. Measure to inform decisions and actions.

Whether you are monitoring or managing a particular metric, there must be an underlying business reason to do so. Having a metric that does not inform a decision or an action is of no value. Also, consider why you are measuring something and what that metric will influence. Learning metrics are there to help us improve what we do as L&D professionals. Whether it is improving the efficiency of our learning organization, the effectiveness of our learning solutions, alignment to our stakeholders, or the contribution our interventions have on the strategic business goals of the organization, metrics play a critical role in influencing the value we bring to our organizations and, almost more critically, the credibility of L&D in the eyes of external stakeholders and the C-suite.

It’s better to have a few good metrics that inform meaningful decision making and allow for agility in improving the value L&D brings to your organization, rather than hundreds of metrics that offer limited value. Sticking with our example, we could monitor the utilization of learning staff and start actively addressing this metric should the percentage increase above, say, 90%, by engaging a learning solutions provider to assist with some of the workload until it returns to the acceptable range.
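As a minimal sketch of this monitor-then-manage pattern applied to the utilization example, consider the following; the 90% threshold, the metric definition, and the hours are assumptions for illustration.

```python
# Minimal sketch of a monitored metric with an "alarm" threshold (assumed values).
UTILIZATION_ALARM = 0.90  # above this, the metric moves from monitored to actively managed

def check_staff_utilization(utilized_hours: float, total_hours: float) -> None:
    utilization = utilized_hours / total_hours
    if utilization > UTILIZATION_ALARM:
        print(f"ALARM: staff utilization at {utilization:.0%}; engage a learning solutions "
              "provider to absorb workload until it returns to the acceptable range.")
    else:
        print(f"Staff utilization at {utilization:.0%}; continue monitoring.")

check_staff_utilization(utilized_hours=1860, total_hours=2000)  # -> ALARM: ... at 93%
```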

Learning leaders are starting to take more notice of the deep value that metrics can bring toward the constant improvement of everything we do in L&D. No matter what the specific model, framework, and approach your organization chooses for learning metrics, there remain some fundamental principles that will help ensure that we ultimately have a metrics strategy that guides us, helps us improve, changes conversations with our stakeholders, and increases our credibility as business leaders.

Learning metrics are our friend, our source of feedback and intelligence, ensuring we are constantly focused on maximizing the value we bring to our organizations.

Caveo Learning is a learning consulting firm, providing learning strategies and solutions to Fortune 1000 and other leading organizations. Caveo’s mission is to transform the learning industry into one that consistently delivers targeted and recurring business value. Since 2004, Caveo has delivered ROI-focused strategic learning and performance solutions to organizations in a wide range of industries, including technology, healthcare, energy, financial services, telecommunications, manufacturing, foodservice, pharmaceuticals, and hospitality. Caveo was named one of the top content creation companies of 2017 by Training Industry Inc. For more information, visit www.caveolearning.com

Take a Business-Minded Approach to Sourcing Learning Partners

By: Gary Schafer
President, Caveo Learning

One of the most important tasks talent development leaders face is selecting outsourcing partners and product vendors. It also happens to be one of the most daunting.

The learning and development organization’s relationship with providers can be a major factor in the success of the business. Learning leaders must weigh many variables in the purchasing process, from the factual (pricing, experience) to the intangible (flexibility, dedication).

Navigating the procurement process can be tremendously difficult. How can a provider’s ability to flex with the challenging demands of the business be analyzed through a formal procurement process? How does one tell if an external learning partner is going to react to changing environments and truly be aligned with the business? How is the commitment of the supplier to the mission, vision, and values of the business measured? What about issues of scalability, global capability, and communication?

How to Develop a Learning Sourcing Strategy

Start by establishing a factual market intelligence base—understand the array of variables around learning services, from expertise to capability, and the rates associated with those services. Create your supplier portfolio, determine a list of qualification criteria, and then winnow your list of potential suppliers.

Identify the types of services your organization needs—learning strategy, audience analysis, curriculum design, instructor-led training, eLearning, change management, etc. Then, determine the demand across the organization, broken out by role.

Next, perform learning-spend and target-cost analyses. You’ll want to analyze supplier rates using both internal and external data sources.

  • Conduct some internal benchmarking with the same supplier—are rates equal across multiple projects? Does the firm charge premiums based on experience? Is pricing consistent across roles?
  • Do the same internal benchmarking across multiple suppliers. Find out if rates are similar for comparable roles and skill levels at different organizations.
  • Develop some external benchmarks using publicly available market data, comparing rates from market analyses and publicly available salary data. Factor for loaded cost (about 1.3 times base salary) to cover benefits, PTO, company-paid taxes, etc., and also allow some room for supplier margin (see the sketch below).
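Here is a minimal sketch of that loaded-cost arithmetic; the base salary, supplier margin, and annual hours are assumptions for illustration.

```python
# Minimal sketch of the external benchmark calculation described above (assumed figures).
base_salary = 80_000    # comparable role, from publicly available salary data (assumed)
LOAD_FACTOR = 1.3       # benefits, PTO, company-paid taxes, etc. (the rule of thumb above)
SUPPLIER_MARGIN = 0.20  # room for supplier margin (assumed)
ANNUAL_HOURS = 2080     # 40 hours/week x 52 weeks

loaded_cost = base_salary * LOAD_FACTOR                              # $104,000
benchmark_rate = loaded_cost * (1 + SUPPLIER_MARGIN) / ANNUAL_HOURS
print(f"Benchmark hourly rate: ~${benchmark_rate:.0f}/hour")         # ~$60/hour
```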

Investigate the Intangibles

Before moving forward with sourcing service partners and learning products providers, conduct a supplier quality assessment. Create quality profiles for each prospective partner by evaluating the following factors:

  • Strategic planning—Will they assist with communicating and messaging to senior stakeholders throughout the business? Do they provide intelligence around industry trends and best practices? What do they offer in the way of post-project analysis, such as lessons learned and next-steps recommendations?
  • Experience—Does their team have expertise in relevant areas? What is the screening process for potential employees and contractors? How cohesive is the team, and how closely do they adhere to defined processes?
  • Responsiveness—Do they show a commitment to your business needs through effective communication? Is there a proven track record of adherence to deadlines? Do they offer strategic learning support?
  • Quality of work—What processes are in place to ensure solid instructional design? What methodology is used for quality reviews (test plans, style guides, etc.)? Does the supplier set clear review expectations for training deliverables and correct issues in a timely manner?
  • Flexibility—Do the suppliers have the ability and willingness to react to new or changing business needs? What is the capacity to scale a team with appropriate roles and volume?
  • Support—Do they have ready access to tools and templates necessary to speed development? Are there documented methodologies for executing common tasks?

Pricing as a Determining Factor

Suppliers’ rates often (though certainly not always) correlate with their quality profiles. If budget were no object, the learning sourcing strategy would simply mean identifying the most qualified provider; of course, budget is oftentimes the overriding factor, and so we must balance the need for quality with fiscal realities. Thus, try to optimize supplier usage based on rates, hiring high-quality suppliers for critical projects and project management, and settling for more cost-effective options, such as staff augmentation, for lower-importance projects and commodity roles.

Likewise, negotiate for spending reductions based on the type of support required. Each of the three main services models comes with its own potential cost-savings.

  • A project-based agreement can reduce per-unit costs and may be further cost-efficient due to greater opportunity to leverage offshore assets.
  • Contract labor ensures compliance across the organization, and the consolidated labor structure means negotiated volume rates are an option.
  • A managed services model will likely have optimized service levels and adjustable staffing levels, along with efficiencies gleaned from custom redesigned processes.

Be cautious with regard to electronic procurement. There are several tools now available and in use by procurement professionals to streamline the proposal process, but they are not always ideal for learning organizations. These tools are great for providing a uniform response with efficiency, but for learning services, not everything is uniform. E-procurement tools often create barriers for providers to tell their full story, essentially reducing the proposal process to 100-word bites of information that make it difficult to recognize a provider’s true value. A good procurement professional can get around some of these limitations by still offering the top candidates a face-to-face meeting for final consideration.

Finally, take care to negotiate a favorable agreement with the external learning partner. Have a negotiating strategy before initiating contract talks, and be ready and willing to walk away if a better partnership can be found elsewhere.

Creating and implementing a learning sourcing strategy will greatly reduce the stress of the services procurement process, helping identify the ideal partners for your learning organization’s initiatives while optimizing budget. With a well-prepared strategy plan, sourcing service partners becomes a cornerstone of your organization’s success rather than an intimidating task.

Gary Schafer is president of Caveo Learning, recently named a top custom content company by Training Industry Inc. Gary was formerly a management consultant at McKinsey & Co., and he holds an MBA from Northwestern University’s Kellogg School of Management.

 

Will Corporate Universities Become Extinct?

Michael Fernandes, a friend of mine who is presently conducting an international survey on corporate universities, recently posed this question. He asked whether corporate universities would go the way of the dinosaur. Just as an asteroid brought an end to the age of dinosaurs, is it possible that the digital revolution in learning currently underway may bring an end to corporate universities as we know them today? (By “digital revolution” I am referring to the exploding amount of learning content available to employees through the web outside the organization’s learning management system.)

It is a great question and obviously a very important one. I think the answer depends on the mission of the corporate university. If the mission is simply to provide learning opportunities for employees, then I think the answer may be yes. A significant number of corporate universities today have just such a mission. Some provide learning as a mechanism to increase employee engagement or improve attraction and retention of the best employees. Others simply want to provide learning opportunities for their employees without regard to higher motives. In either case, the digital learning revolution over the next five years is likely to produce learning aggregators who can provide more learning opportunities at lower cost and with greater ease than any corporate university. Presently, corporate universities are busy looking for ways to make it easier for their employees to find and access this content, and the new term capturing this role is “curator”, which is meant to convey the shift away from learning creation. It seems to me that this role is not sustainable in the long run for each corporate university. Industry aggregators, working at scale globally, will be able to provide this service much less expensively and at the same time offer a much greater breadth of learning opportunities. Why, then, would an organization continue to fund a corporate university to provide learning opportunities?

However, if the mission of the corporate university is to help the organization achieve its business goals, then there will always be a place for it. First, organizations often want customized programs, which will require management from the corporate university even if an outside provider is selected to help. Second, some of the content is likely to be proprietary and will never be available from outside providers or aggregators. Third, even if the desired content were available outside the organization, there will always be an important role for the corporate university to work with the stakeholder to determine whether learning is the solution, identify the right target audience, set the appropriate learning objectives and content, and, most importantly, work hand-in-hand with the stakeholder to ensure the learning is transferred to the workplace. The issue of learning transfer or application, so vital to results, cannot be accomplished by an aggregator, so this will remain the realm of the corporate university.

So the corporate university need not suffer the fate of the dinosaurs. A good corporate university with a mission to help the organization achieve its business goals by partnering effectively with stakeholders and senior leaders should live long and prosper. On the other hand, a corporate university whose only mission is to provide general, unaligned learning for its employees faces a much less certain future. In fact, it is hard to imagine how it will survive. This should serve as a warning for all those who are currently rushing toward becoming a first-class curator of digital content. If this is your only mission, you are likely rushing towards extinction.