Latest Resources

2019 CTR Conference a Great Success

We are just weeks back from our Sixth Annual CTR Conference in Dallas. We weren’t sure it could top last year’s, but I think it did. We had 117 participants, and they were very enthusiastic and engaged. Lots of great sessions and discussions. And of course some delicious food in Grapevine!

Patti Phillips kicked it off with a talk on ROI, emphasizing how important impact and ROI are and how they can be calculated. She included some very interesting history and the story of how she met Jack. When we came back together later in the morning after breakouts, Peggy facilitated a panel on scrap reporting and measurement literacy, and then Brenda Sugrue, 2018 CLO of the Year, shared some thoughts from her career. Six more breakout sessions followed in the afternoon leading up to the awards ceremony at the end of the day.

CTR recognized Jack and Patti Phillips as the first recipients of the Distinguished Contributor Award, acknowledging their truly significant contributions to our field including the 1983 Handbook of Training Evaluation, the concepts of isolated impact and ROI, and the successful implementation of ROI in over 70 countries. Then we recognized the winners of the TDRp Excellence Awards: SunTrust Bank for L&D Governance, and Texas Health Resources and Heartland Dental for L&D Measurement. Jack and Patti recognized three winners for ROI studies as well.

Day two began with Steven Maxwell of the Human Capital Management Institute sharing his perspective on what CEOs and CFOs want from learning, as well as his thoughts on the newly released Human Capital Reporting standards, both of which got people thinking. After six more breakout sessions, Justin Taylor, General Manager and EVP of MTM, led a panel on the impact of learning to wrap up the conference. With a focus on just the measurement, reporting, and management of L&D, everyone could concentrate on the issues of most concern to them and share ideas and questions with each other in detail. If you didn’t make this event, you missed hearing from industry thought leaders and leading practitioners.

Next year we are moving the conference to the fall, so mark your calendars for October 28-29, 2020. We will continue to begin CTR Week with a pre-conference workshop on measurement and reporting (October 26-27) and end the week with post-conference workshops. Next year will be the 10th anniversary of TDRp, and we hope you can join us for the celebration.

2019 TDRp Award Winners

The Center for Talent Reporting is pleased to announce the following 2019 TDRp Award winners:

Distinguished Contributor Award

Congratulations to Jack and Patti Phillips, CTR’s first Distinguished Contributor Award winners. This award recognizes outstanding and significant contributions to the measurement and management of human capital. Particularly noteworthy are Jack’s groundbreaking Handbook of Training Evaluation and his concepts of isolated impact and ROI. Jack and Patti’s ROI methodology has been implemented in hundreds of organizations in over 70 countries. They have forever changed the standard by which investments in human capital are measured.

2019 TDRp Award Winner for Learning Governance

Congratulations to SunTrust, CTR’s 2019 TDRp Award winner for Learning Governance. SunTrust earned this award for their best practices in running L&D like a business. In particular, SunTrust provided evidence that L&D participates in a continuous, robust, enterprise-wide governance process that ensures L&D has a voice and input into enterprise priorities and funding. Moreover, the governance council has clear guidelines for funding initiatives and exercises ongoing oversight through disciplined financial budgeting, reporting, and forecasting.

2019 TDRp Award Winners for Learning Measurement

Congratulations to Texas Health and Heartland Dental, which have been awarded the 2019 TDRp Award for Learning Measurement.

Texas Health has demonstrated a depth and breadth of commitment to measurement through a Learning and Education Cabinet, which directs, guides, monitors, evaluates, and reports quarterly on learning according to strategic business goals. In addition, they have established learning-oriented measures not only at the program level but also at the CEO level. They have well-defined roles for measurement and strategic responsibilities. They leverage Metrics that Matter™ technology to organize programs into business-aligned portfolios that drive measurement planning and reporting.

Daniel Gandarilla, VP and Chief Learning Officer said this about the importance of measurement to Texas Health, “Measurement is one of the most critical aspects of the learning process. It allows us to understand what has happened, the effect, and to tell a more compelling story of how we play a critical role in the success of the organization. The best stories can tell both quantitative results and qualitative experiences. This method of storytelling makes a compelling case and lets everyone know that learning is a part of the business.”

Heartland Dental has demonstrated a commitment to measurement through well-defined measurement roles and the use of a portfolio model to segment their offerings and drive consistent decisions about what, when, and how to measure their solutions. They focus on linking learning to business outcomes and demonstrating the “so what” to their business partners. In addition, they have established specific goals for their measures and incorporate those goals into their regular reporting. Finally, they leverage Metrics that Matter (MTM) to gather and report data to their stakeholders.

Meredith Fulton, Heartland Dental’s Director of Education described their measurement journey, “Prior to establishing our robust measurement approach, we collected satisfaction and ad-hoc feedback from learners through simple “Smile Sheets”. We felt great about what we were doing, but we did not have a measurement strategy in place for collecting learner or manager feedback that could drive decision making for any course or program. Shifts in our organizational leadership and focus on company growth have helped pave the way to develop an evidence-based strategy.”

After working with MTM to establish a strong measurement process, Heartland has been able to see dramatic changes and increased savings in Learning & Development. “We have also been able to create efficiencies in automating data collection and reporting. By incorporating MTM data and insights into company metrics we have been able to tell a deeper story of the impact of Learning & Development. Additional benefits include being able to socialize best-practice metrics from the programs across various stakeholders and business leaders. Our compelling insights, shown in custom reports, infographics, and dashboards, have allowed us to elevate the conversation and our involvement in company decisions,” said Fulton.

Human Capital Reporting Standards

The International Organization for Standardization (ISO) released its first standard for human capital reporting in December. It is titled “Human Resource Management – Guidelines for Internal and External Human Capital Reporting.” The document is 35 pages long, with guidance for internal and external reporting by both large and small organizations. The standard is the culmination of a great deal of work over the last three years by an ISO working group of experts from numerous countries, led by Stefanie Becker from Germany.

Although a number of measures still need to be defined in greater detail, the standard is a major achievement and an important first step toward bringing standardization of measures and reporting to the HR field. Accountants have long had definitions of measures and standard reports, as well as guidance about how the measures should be used. This has served their profession well, and a similar discipline would serve us well. Imagine that in the not-too-distant future everyone in our field might use the same language for measures and share a common understanding of how to use the measures in reports to better manage their operations. Imagine the rich benchmarking and best-practice sharing that this standardization and reporting would allow. And imagine the potential for university students in our profession to graduate with this common knowledge, just as accountants do today.

Of course, there are compelling business reasons to move in this direction as well. Today, about 84% of the value of S&P 500 firms is attributable to intangible assets, which have little visibility on the balance sheet. Just forty years ago the percentages were roughly reversed, with 17% of the value in intangibles and 83% in tangible assets. So, in the “old days” an investor could look at a balance sheet and get a good feel for the underlying value of a company; namely, its physical assets like buildings, equipment, land, and investments. Today, human capital drives the value of the intangible assets that make up most of the value of many companies. Human capital, however, does not appear on the balance sheet and appears on the income statement only as an expense to be minimized. This is why it is so important to provide visibility and transparency into human capital. Investors, customers, and employees need to know more about the human capital in an organization.

The standards are completely voluntary although the hope is that leading organizations will adopt them to provide this greater transparency for their investors and employees. The working group recognized that measurement and reporting can be burdensome for small organizations so only the most important and basic (and easy to calculate) measures are recommended for reporting. The working group also recognized that some measures, while important to properly manage human capital internally, may not be appropriate for public sharing and thus are recommended for internal use only.

There are 54 measures in all with 23 recommended for external reporting by large organizations. Many large organizations are already measuring most of these but not reporting them publicly. It is hoped they will begin using the ISO definitions as they become available and that they will begin to publicly report some of the measures. In some countries the government may mandate adoption of the ISO standard but that is not the case for the United States where organizations will be free to decide which, if any, of the recommended measures they report internally or externally.

Of the 23 measures recommended for public reporting by large organizations, there are five for recruitment and turnover, five for diversity, three for compliance, and three for health, safety and well-being. Productivity and workforce availability each have two, and cost, leadership and skills & capability each have one. The one for L&D (skills & capability) is total training cost which is recommended for external reporting for both large and small organizations. While not an ideal measure since it focuses on inputs rather than outcomes, it is at least a beginning and gives some indication of how much an organization values its employees.
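As a quick sanity check, the category counts above do sum to the 23 externally reported measures. A small sketch (category labels are paraphrased from this article, not official ISO terminology):

```python
# Counts of ISO human capital measures recommended for external
# reporting by large organizations, by category (labels paraphrased).
external_measures = {
    "recruitment_and_turnover": 5,
    "diversity": 5,
    "compliance": 3,
    "health_safety_wellbeing": 3,
    "productivity": 2,
    "workforce_availability": 2,
    "cost": 1,
    "leadership": 1,
    "skills_and_capability": 1,  # total training cost, the one L&D measure
}

# The categories account for all 23 externally reported measures.
assert sum(external_measures.values()) == 23
```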

Four other L&D measures are recommended for internal reporting by large organizations: percentage of employees who participate in training, average formal training hours per employee, percentage of employees who participated in training by category, and workforce competency rate. The first two are also recommended for small organizations.

I recommend that everyone in our profession become familiar with the standard and, if your organization is not already reporting these measures internally, consider adding them to your measurement and reporting strategy going forward. In the future, investors and employees will begin asking for these measures, and you really should be reporting on them internally to best manage your human capital. You can read more about them here in Workforce magazine, and the standard is available for purchase from ISO or ANSI.

Making an Impact Through Learning

I hope you can join us in Dallas for our Annual Conference in five weeks. It will definitely be worth your time and investment. You will find a community of like-minded learning professionals who are passionate about the measurement, reporting, and management of L&D. Industry thought leaders and leading practitioners will be there to share their thoughts and experiences with you. And, of course, you will learn a lot about what works and what doesn’t from fellow participants.

Our theme for this year’s conference is Making an Impact through Learning. We have selected speakers and topics to ensure that you will have the latest thinking from the best people in our profession on this always important topic. Like last year, we have scheduled plenty of time for networking so you can get your questions answered and learn from your colleagues.

Our keynoters this year are Patti Phillips and Jeff Higgins. I am sure you are familiar with Patti and Jack Phillips and their work on isolating the impact of learning and calculating ROI. (They will each also be conducting a breakout session.) You may not know Jeff, who has a very interesting background as both an accountant and CFO and as a senior HR executive, so he brings a unique perspective to our field. He has also been a contributor to the International Organization for Standardization’s (ISO) work on Human Capital Reporting standards, which are just now being implemented around the world. You will learn a lot just from these two speakers!

You will also have the opportunity to hear from the current CLO of the Year, Brenda Sugrue, who will share her thoughts on her personal journey and on EY’s transformation. Other industry thought leaders include Jack Phillips on ROI, Ken Phillips on measuring application, Roy Pollock on ensuring results, John Mattox on the future of measurement, Laurent Balague on the best use of surveys, Peggy Parskey, Adri Moralis, and Justin Taylor on reducing scrap measurement and reporting, Jeff Carpenter on data analytics and adaptive learning, Cushing Anderson on adding value, and Kevin Yates on impact measurement.

In addition, we have some great practitioners lined up to share with you including Gary Whitney (formerly with IHG), Toni DeTuncq and Teddy Lynch (NSA), Laura Riskus (Caveo), Mark Lewis (Andeavor), and Dean Kothia (HP).

Last, please do consider joining us for our pre- and post-conference workshops, which round out CTR Week. We offer a two-day workshop on measurement and reporting and one on the Six Disciplines of Breakthrough Learning. And after the conference we offer four half-day workshops. So, come early or stay after the conference to invest in your own development.

If you have come to one of our conferences or workshops in the past, welcome back. If this will be your first, we look forward to meeting you. Either way, you are an important part of this community, and everyone will benefit from your questions and contributions. As you know, the Center for Talent Reporting is a nonprofit, and our sole mission is to advance the profession. Each year’s CTR Week is an opportunity to come together, learn from each other, and grow. You will leave energized! Find out more about the conference and workshops on the CTR website. Hope to see you in Dallas!

A Look Back (and Ahead) on the Measurement and Management of L&D

The end of December is always a good time of year to look back at the L&D profession and ahead to where it is going. Looking back, I think we are blessed to have some great thought leaders who have provided a terrific foundation for both the measurement and management of L&D.

On the measurement side, I am particularly thinking of Don Kirkpatrick, who gave us the Four Levels, and Jack Phillips, who gave us isolated impact for Level 4 and ROI for Level 5. I am also thinking about all of the work done by the Association for Talent Development (ATD) to promote measurement and to benchmark key measures through their annual industry survey.

On the management side, I’m grateful again for the contributions from Don Kirkpatrick and Jack Phillips and now, Jim and Wendy Kirkpatrick and Patti Phillips; as well as for their guidance in how to manage—particularly with respect to partnering closely with goal owners and focusing on what it takes to achieve Level 3 application. In addition, I appreciate the work by Roy Pollock and his associates in giving us The Six Disciplines of Breakthrough Learning, which is a must-read book for anyone focusing on the measurement and management of L&D.

And, there are many more—like Ken Phillips, John Mattox, and others. And beyond L&D, the HR profession in general has benefited tremendously from thought leaders like Jac Fitz-enz, Jeff Higgins, John Boudreau, and Wayne Cascio. Like Kirkpatrick and Phillips did for L&D, these thought leaders basically invented measurement for the rest of HR.

We are very fortunate to have this strong foundation built over the last 30+ years. Looking ahead, the question is, where do we go from here? As a profession, we now have well over 170 measures for L&D and over 700 for HR in general. I don’t think we need more measures. What we do need, however, is a better way to utilize some of the measures we have—especially Levels 3 (application), 4 (results or impact), and 5 (ROI) for L&D. Level 3 is the starting point and should be measured for all key programs. Research by Phillips clearly indicates that CEOs want to see impact and ROI more than any other measures, and this will become increasingly urgent as the next recession draws closer (pencil in 2020 or 2021 for planning purposes). While some progress has been made over the last 10 years, it is not enough, so this remains a high priority for the profession moving ahead.

Another priority for us is to do a much better job managing learning. By this I mean running learning with business discipline, which starts with partnering closely with goal owners and agreeing on specific, measurable goals or targets for the learning (assuming, of course, that learning has a constructive role to play in achieving the business goal). Then, once specific plans have been made, execute those plans with the same discipline your colleagues in other departments use. This requires monthly reports and comparing results to plan so that corrective action can be taken as soon as possible to get back on plan and deliver promised results.

Managing learning this way is hard for many and some simply do not want accountability. But, it is an area where the payoff of better performance and greater value delivered per dollar is huge. In fact, I would contend that it has a bigger payoff than even measuring at Levels 3-5.

To summarize, I think there is an opportunity to structure our departments differently to enable better management overall. To develop a close partnership with goal owners (like the head of sales, for example) and to really run learning like a business, there needs to be one L&D professional in charge of the program(s) identified to meet the business need. This person would:

  1. Meet with the goal owner initially and oversee the needs analysis
  2. Get agreement up-front with the goal owner on specific measurable plans for the learning program as well as roles and responsibilities for both parties
  3. Supervise the design, development, and delivery of the program
  4. Meet regularly with the goal owner to manage the successful deployment of the learning, including reinforcement by leaders in the goal owner’s organization

I know this may be a challenge in some organizations, but I think it is indispensable for a successful partnership and for accountability within L&D.

I truly enjoy being a part of this great profession and the opportunity to work with all of you. We have come a long way in a relatively short period of time, and I believe the future is very bright if we continue to build on the foundation that has been laid to take learning measurement, reporting, and management to the next level.

I look forward to what we can accomplish working together! Happy New Year and Best Wishes for the coming year!

New Sample Reports for L&D Now Available

CTR publishes sample L&D reports to provide guidance and ideas to practitioners. We are happy to announce that we’ve completely updated and revised these reports to reflect our latest thinking and the evolution of Talent Development Reporting principles (TDRp).

Get the latest reports here.

If you’re a CTR member, you have access to the editable, Excel versions, which may be used as a starting point or template to customize for your own purposes. Non-members may download a PDF version of the reports to use as a reference.

What’s New

The revised resources include samples for both private and government sectors. Samples are also provided for qualitative, quantitative and mixed outcome measures, including:

  • Lists for effectiveness, efficiency and outcome measures
  • Statements for effectiveness, efficiency and outcome measures
  • Four different Summary Reports, each using different types of outcome measures
  • Four different Program Reports which vary in complexity
  • One Operations Report

Some Tips for Use

We recommend practitioners start by compiling the measures they anticipate using, with one list for each type of measure. If a measure will be managed to a target or plan during the year, then one of the three report types (Summary, Program, or Operations) is recommended. This will be the case for most of your important measures.

If the measure is to answer a question, determine if a trend exists or provide basic information, then a statement or scorecard is appropriate. These contain actual results, usually by month or quarter.

In contrast, reports will include a plan or target for each measure, as well as year-to-date results and, ideally, a forecast of how the measure is expected to end the year.
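The distinction between a statement and a report comes down to the fields each carries. Here is a hedged sketch of one measure in each format; the measure name and all numbers are hypothetical:

```python
# Hypothetical example: the same measure in statement vs report form.

# A statement carries actual results only, typically by month or quarter.
statement_row = {
    "measure": "Participants trained",
    "actuals_by_quarter": {"Q1": 220, "Q2": 245, "Q3": 260},
}

# A report adds a plan (target), year-to-date results, and a forecast
# of where the measure is expected to end the year.
report_row = {
    "measure": "Participants trained",
    "plan": 1_000,
    "ytd_actual": 725,
    "forecast_year_end": 980,
}

# With a plan in hand, progress can be managed, not just observed.
percent_of_plan = report_row["ytd_actual"] / report_row["plan"]  # 0.725
```

The extra fields are what make a report manageable: a gap between year-to-date results and plan signals that corrective action may be needed.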

More Updates To Come

Work is underway to update the lists, statements, and reports for the other HR disciplines, like leadership development and talent acquisition. If you have any questions or would like more information about these reports, please contact David Vance, Executive Director.


Impact and ROI…The Discussion Continues

This month, I continue the discussion on ROI (Level 5) and impact (Level 4). After publishing last month’s article, some readers responded that ROI was unnecessary, too confusing, or just too complicated. I argued that it is a simple, straightforward calculation once you have isolated the impact of learning, which can always be done using the industry-standard participant estimation methodology.

So, let’s take a look at why impact and ROI are so important. L&D exists for many reasons, but I would suggest the most important is to help our organizations accomplish their business goals—like higher sales, increased productivity, reduced costs, and greater customer or patient satisfaction. Of course, L&D is positioned to help achieve HR goals as well, like higher employee engagement, which in turn contributes indirectly to realizing all the business goals. Most L&D departments are also responsible for leadership development, which is sometimes itself a business goal.

These are all very important, high-level business or HR goals. Organizations invest considerable time and money in learning to achieve these goals, so it follows that CEOs would want to know what they are getting for their investment. What difference does learning make? Is it worth doing? Jack Phillips asked CEOs what they most wanted to see from L&D and over 90% said impact and over 80% said ROI. Less than 10% were receiving any impact data at the time and only slightly more were getting any ROI information. The best practice here is to carefully align your learning to these goals, plan it in partnership with the goal owner, set targets for impact and ROI, and then jointly manage the program to ensure the targets are achieved, including the level of application required to deliver the planned impact and ROI.

From both a planning and a follow-up point of view, measures of isolated impact and ROI are the only way to ensure that learning (not L&D, but learning, which must be co-managed by the goal owner and L&D in partnership) delivered on its promise. The CEO, head of L&D, program manager, and goal owner will want to know how much difference learning made and what opportunities can be identified for improvement or optimization. The upfront agreement on impact will drive a discussion on what is possible and what level of effort by all parties will be required. The upfront agreement on ROI is nothing other than the business case for the investment. Whether it is ROI or net benefits, all parties will want to be sure that benefits are expected to exceed the costs. If not, why would you proceed? Afterwards, everyone will be interested in what impact and ROI were achieved. Actuals will never exactly match plan, and that is okay, but hopefully they will be close. If not, there is a learning opportunity, since either the plan or the execution of the plan was off.

So, impact and ROI are important for high-level goals. How about lower-level goals and day-to-day business needs? Here, I think the answer depends. Most organizations have limited resources, so those resources should be focused first on the most important organizational goals. Plus, lower-level goals may not have easily identifiable outcomes, which makes it harder to identify impact. Some learning, like compliance training, onboarding, and basic skills training, is simply essential, and there is no need for a business case or ROI. For this essential learning the goal should be to conduct it as effectively and efficiently as possible. So, employ Levels 1-3 to measure effectiveness, and focus on minimizing time and cost to be efficient.

In conclusion, impact and ROI are important to learning professionals, organizational goal owners, and senior leaders like the CEO. These measures are important upfront to help plan the level of effort required and to reach agreement with the goal owner on roles and responsibilities. The higher the desired impact and ROI, the greater the effort will have to be. Once the planning is completed and the program is underway, the targets for impact and ROI can be compared to year-to-date results to determine if mid-course corrections are necessary. In other words, this is not a set-it-and-forget-it exercise. Programs must be managed monthly to achieve the planned impact and ROI. Last, at the end of the program, once the impact and ROI have been estimated, there will be an opportunity to learn from any variances to plan and improve.

ROI: Still Evoking Confusion and Controversy

The Return on Investment (ROI) concept for learning has been around since Jack Phillips introduced it about forty years ago. You might think by now that at least the confusion over its use would have diminished even if the controversy had not. Not even close. I continue to see the concept abused and misused on at least a monthly basis.

Here is an example from a recent blog: “Anyone who is involved in the business world is familiar with the concept of ROI. It’s punchy, with its relatively simple calculation, and can make or break a purchasing decision. But when it comes to learning initiatives, gathering the necessary data to calculate ROI is difficult, to put it mildly.” The author goes on to say that learning initiatives implemented as an integral part of business strategy can be measured by the success of that strategy.

There are several issues with the blog. First, the author appears to be confusing ROI used in the business world with ROI used in the learning world. Typically, a financial ROI is calculated to make the business case for an investment, like a new product line or facility. The definition of this financial ROI is not the same as the learning ROI. The numerator of the financial ROI is the net present value (NPV) of the increase in profit due to the investment. The denominator is the cost of the asset required for the project, such as the cost of a new facility. This will be capitalized as an asset on the balance sheet and depreciated each year.

Contrast this with the learning ROI, which has (usually) one year of contribution to profit in the numerator (so no need for NPV) and no asset whatsoever in the denominator. Instead, the denominator is the cost of the learning initiative, which is an expense item from the income statement. So, the two definitions differ, and the calculation for the financial ROI is actually more complicated than that for the learning ROI. Interestingly, it was exactly this difference in formulas that led my colleagues in accounting at Caterpillar to tell me that I could not keep referring to learning ROI as “ROI,” since it was confusing our leaders. So, I renamed it Return on Learning, or ROL, in 2002. The takeaway here is to remember that the two are not the same and to let those in your organization know that learning ROI is calculated differently.
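To make the contrast concrete, here is a minimal sketch of the two calculations as described above. All figures, as well as the discount rate and time horizon, are hypothetical and for illustration only:

```python
# Hypothetical figures throughout; for illustration only.

def npv(cash_flows, rate):
    """Net present value of yearly end-of-year cash flows at a discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Financial ROI: NPV of the increase in profit over the asset's life,
# divided by the capitalized cost of the asset (e.g., a new facility).
profit_increases = [40_000] * 5   # five years of added profit
asset_cost = 150_000              # capitalized on the balance sheet
financial_roi = npv(profit_increases, rate=0.08) / asset_cost

# Learning ROI: one year of contribution to profit (no NPV needed),
# net of the program's expense, divided by that expense.
gross_benefit = 250_000   # one-year profit contribution from isolated impact
program_cost = 100_000    # expensed on the income statement, not capitalized
learning_roi = (gross_benefit - program_cost) / program_cost  # 1.5, i.e. 150%
```

Note that the financial ROI requires discounting multi-year cash flows against a capitalized asset, while the learning ROI is a single-period ratio of net benefit to expense, which is exactly why the two should not be conflated.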

The author’s next point is that learning ROI is difficult to calculate. The ROI calculation itself is very easy: simply the net benefit of the initiative divided by the total cost. Net benefit is the gross benefit minus total cost. Generally, total cost is easy to calculate; the sticky part is the gross benefit, which is the increase in profit before subtracting the cost of the initiative. The gross benefit, in turn, depends on the isolated impact of the learning initiative, like a 2% increase in sales due just to the learning. Likely, this is what the author had in mind when complaining about the difficulty of calculating ROI.

However, isolation need not be that difficult. The learning team can work with senior leaders to agree on a reasonable impact for planning purposes. And, once the program is completed, there are several methods available to estimate actual impact which do not require special training or hiring a consultant. Often, it will suffice to simply ask the participants and their supervisors what they believe the impact was and then reduce the estimate somewhat to allow for the inherent error in any estimate. While not precise enough for an academic journal article, this is usually all we need in the business world to make appropriate decisions (like go/no-go after a pilot) or identify opportunities for improvement. It will also be close enough to determine if the investment in learning more than covered the total costs.
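As an illustration, the participant-estimation approach (in the spirit of the Phillips methodology) asks each participant what share of an observed improvement they attribute to the training and how confident they are in that estimate, then discounts the claimed benefit by both factors. A sketch with made-up survey responses:

```python
# Hypothetical survey responses. Each tuple is:
#   (observed benefit in dollars,
#    share of that benefit the participant attributes to the training,
#    participant's confidence in their own estimate)
responses = [
    (12_000, 0.50, 0.80),
    (8_000, 0.40, 0.70),
    (15_000, 0.60, 0.90),
]

# Discounting by both attribution and confidence adjusts the claimed
# benefit downward to allow for the inherent error in any estimate.
isolated_benefit = sum(
    benefit * share * confidence for benefit, share, confidence in responses
)
```

The point is not precision; it is a conservative, defensible estimate that is good enough for a go/no-go decision without hiring a consultant.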

Last, the author suggests that if the learning is integrated into the business strategy, its success can be measured by the success of the business goal. I strongly agree that we should always start with the end in mind (the business goal) and design the learning so it directly supports the business goal. Further, we need to work in close partnership with the business goal owner to ensure that the learning is provided to the right target audience at the right time and then reinforced to ensure the planned impact is realized. While this does provide a compelling chain of evidence that learning probably had the intended impact, it does not tell us how much impact the learning had or whether the investment in learning was worthwhile. Instead of a measurement, then, we are simply left with a “mission accomplished” statement.

Suppose sales rose 10 percent after the training. The question remains how much sales would have risen without any training. If sales would have increased by 10 percent anyway, the training clearly was not worth doing. How about if sales would have increased 9% without training? Was it worth doing? We still need to isolate the impact of training and calculate net benefit or ROI to answer this ultimate question of value – not to “prove” the value of the training but to enable us to make better decisions about programs going forward and avoid investing when the return does not justify the cost.
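The counterfactual arithmetic here is simple subtraction; a sketch with hypothetical numbers in the spirit of the example above:

```python
# Hypothetical numbers: sales grew 10% in the year after the training.
baseline_sales = 1_000_000       # annual sales before the program
growth_with_training = 0.10      # observed growth
growth_without_training = 0.09   # estimated counterfactual growth

# The training's isolated impact is only the difference between the
# observed growth and the growth that would have happened anyway.
incremental_sales = baseline_sales * (
    growth_with_training - growth_without_training
)

# If the counterfactual equals the observed growth, the incremental
# sales are zero and no ROI calculation can justify the program.
```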

So, the debate goes on. A friend asked me recently if I still believe in ROI. Yes, I do, but we need to use it wisely. It should not be used defensively to try to prove the value of an initiative or an entire department. In other words, it is not an exercise to deploy at the end of a program or fiscal year when you are under fire. Rather it should be used to help plan programs to ensure they deliver value and to identify opportunities for improvement going forward. It should be used to help decide which programs to continue and to identify ways to optimize programs. ROI will never take the place of careful planning, building relationships with goal owners or smart execution. You will always have to get the basics right first. Once you have done that, then ROI can be a powerful tool to make you a better manager.

The Portfolio Approach to Managing Learning

I just listened to a great webinar by Cristina Hall, Director of Product Strategy for Metrics That Matter (MTM), an organization that specializes in helping companies gather, analyze and report data on learning. They have developed a very helpful approach for thinking about and managing the mix of courses an organization offers.

At CTR we spend most of our time focusing on the key programs in support of key organization goals as well as the key initiatives of the L&D department head to improve internal efficiency and effectiveness. Consequently, we advocate the use of program reports by the program manager and the goal owner, which contain detailed data on the programs in support of key company goals like increasing sales. We also recommend summary reports for sharing with the CEO and other senior leaders to show alignment of learning to their goals and the expected impact from that learning. Last, we have operations reports for the L&D department head to use in managing improvement of a few select efficiency and effectiveness measures. So, the focus is primarily on targeted programs which are actively managed on a monthly basis to deliver planned results.

But what about all of the other courses an organization offers? Some companies have hundreds of courses in their learning management system (LMS). Most are not strategically aligned to the top goals of the company and are not actively managed, although they may be reviewed periodically. While not strategic, they can still be very important, and organizations want to be sure they offer the right mix of courses and that the offered courses add value. So, the question is, “How should all of these programs be managed?” This is where the MTM approach offers a very valuable contribution to the field.

MTM recommends assigning each course to one of four portfolios or categories. The four portfolios are: Drive Growth, Operational Efficiency, Mitigate Risk, and Foundational Skills. The portfolios reflect the reason for the learning and force the L&D group to answer the question of why the course is being offered. Is it to improve revenue? If so, it falls in the Drive Growth portfolio. Is it to reduce cost or improve productivity? If so, it falls in the Operational Efficiency portfolio. Is it to ensure compliance with regulations or standards or reduce the organization’s exposure to potential harm? If so, it falls in the Mitigate Risk portfolio. All other courses are assigned to the Foundational Skills category which would include all types of basic and advanced work skills, from very specific job training to communications skills and team building.

The beauty of this approach is in its simplicity. We could imagine more portfolios and we could argue some courses may fall in more than one portfolio, but assigning each course to just one of four portfolios forces the question of what the course is primarily designed to accomplish. Once all the courses have been assigned, imagine a grid with four quadrants—one for each portfolio. Now populate the box for each quadrant with measures that will help you manage your portfolios. First, show the percentage of courses and hours as well as the amount of L&D budget dedicated to the portfolio to determine if your mix is right. For example, it may be that 5% of the L&D budget is being spent on driving growth, 35% on improving efficiencies, 10% on risk and 50% on foundational skills. This may be the right mix, especially for an L&D department with responsibility for a lot of basic skills training. On the other hand, a CEO might look at the mix and be surprised that more effort isn’t being allocated to driving growth and improving efficiency. Just sharing this data can lead to a great discussion about the purpose of L&D and its priorities.
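To make the mix concrete, here is a minimal sketch in Python of how the portfolio percentages could be computed from a course list. The course names, hours, and budget figures below are entirely hypothetical and chosen to reproduce the 5/35/10/50 example split:

```python
# Hypothetical course list; titles, hours, and budgets are illustrative only.
courses = [
    {"title": "Advanced Selling",  "portfolio": "Drive Growth",           "hours": 1200, "budget": 50_000},
    {"title": "Lean Basics",       "portfolio": "Operational Efficiency", "hours": 3000, "budget": 350_000},
    {"title": "Safety Compliance", "portfolio": "Mitigate Risk",          "hours": 800,  "budget": 100_000},
    {"title": "MS Office",         "portfolio": "Foundational Skills",    "hours": 4000, "budget": 500_000},
]

def portfolio_mix(courses, field):
    """Share of a given field (e.g. 'budget' or 'hours') by portfolio, as percentages."""
    total = sum(c[field] for c in courses)
    mix = {}
    for c in courses:
        mix[c["portfolio"]] = mix.get(c["portfolio"], 0) + c[field]
    return {p: round(100 * v / total, 1) for p, v in mix.items()}

print(portfolio_mix(courses, "budget"))
# {'Drive Growth': 5.0, 'Operational Efficiency': 35.0, 'Mitigate Risk': 10.0, 'Foundational Skills': 50.0}
```

The same function run with `field="hours"` would show whether the staffing and delivery effort tells the same story as the budget.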

Measures could be added for each portfolio to show the number of participants, level 1 reaction, level 3 application, and some indicators of level 4 impact or outcome. The data could also be color coded to show comparison to last year or better yet to plan (or target) for this year. This portfolio approach can also be incorporated directly into TDRp’s operations report where the four portfolios serve as headings to organize the measures.

In conclusion, I strongly recommend an approach like MTM’s to better understand the existing mix of courses and to ensure alignment with the priorities of the CEO in terms of budget and staffing. I also believe the portfolio approach will be helpful in monitoring the courses throughout the year for replacement or revision.

Great Advice From the Kirkpatricks

By Dave Vance

In their July 25 newsletter the Kirkpatricks ask if you are a Training Fossil. I love the imagery and their message. Here is what they say:

“The role of training professionals has changed as technology has advanced. Fifteen years ago, it may have been enough to design or deliver a great training class. If that’s all you do these days, however, you are in danger of being replaced by a smartphone app or other technology that can do the same things quickly and for free, or nearly free.

In a similar way, training professionals need to design and build “roads and bridges” into the job environment before, during and after training, and then monitor them regularly. Instructional designers need to do more than design formal classroom or virtual instruction. Trainers need to do more than deliver it.”

Their concern is a good one and I think in the near future we will see aggregators who can provide generic content on any platform so efficiently that internally developed content will not be able to compete. So, if a training department is focused solely on providing generic content, often in an effort to boost employee engagement, it risks becoming extinct just like the dinosaurs.

Unlike the dinosaurs, however, we can avoid extinction by making sure we are adding true value to our organizations. As Jim and Wendy explain, this goes far beyond providing generic content. It starts with understanding your organization’s goals and priorities and includes establishing a very close, strategic relationship with senior business leaders. Put simply, start with the “business” end in mind, as the Phillipses and all leading authors in our field recommend. Next, design the entire learning experience and define the roles that the business goal owner and L&D must play in order for the learning to be appreciated, applied and impactful. This starts by engaging the business goal owner to agree on reasonable expectations from the learning if both of you fulfill your roles. Neither party can make the learning a success on their own. While the learning department will play the biggest role in designing and delivering the learning, the business goal owner and supervisors will need to play the biggest role in communicating the need for the learning and then reinforcing the learning to ensure application.

Roy Pollock describes what is necessary for successful learning in The Six Disciplines of Breakthrough Learning, which I highly recommend to understand learning as a process and not an event. In a similar fashion, Jack and Patti Phillips emphasize the importance of design thinking in their latest books. All agree that learning cannot be just an “event”. For true impact it must be carefully planned and managed in close partnership with the business, and the follow-up after the “event” to ensure application and impact is just as important as the design and delivery of the “event” itself.

This process-oriented approach, coupled with a close strategic partnership with the business, is what will allow the profession to add true value. It also will prevent us from becoming extinct since we add value at every stage of the process, from consulting with the business up front to helping the business reinforce the learning with their employees on the back end. While we may purchase generic content where it makes sense, our value-add goes far beyond the design and delivery of content. We will not be in danger of becoming training fossils as long as we focus on partnering with the business to meet their needs by delivering the entire learning experience required to make a difference in results.

Impact and ROI of Learning: Worth Pursuing or Not?

Several articles in the last month have suggested the profession should step back from trying to isolate the impact of learning and the resulting return on investment (ROI). The authors argue that it is difficult or impossible to isolate the impact of learning from other factors, and that no one will believe the results anyway, so why bother. Instead, they call for showing the alignment of learning to business goals, focusing on the easier-to-measure levels 1-3 (participant reaction, amount learned, and application), and finally focusing on employee engagement with learning (consumption of learning and completion rates). Aligning learning to business goals and measuring levels 1-3 are always good ideas, so no disagreement there. And, depending on your goals for learning and the type of learning, measuring average consumption and completion rates may also make sense. However, for certain types of learning there is still a need to measure impact and ROI (levels 4 and 5).

The primary reason is that senior corporate leaders (like the CEO and CFO) want to see it, and so should the department head and program director. Research by Jack Phillips in 2010 showed that CEOs most want to see impact and ROI but instead are often provided with only participation data (number of learners, number of courses, etc.), participant reaction (level 1) and cost. While these measures are helpful, they don’t answer the CEO’s question of what they are getting for their investment. CEOs and CFOs want to know what difference the training made and whether it was worth the time and effort. The program director and CLO should also be curious about this, not from a defensive point of view (like proving the value of training), but from a continuous improvement perspective where they are always asking what we learned from this project and how we can improve next time.

It is true that level 4, the isolated impact of learning on a goal, is harder to determine than levels 1-3. Sometimes there will be a naturally occurring control group which did not receive the training. In this case, any difference in performance between the two groups must be due to the training. In other cases, statistical analysis like regression may be used to estimate the isolated impact of the training. The most common approach, however, is participant and leader estimation, which is generally good enough to roughly determine the impact of learning and definitely good enough to learn from the project and to identify opportunities for improvement. In a nutshell, the methodology calls for asking participants to estimate the impact from just the training and to also share their confidence in that estimate. The two are multiplied together to provide a confidence-adjusted estimate of the isolated impact of learning. For example, one participant may say that the training led to a 40% increase in performance (like higher sales) but is only 50% confident in the 40%. The confidence-adjusted impact would be 40% x 50% = 20% increase in performance. Repeat for others and average. Best practice would be to ask supervisors what they believe as well. Then share with the initiative’s sponsor and other stakeholders and modify as necessary. Once the level 4 isolated impact is determined, ROI is very straightforward.
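The estimation arithmetic above is simple enough to sketch in a few lines of Python. The response figures and dollar amounts below are hypothetical, chosen only to illustrate the confidence-adjusted averaging and the ROI step that follows:

```python
def confidence_adjusted_impact(estimates):
    """Average of impact x confidence across respondents (each value between 0 and 1)."""
    adjusted = [impact * confidence for impact, confidence in estimates]
    return sum(adjusted) / len(adjusted)

def roi_percent(isolated_benefit_dollars, total_cost_dollars):
    """Net benefit divided by total cost, expressed as a percentage."""
    return 100 * (isolated_benefit_dollars - total_cost_dollars) / total_cost_dollars

# Three hypothetical participants: (estimated impact of training alone, confidence in that estimate)
responses = [(0.40, 0.50), (0.30, 0.80), (0.20, 0.90)]
avg_impact = confidence_adjusted_impact(responses)  # (0.20 + 0.24 + 0.18) / 3, roughly a 21% increase

# If that isolated impact is valued at a hypothetical $150,000 against a $100,000 program cost:
print(roi_percent(150_000, 100_000))  # 50.0 (%)
```

In practice the supervisor estimates mentioned above could simply be appended to the same list before averaging.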

The Existential Question: Why Do We Measure?

There are several excellent answers to the question: Why Do We Measure? And the answers will provide much-needed direction to your measurement strategy. The most common answers are to:

  1. Answer questions
  2. Show results
  3. Demonstrate value
  4. Justify our budget (or existence)
  5. Identify opportunities for improvement
  6. Manage results

We will examine each in turn and comment on the implications of the answer for your strategy.

The most basic reason for measuring is to answer questions about programs and initiatives. For example, someone wants to know how many courses were offered in the year, how many participated in a particular course, or what the participants thought about the course. Assuming this information is already being collected and stored in a database or captured in an Excel spreadsheet, simply provide the answer to the person who asked for it. If it is a one-time-only (OTO) request, there is no need to create a scorecard or dashboard. If someone wants to see the same information every month, then it does make sense to create a scorecard to show the data by month.

The second most common reason is to show results. L&D departments produce a lot of learning each year and usually want to share their accomplishments with senior management. In this case, “results” generally translates to activity, with departments sharing their results in dashboards or scorecards which show measures by month (or quarter) and year-to-date. The scorecard might also show results for the previous year to let senior management know they are producing more learning or improving.

The third reason is to demonstrate value. It is also very common and is just an extension of the second. Some believe that simply showing results demonstrates value while others believe that demonstrating value requires a comparison of activity or benefit to cost. For example, a department might demonstrate value by calculating cost per participant or cost per hour of development and showing that their ratios are lower than industry benchmarks or perhaps last year’s ratios.  Some adopt a higher standard for value and show the net benefit or ROI of a program. Net benefit is simply the dollar value of the impact less the total cost of the program. Any value above zero indicates that the program has more than paid for itself. Return on investment (ROI) is simply net benefit divided by total cost expressed as a percentage, and any positive percentage indicates the program more than paid for itself. Measures to demonstrate value are usually shared at the end of a program or annually rather than monthly.
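The two formulas in this paragraph can be written down directly. A minimal sketch, with hypothetical dollar figures:

```python
def net_benefit(benefit_dollars, total_cost):
    """Dollar value of the impact less the total cost of the program."""
    return benefit_dollars - total_cost

def roi_percent(benefit_dollars, total_cost):
    """Net benefit divided by total cost, expressed as a percentage."""
    return 100 * net_benefit(benefit_dollars, total_cost) / total_cost

# Hypothetical program: $250,000 in dollar-valued impact against $200,000 of total cost.
print(net_benefit(250_000, 200_000))  # 50000 -> above zero, so the program more than paid for itself
print(roi_percent(250_000, 200_000))  # 25.0  -> positive percentage, same conclusion
```

Either number tells the same story; ROI simply normalizes the net benefit by cost so programs of different sizes can be compared.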

The fourth reason is to justify the L&D budget or the department’s existence. This is an extension of the third reason where justification is the reason behind demonstration of value. In my experience this is almost always a poor reason for measuring. Typically, a department measuring to justify its budget is one which is not well aligned to the goals of the business, lacks strong partnerships with the business, and has poor or nonexistent governing bodies. Not only is it a poor reason for measuring, but the effort in most cases is doomed to fail even with high ROIs. In this situation, energy would be better spent addressing the underlying problems.

The fifth reason is to identify opportunities for improvement. This is a great reason for measuring and indicates a focus on continuous improvement. In this case scorecards may be generated to show measures across units, courses and instructors with the goal of discovering the best performers so that the lessons learned from them can be broadly shared. There may also be a comparison to best-in-class benchmarks, again with an eye toward identifying areas for internal improvement. Another approach would be to create a scorecard, graph or bar chart with monthly data to determine if a measure is improving or deteriorating through time.

The last reason to measure is to manage. This is perhaps the most powerful, and least appreciated, reason to measure. A well-run L&D department will have specific, measurable plans or targets for its key measures. These plans will be set at the start of the fiscal year and the L&D leaders will be committed to delivering the planned results. By definition, this approach requires the use of measures, and time will be spent selecting the appropriate measures to manage and then setting realistic, achievable plans for each. Once the plans are done, reports need to be generated each month comparing year-to-date results to plan in order to answer two fundamental questions: 1) Are we on plan, and 2) Are we likely to end the year on plan? If the answer is “no” to either question, the leaders need to take appropriate action to end the year as close to plan as possible. In this case, reports must be generated each month showing plan, year-to-date results, and ideally a forecast of where each measure is likely to end the year.
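The two monthly questions can be reduced to a simple check. This Python fragment is illustrative only: the 5% tolerance and the participant figures are assumptions, and for a cost-type measure (where lower is better) the comparisons would run the other way:

```python
def monthly_status(ytd_actual, ytd_plan, eoy_forecast, annual_plan, tolerance=0.05):
    """Answer the two questions: (1) on plan year to date? (2) likely to end the year on plan?

    Assumes a 'higher is better' measure; a cost measure would flip the comparisons.
    """
    on_plan_ytd = ytd_actual >= ytd_plan * (1 - tolerance)
    on_plan_eoy = eoy_forecast >= annual_plan * (1 - tolerance)
    return on_plan_ytd, on_plan_eoy

# Hypothetical figures: 7,200 participants YTD vs a plan of 8,000;
# forecast of 11,500 for the year vs an annual plan of 12,000.
print(monthly_status(7_200, 8_000, 11_500, 12_000))  # (False, True): behind YTD, but forecast to finish near plan
```

A "no" on either question is the trigger for the management actions described above, not just a line on a report.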

A good measurement and reporting strategy will address all the reasons above except number four (justify existence). Reports, dashboards or scorecards are required in some cases but not in others, just as monthly reporting is required in some cases but not in others. If there are limited resources, it is best to generate regular reports only for those measures to be actively managed (reason six) or those measures used to show results (reason two). Reports can be generated on an as-needed basis in other cases and most measures can simply be left in the database until they are needed to answer a specific question.

What Finance and Accounting Should Expect from HR

Finance and accounting often do not hold HR to the same standards as other departments because it is believed that HR is different and people initiatives cannot be measured or managed like other business initiatives. However, having different standards for HR is a disservice to both HR and the company, resulting in lower expectations and less return from HR expenditures.  HR professionals are quite capable of playing by the same rules as their colleagues in other departments and can deliver tremendous business value, but finance and accounting professionals need to have a realistic sense of what may be expected from HR.

I believe that HR, including L&D, should be held to the same business standards as other departments. This means that finance and accounting should expect the same from HR as any other department – not more, not less – just the same. At a minimum, finance and accounting should have five expectations for HR.

First, since HR is a support function, it should align some of its initiatives to the organization’s top business goals. This means that HR, through its many departments like L&D, talent acquisition and total rewards, should have programs or initiatives that directly support business goals like increasing sales, reducing defects, improving the quality of patient care, etc. Of course, HR will also have programs in support of HR goals like increasing employee engagement, improving leadership and reducing regrettable turnover. While these programs are very important and will indirectly contribute to achieving all the business goals, the expectation should be for HR to directly contribute to the business goals as well, which it can easily do.

The second expectation is that HR will prepare a business case for major programs or initiatives, just like other departments are required to do. The business case will bring together the planned benefits and costs of the initiative, making explicit all the key assumptions and risks. The business case should include the HR ROI (return on investment), commonly used in L&D, which is simply the net benefit divided by the total cost. This will help finance and accounting make better decisions about funding and will allow comparisons among different HR requests, although it will not allow a direct comparison to the financial ROIs from other areas since the formulas are different for HR and financial return on investment.

The third expectation is that HR will create a business plan for the coming fiscal year, just like other departments do. The plan may be written or PowerPoint depending on the organization’s culture and should include at least an executive summary, a review of last year’s accomplishments, the business case for the coming year for at least the major initiatives, and the budget, staffing, and other resources required to deliver the plan. The plan will include specific, measurable goals for all key initiatives. The process of creating a plan is just as important as the finished product. A good business planning process will ensure the right questions have been raised and the right people have been involved to yield the organization’s best thinking about what can be accomplished in the coming year.

The fourth expectation is that HR will execute its approved business plan with discipline. Now that specific, measurable goals (targets, plans, KPIs – whatever you prefer to call them) have been set in the business plan, HR needs a process to ensure these planned results are delivered. Disciplined execution requires monthly reports comparing year-to-date (YTD) results to plan and ideally a forecast for how the year is likely to end compared to plan as well. The reports should include all the key measures identified in the business plan and should be used in a monthly meeting of senior leaders dedicated to actively managing results to come as close to plan as possible by the end of the year. Disciplined execution requires that two questions be answered every month: 1) Are we on plan year to date? and 2) Are we going to end the year on plan? Answers to these two questions will drive management efforts for the rest of the year.

The fifth expectation is that HR will be accountable for results. The approved business plan contained the specific, measurable goals for the year. Now HR needs to execute with discipline and be willing to be held accountable for achieving its plans, just the same as any other department.

These are the five most important expectations that finance and accounting should have for HR. Each is a realistic expectation that is being met by some HR departments today. Some in and outside HR have argued that HR is different and cannot meet these expectations. They claim that since HR works with people, specific and measurable goals cannot be set and HR initiatives cannot be managed the same way as in other departments. Consequently, they don’t believe in creating business cases or business plans or reports comparing progress to plan. As an HR insider, I believe they are wrong. HR can meet these expectations. HR is no more difficult to manage than sales, manufacturing, quality or other departments, and in some ways it may be easier since we often have more control over the outcome.

Meeting these expectations will strengthen HR’s seat at the table and vastly improve our credibility in the eyes of our colleagues. Let’s show them we can play by the same rules and realize our full potential to contribute to our organization’s success.

The Future of Measurement and Reporting for Learning and Development

2018 CTR Conference Summary

We had our best conference yet February 21-22 in Dallas. We had more than 90 participants despite widespread flight cancellations and delays due to terrible weather. The energy level, sharing, and participation more than made up for the weather.

Roy Pollock, co-author of The Six Disciplines of Breakthrough Learning, was our first keynoter and really set the tone for the conference by reminding people that learning must be connected to the business.  Our panel session before lunch gave Kimo Kippen, Adri Morales, Terrence Donahue, and Peggy Parskey an opportunity to share their thoughts on the greatest opportunities ahead of us. All agreed we live in a very exciting time for L&D and that advances in technology and learning platforms will allow us to reach more people in more impactful ways than ever before. Adri Morales, 2016 CLO of the Year, then stayed on the stage to share her personal journey and thoughts with the group.

The afternoon of Day One offered six more breakout sessions for a total of nine the first day. Predictive analytics, measuring informal learning, ROI Goal Setting, xAPI, spending levels, and the application of the 6D’s at Emerson were just a few of the topics that generated a lot of interest. Participants came back together late in the afternoon to learn about  a new maturity model for learning developed by Peggy Parskey, Cushing Anderson and Kevin Yates which was in turn used to help judge the First Annual TDRp Excellence Awards. The 2018 winner was USAA. We closed out Day One by having speakers meet with participants in the café for wine and lots of good discussion.

Day Two began with a keynote by Marianne Parry and Lee Webster on the brand new ISO standards for human capital reporting. The audience was very interested to hear about the recommended measures and next steps in the process to adopt these. More breakout sessions followed to engage participants on measurement strategy, isolating impact, level 3 design, alignment and the future of the corporate university. Leaders from Humana, Siemens and USAA also shared their measurement strategies and journeys. We wrapped up with a panel hosted by Patti Phillips to learn what Lorrie Lykins, Paul Leone and John Mattox saw for the future of measurement. Much like Wednesday’s panel, the group predicted a bright future for measurement enabled by technology, xAPI, and advances in the field, especially in predictive analytics.

Everyone left feeling recharged and excited after a day and a half of great speakers and provocative, informative sessions. And it just feels good to be around like-minded people who understand your challenges and who all want to improve.

Planning for next year’s conference is already underway. It will be February 20-21 in Dallas at the same venue. Patti Phillips, CEO of the ROI Institute, and Jeff Higgins, CEO of the Human Capital Management Institute (HCMI) will keynote and Lee Webster will return to give us an update on the ISO standards for human capital reporting. CTR is poised to help drive an ISO effort to establish TDRp as the framework for the types of measures and reports for our field so we will have an update on that as well.

Hope to see you in Dallas in 2019!

Alignment Revisited

Tim Harnett in a recent Human Capital Media Industry Insights article (undated) reminds us of the importance of aligning L&D initiatives to organizational goals. He shares some sobering research indicating that “only 8% of L&D professionals believe their mission is aligned to the company strategy” and “only 27% of HR professionals believe there is a connection between existing [learning] KPIs and business goals”. So, despite our perennial intentions as a profession to do a better job of alignment, we still have a long way to go.

I am afraid, though, that he does not go far enough in recommending the actions we need to take. He suggests that “identifying and tracking KPIs related to L&D initiatives is the best way to align L&D to organizational goals and make the business case for development programs”. For KPIs (key performance indicators) he is thinking of measures like level 1 participant satisfaction, learning hours, level 3 application and employee satisfaction. While these are all important measures and can indeed help make the case for learning, they actually have nothing to do with alignment.

Here is why. Alignment is the proactive, strategic process of planning learning to directly support the important goals and needs of the organization. Alignment requires L&D leaders to discover the goals and needs of the organization and then go talk to the goal owners to determine if learning has a role to play. If it does, the two parties need to agree on the specifics of the learning initiative including target audience, timing, type of learning, objectives, cost, and measures of success (ideally the outcome or impact of the initiative on the goal or need). They must also agree on the mutual roles and responsibilities required from each party for success including communication before the program and reinforcement afterward.

Measures or KPIs will come out of this process but the measures are NOT the process. It is entirely conceivable to have a learning program with great effectiveness and efficiency measures indicating many employees took it, liked it, learned it, and applied it, but the program was NOT aligned to the goals of the organization and should never have been funded. This is true even if it had a high ROI. Great numbers do not take the place of a great process, and alignment cannot be established after the fact simply by looking back at measures or KPIs.

Conversely, you can easily imagine a program that is definitely aligned to one of the organization’s top goals but was executed so poorly that its effectiveness numbers came in very low. So, alignment is about the process of working with senior organizational leaders to plan learning initiatives which directly address their goals and needs. It must start with the organization’s goals and not with existing learning initiatives.

Last, there is much discussion these days about using employee engagement as an indicator of alignment. It is not, for all the reasons discussed above. It is simply another measure and not a process. For engagement to be an indicator of alignment, you would have to assume that employees know the organization’s goals as well as the senior leaders do, and that learning aligned to those goals is the primary driver of engagement. Both of these assumptions are likely to be false. A focus on employee engagement would be appropriate only if engagement is the highest priority goal of the organization. In most organizations, business goals like higher revenue, lower costs, and greater productivity are more important, although higher engagement is always a good thing and will contribute indirectly to achieving the business goals.

In conclusion, I am happy to see a focus on this important topic of alignment, but success will require us to work the process of alignment with senior leaders. At the end of this process, every L&D department should have a one-pager listing the organizational goals in the CEO’s priority order and, whenever it is agreed that learning has a role to play, a list under each goal of the key learning programs that are planned. This is how you demonstrate alignment, not through KPIs or measures.

Portfolio Evaluation: Aligning L&D’s Metrics to Business Objectives

by Cristina Hall
Director of Advisory Services – CEB, now Gartner

Using data built on standard metrics has become table stakes for L&D organizations looking to transform and thrive in a rapidly-changing business environment.

A critical challenge that remains for many organizations, however, is how to prioritize and structure their metrics so that the numbers reinforce and showcase the value L&D is contributing to the business.  It’s important to select measurement criteria which reflect L&D’s performance and contextualize them with outcome measures used by the business.

Applying a Portfolio Evaluation approach to Learning and Development provides the linkage needed to address this challenge. It is a clear, outcome-centered framework that can be used to position L&D’s contributions in business-focused terms, at the right level of detail for executive reporting.

How Does L&D Deliver Value?

Delivering training does not, in itself, deliver value.  Training is a tool, a method, to develop employees’ knowledge and skills so they will deliver more value to the business.  The value that training programs deliver aligns to four strategic business objectives.

Driving Growth

Courses aligned to the Drive Growth objective are designed to increase top-line growth, thus growing revenue and market share.  The organization tracks metrics related to sales, renewals, upsells, customer loyalty and satisfaction, etc., while L&D can track leading indicators in each of these areas based on employees’ assessment of how much these areas will improve based on each training program they have attended.  These courses are intended to drive competitive or strategic advantage by focusing on organization-specific processes, systems, products, or skillsets.  Examples include courses that are designed to increase sales, customer retention or repeat business, new product innovation, or help managers best position their teams for business growth.

Increasing Operational Efficiency

Courses aligned to Operational Efficiency increase bottom-line profitability.  The business tracks metrics related to productivity, quality, cost, etc., while L&D can track leading indicators in each of these areas based on employees’ assessment of how much these areas will improve based on each training program they have attended.  These courses are intended to drive competitive or strategic advantage by focusing on organization-specific processes, systems, or skillsets.  Examples include courses that are designed to increase productivity, decrease costs, increase process innovation, or help managers maximize bottom line performance.

Building Foundational Skills

Courses aligned to the Foundational Skills value driver are designed both to ensure that gaps in employee skills can be addressed and to demonstrate that employees can grow and develop to provide even more value to the business; it is frequently less expensive to fill a minor skill gap than to replace an employee who is already onboarded and semi-productive. The business tracks metrics related to bench strength, employee engagement, turnover, promotion rates, etc., while L&D can track leading indicators in each of these areas based on employees’ assessment of how much these areas will improve based on each training program they have attended. These courses tend to be off-the-shelf content rather than custom content specific to the business. Examples include time management, MS Office, and introductory or generalized coaching or sales courses.

Mitigating Risk

Courses aligned to the Mitigate Risk value driver are designed to shield the business from financial or reputational risk by ensuring employee compliance with specific policies or maintenance of specific industry certifications.  The business tracks metrics related to safety, legal costs, etc., while L&D can track leading indicators in each of these areas based on employees’ assessment of how much these areas will improve based on each training program they have attended.  These courses tend to be focused on compliance, regulatory, and safety training, and tend to incorporate content similar to that of other courses in the organization’s industry.

Become a Portfolio Manager

Every learning asset, whether informal or formal, can be tied back to one of the four drivers of value. The variety and depth of metric collection and the performance expectations associated with those metrics differ across these value drivers, which is why grouping courses or learning assets into Portfolios is helpful. L&D leaders become investment managers, monitoring and managing assets that are expected to produce comparable results to affect the performance of people, who in turn affect the performance of the business.

Getting Started

  1. Align metrics to Portfolios: what is most important? What data is needed?
  2. Align learning assets to Portfolios: this ensures that the right metrics are collected.
  3. Gather the data: gather training effectiveness data from learners and their managers and combine it with efficiency data from the LMS and Finance and outcome data from the business.
  4. Review, interpret, and share: use the metrics to communicate L&D’s contribution to business goals, confirm alignment, and inform strategic decision-making.
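
The alignment and roll-up steps above can be sketched in a few lines of code. This is a minimal illustration, assuming hypothetical course records tagged with a Portfolio and a learner-reported effectiveness score; the names and numbers are invented, not from the article:

```python
from collections import defaultdict

# Hypothetical course records: each learning asset is tagged with one of the
# four value-driver Portfolios and carries a learner-reported effectiveness
# score (1-5 scale). Names and numbers are invented for illustration.
courses = [
    {"name": "Consultative Selling",    "portfolio": "Driving Growth",         "effectiveness": 4.2},
    {"name": "Upselling Workshop",      "portfolio": "Driving Growth",         "effectiveness": 4.6},
    {"name": "Lean Process Basics",     "portfolio": "Operational Efficiency", "effectiveness": 3.9},
    {"name": "Time Management",         "portfolio": "Foundational Skills",    "effectiveness": 4.0},
    {"name": "Annual Safety Refresher", "portfolio": "Mitigating Risk",        "effectiveness": 3.5},
]

def portfolio_averages(courses):
    """Roll individual course scores up to a per-Portfolio average."""
    by_portfolio = defaultdict(list)
    for course in courses:
        by_portfolio[course["portfolio"]].append(course["effectiveness"])
    return {p: round(sum(scores) / len(scores), 2)
            for p, scores in by_portfolio.items()}
```

Run against the sample data, `portfolio_averages(courses)` reports Driving Growth at 4.4, an executive-level view per Portfolio rather than per course; the same grouping can carry efficiency and outcome metrics alongside effectiveness.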

For additional detail regarding the Portfolio Evaluation approach, download our white paper, Aligning L&D’s Value with the C-Suite.

About CEB, Now Gartner

Leading organizations worldwide rely on CEB services to harness their untapped potential and grow. Now offered by Gartner, CEB best practices and technology solutions equip clients with the intelligence to effectively manage talent, customers, and operations. More information is available at


Personalized Learning: Means to an End or the End Itself?

The learning field is currently focused on personalized learning, which might be defined as providing learners with individualized, custom content in the way each prefers to learn. Advances in digital learning and platforms, combined with an explosion in learning content, make this advancement not only possible but highly desirable. It has the potential to contribute significantly to better learning experiences and higher application rates, leading to better outcomes. That said, there is a danger that some will consider personalized learning not just as a strategy to improve learning but as the goal itself, the reason or mission for the learning department. This brings us to a discussion of means versus ends and the importance of keeping the two straight.

I would suggest that personalized learning is best considered a means to an end and that it will almost never be an end in itself. Over the past several years, some in our profession have advocated that it is the end. They have redefined their mission as a learning department or corporate university to provide learners with whatever they want in whatever form they want it, which is an extension of our definition of personalized learning above. At its heart, this issue of means versus ends is far from a matter of semantics; rather, it is a fundamental question about the reason for the existence of corporate training.

Imagine a discussion with your CFO or CEO. They ask what your strategy is for next year. You say it is personalized learning and that the majority of your resources will be dedicated to providing more and better personalized learning. They ask why. You tell them learners will be more engaged, will learn more, and will retain more. I guarantee you that in their mind you never really answered the question. Your answer is good as far as it goes, but it doesn't get to the business reason for learning. You described a process improvement, one that will deliver learning more effectively and efficiently, and that is good but not enough. Basically, you are improving the means to an end by personalizing the learning, but they want to know what the end is. In their mind, the end may be higher sales, greater productivity or quality, fewer accidents, lower operating costs, or higher employee engagement, but you didn't connect the dots for them. By not appreciating the difference between means and ends, you focused just on the means when you needed to also focus on the end. Better to tell them that you will improve learning in order to achieve higher sales, lower costs, or whatever their goals are. These are the ends they care about, and once they know that you are working toward the same ends, they will be more receptive to your request for resources to improve the means (personalized learning).

As a profession, we must continue to make great strides in process improvement and personalized learning is one such process improvement. But it is not and never will be an end in itself any more than e-learning, blended learning or mobile learning are ends in themselves. We don’t provide learning just to provide learning. The learning must serve a higher need. It must serve an end and that end should be one of your organization’s high-level goals or needs. With this understanding we also can see that personalized learning is not the opposite of company learning which has been defined as learning directed by the company (not the employee) to meet company needs. Instead, personalized learning should support the company goals and needs even if it is directed or mandated by the company. If at the discretion of the employee it is most likely to improve employee engagement which is a company goal in almost all organizations. If directed by the company, the personalized learning will support one of the other company goals like higher sales. So, personalized learning may be at the discretion of the employee or at the discretion of the company, but in either case, it is a means to an end.

Informal Learning Evaluation: A Three-Step Approach

by John R. Mattox, II, PhD
Managing Consultant
CEB, now Gartner

You may recall that roughly 20 years ago, eLearning was seen as the next new thing. Learning leaders were keen to try out new technologies, while business leaders were happy to cut costs associated with travel and lodging. The eLearning cognoscenti claimed that this new learning method would deliver the same results as instructor-led training. They passionately believed that eLearning would become so prevalent that in-person classrooms would disappear like floppy discs, typewriters, and rotary telephones. The learning pendulum was ready to swing from the in-person classroom experience to the digital self-paced environment.

In time, the fervor surrounding eLearning waned and practical experience helped shape a new learning world where the pendulum was not aligned to one extreme or the other. Effective formal learning programs now employ a blended approach comprised of multiple components, including in-person instructor-led classes, virtual instructor-led events, self-paced web-based modules and maybe, just maybe, an archaic but valuable resource like a book.

Informal learning is the new hot topic among leaders in the L&D field. Three things appear to be driving the conversation: the 70/20/10 model, technology, and learners themselves. While the 70/20/10 model is by no means new—it was developed in the 1980s at the Center for Creative Leadership—it has become a prominent part of the conversation lately because it highlights a controversial thought: all of the money and effort invested to create formal learning accounts for only 10% of the learning that occurs among employees. Only 10%! Seventy percent of the learning comes from on-the-job experience and 20% comes from coaching and mentoring.

These proportions make business leaders ask tough questions like, “Should I continue to invest so much in so little?” and “Will formal learning actually achieve our business goals or should I rely on informal learning?” L&D practitioners are also wondering, “Will my role become irrelevant if informal learning displaces formal learning?” or “How can L&D manage and curate informal learning as a way of maintaining relevance?”

The second influencer—technology—drives informal learning to a large extent by making content and experts easy to access. Google and other search engines make fact finding instantaneous. SharePoint and knowledge portals provide valuable templates and process documents. Content curators like Degreed and Pathgather provide a one-stop shop for eLearning content from multiple vendors like SkillSoft, Udemy, Udacity, and Lynda.

Employees are driving the change as well because they are empowered, networked, and impatient when it comes to learning:

  • 75% of employees report that they will do what they need to do to learn effectively
  • 69% of employees regularly seek out new ways of doing their work from their co-workers
  • 66% of employees expect to learn new information “just-in-time”

As informal learning becomes more prominent, the question that both L&D and business leaders should be asking is simple: "How do we know if informal learning is effective?" The new generation of learners might respond, "Duh! If the information is not effective, we go learn more until we get what we need." A better way to uncover the effectiveness of informal learning is to measure it.

Here’s a three-step measurement process that should provide insight about the effectiveness of most types of informal learning.

1. Determine what content you need to evaluate

This is actually the most difficult step if you intend to measure the impact of informal learning systematically across an organization. If you intend only to measure one aspect of informal learning (say, a mentoring program), then the work is substantially less. When undertaking a systematic approach, the universe of all possible learning options needs to be defined. Rather than give up now, take one simple step: create categories based on the types of learning provided.

For example, group the following types of learning as:

  • Technology-Enabled Content: eLearning modules, videos, podcasts, online simulations or games
  • Documents: SharePoint resources, standard operating procedures and process documents, group webpages, wikis, and blogs
  • Internet Knowledge Portal

Create as many categories as needed to capture the variety of informal learning occurring in your organization.

2. Determine what measurement tool is best suited for each learning type

Common tools include surveys, focus groups, interviews, web analytics, and business systems that already gather critical operational data like widgets produced or products sold. Web analytics, business systems, and surveys tend to be highly scalable, low-effort methods for gathering large amounts of data. Focus groups and interviews take more time and effort but often provide information-rich detail.

3. Determine when to deploy the measurement tool to gather information

For an eLearning module, it seems appropriate to include a web-based survey link on the last page of content. Learners can launch the survey and provide feedback immediately after the module is complete. If the content is curated by a vendor, preventing the insertion of a link on the final page of materials, then the completion of the module, when registered in the LMS, should trigger the distribution of an email with a link to the survey evaluation. Regardless of the type of learning (instructor-led, virtual, self-paced, etc.), the timing and the tool will vary according to the content.
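
The completion-triggered survey described above can be sketched as a small event handler. Everything in this sketch is hypothetical: the event fields, the survey URL, and the email structure are illustrative stand-ins, not a real LMS API.

```python
# Hypothetical survey endpoint used to build the feedback link.
SURVEY_BASE_URL = "https://example.com/survey"

def build_survey_email(event):
    """Return a follow-up evaluation email for an LMS completion event.

    Non-completion events (in progress, abandoned) return None, since only
    a registered completion should trigger the evaluation request.
    """
    if event.get("status") != "completed":
        return None
    link = (f"{SURVEY_BASE_URL}?module={event['module_id']}"
            f"&learner={event['learner_id']}")
    return {
        "to": event["learner_email"],
        "subject": f"Feedback requested: {event['module_name']}",
        "body": (f"Thanks for completing {event['module_name']}. "
                 f"Please share your feedback: {link}"),
    }
```

In practice, the LMS completion record would be delivered by a webhook or a nightly export, and the returned message handed to whatever email service the organization uses.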

Is it easy to implement this measurement approach to evaluate the impact of informal learning? For some organizations, maybe. For others, not at all. However, measurement is a journey and it begins by taking the first step.

For More Information…

For guidance about measuring informal learning, contact John Mattox at CEB, now Gartner. To learn more about how to improve L&D measurement initiatives, download the Increasing Response Rates white paper.


Telling a Story With Data

by Christine Lawther, PhD
Senior Advisor—CEB (now Gartner)

Data is everywhere. In our personal lives, we are continually exposed to metrics. The number of “likes” on social media, usage metrics on every bill, and the caloric breakdown of burgers at the most popular fast food chains are all examples of common metrics that society is exposed to on a regular basis.

Looking at data within a business context, data insight is in high demand. More organizations are focusing on doing more with less, so data often becomes the key element that determines decisions on goals, resources, and performance. This increase in data exposure is an opportunity for learning and development (L&D) professionals to showcase their efforts and to transition L&D from being viewed as a cost center to being seen as an essential contributor to the organization's goals.

One common challenge is that L&D teams are often not staffed with team members who have a rich background in analytics. When instructional designers, facilitators, program managers, and learning leaders hold the responsibility of sharing data, it can be challenging to translate typical L&D metrics into a compelling story that resonates with external stakeholders. That is why it is valuable to tap into some foundational best practices for telling a story with data.

Structure Your Story: The Funnel Approach

If you visualize a funnel, imagine a broad opening where contents are poured in and a stem that becomes increasingly narrow. Apply this visualization as the framework to craft your story: start with broad, generic insights, and then funnel down to specifics. Doing this enables the recipient of the story to understand the natural flow of moving through diverse metrics, but still understand the overarching picture of L&D performance as a comprehensive whole. For example, it may be helpful to start by outlining overall satisfaction or utilization metrics, and then transition into something slightly more specific such as breaking out scores of key courses within your portfolio that are the biggest contributors to those overall metrics. From there, you can move into more detailed metrics by delving into components such as highest/lowest rated items within that course, time to apply training, barriers to training application, and insightful qualitative sentiments. At the very end of the story, one can conclude with specific actions that the team plans to take. Following this approach not only paints a comprehensive picture, but it also creates momentum for next steps.

Speak Their Language

Metrics that L&D often focuses on (e.g., activity, cost per learner) may not easily translate into insights that resonate with external stakeholders. Each department within an organization may have its own custom metrics; however, it is imperative that a function can demonstrate the linkage back to the broader organization. Doing this shows that you are being a good steward of the resources granted to you and reveals how your day-to-day efforts align with the broader organization's goals.

So, how can you demonstrate that leadership should be confident with your decisions? Communicate your impact with metrics that resonate with decision makers. If there are any core metrics that the company tracks, identify ways to directly demonstrate L&D’s linkage to them. If you are unsure, look for organizational metrics that are announced at company-wide meetings or shared on a regular basis. For example, if Net Promoter Score is something that your organization tracks, establish a Net Promoter Score for L&D and include it in your story. If increasing sales is a priority, identify how sales training is contributing to that effort.
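
If you do adopt a Net Promoter Score for L&D, the standard calculation is straightforward: on the 0-10 "how likely are you to recommend" scale, it is the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch:

```python
def net_promoter_score(ratings):
    """Standard NPS on the 0-10 'likely to recommend' scale: the percentage
    of promoters (9-10) minus the percentage of detractors (0-6). Passives
    (7-8) count in the denominator but in neither group."""
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))
```

For example, ratings of [10, 9, 8, 6, 10] yield three promoters, one passive, and one detractor, for an NPS of 40. Applied to the survey item "How likely are you to recommend this course to a colleague?", this gives L&D a score directly comparable to the one your organization already tracks.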

Strike a Balance

It can be tempting to share only successes; however, it is vital to also include opportunities for improvement. Why? Because demonstrating transparency is the key to establishing trust. A strong approach is to share an opportunity for improvement along with a few specific actions the department plans to take to address it. Doing this provides a two-fold benefit. First, it demonstrates that you are aware of areas to work on. Second, it shows that you have proactively mapped out a plan to address them.

If you are finding that your story is entirely positive, consider looking for differences within the population you support. For example, does every region/department/tenure bracket report the same level of impact? Often a team may find that on a holistic level they are doing well; however, when you dig into varied demographics, there may be an area that can drive greater value. By transparently sharing your data to outline both successes and opportunities, the learning organization can become the best at getting better.

CEB Metrics that Matter is committed to helping organizations achieve more precision in strategic talent decisions, moving beyond big data to optimizing workforce learning investments against the most business-critical skills and competencies. To learn more about how we help bridge the gap between L&D and business leaders, download a copy of our white paper, Aligning L&D’s Value with the C-Suite.


TDRp Excellence Awards


Demonstrate and Celebrate Your Contributions to L&D Through TDRp.

The Center for Talent Reporting is pleased to announce the first annual Talent Development Reporting principles (TDRp) Excellence Award. This unique award focuses on the forward thinking, transformational work of L&D teams to show impact and value through management, measurement, evaluation and reporting of L&D programs and the function overall.

There are several components to TDRp. This award recognizes excellence in five areas:

  • Relationship with Business Units
  • Business Planning and Goal Setting
  • L&D Governance
  • L&D Measurement
  • Reporting Methodology

You may apply for a single award category or for multiple categories. For each category to which you apply, you will be asked to provide documentation showing examples of current practices in your organization. You will also be asked to answer several supporting questions. Each application will be considered for every category where information is provided; you do not need to provide information for all categories to be considered for a specific award.

Applications are due by midnight EST on Friday, December 1, 2017.

The Center for Talent Reporting will recognize finalists in each of the five categories and in overall excellence. Finalists will be notified by Monday, January 15, 2018 of their selection and the winners will be announced at the CTR Annual Conference on Wednesday, February 21, 2018.

Have questions? Please email us at

Apply Now

Are You Ready for the Next Recession?

I know this seems like a silly question to ask as the stock market continues to set new records and the unemployment rate is the lowest it has been in more than 10 years. However, speaking as a former economist, I know that this is precisely the time to ask the question, because good times never last forever and the next downturn is likely to come without warning. Economists have a poor track record of accurately predicting recessions, so by the time a consensus believes that a recession is coming, we will probably already be in one. What we do know, though, is that we are well into the current expansion and soon we will be living on borrowed time. The last recession ended in June 2009, so we are now 8-plus years into the current expansion. The longest expansion in modern economic history was the 10-year run from 1991 to 2001. The next longest was the 8-year run from 1982 to 1990. So, in another year or two we will be in record territory for the longest economic expansion in our history.

This means that a recession is due sooner rather than later. Are you ready for it? Have you done all that you can to demonstrate your value by carefully aligning your discretionary programs to the CEO's most important goals and then showing the impact of learning on those key goals? Do you have strong relationships with the goal owners and senior leaders so they will speak up on your behalf? Are you running your basic skills and compliance training as efficiently and effectively as possible, and do you demonstrate that to your senior leaders? Are you setting specific, measurable goals and plans for key measures and then using monthly reports to compare year-to-date progress against plan? Do your senior leaders see these reports and know you are running learning like a business?

If you're not doing these kinds of things now, why should your CEO continue your current level of funding and staffing when the next recession comes? Competition for funds and staff will be fierce, and every department may well see a reduction, but historically L&D takes a disproportionately large hit, which reflects the CEO's lack of confidence in, or understanding of, the value L&D provides. So now is the time to put the business processes in place to demonstrate alignment, value, and rigor, so that your senior leaders will have a better appreciation for the value added by learning and greater confidence in you as a learning leader. You cannot prevent the next recession, but you can position your department to weather the storm.

Now is the time to prepare.

The Future of Learning

At the May ATD International Conference in Atlanta one speaker said the following about learning and development functions: “We don’t own content and aren’t managing learning. That power long ago shifted from the learning department to individual employees.” The speaker was making the point that since learning has shifted to employees there is an opportunity for CLOs to assume broader responsibility for other talent processes such as talent acquisition and organizational development. While I agree that CLOs are well positioned by their experience to expand their scope and probably the best positioned to lead an integrated talent acquisition and development function, the broad assertion about the end of L&D as we know it still troubles me.

The statement may be true for an L&D department that has responsibility only for general learning not aligned to any company goals other than perhaps employee engagement. In this case, the L&D department would offer a catalog of courses, either internally developed or externally purchased, and employees would select what they want to improve or are interested in. Learning aligned with company goals other than employee engagement, along with learning for onboarding, basic skills training, and compliance, would be managed by the respective departments. For example, the sales department would manage all sales-related learning, quality would manage all quality-related learning, and HR would manage all compliance-related learning. In this model, I would say that the L&D department never managed learning to begin with; it simply offered a catalog and tracked what employees took. The department did own the content, but I agree with the speaker that advances in digital learning offered outside the company will soon make this model obsolete. In the future, employees will be able to find all their general learning outside the company, and the L&D department will no longer own that content.

The statement, however, is not true now, nor should it be true in the future, for L&D departments that offer learning aligned to company goals or other important business or HR needs. In many companies the L&D department is responsible for the content and management of learning to increase sales, improve quality, reduce injuries, improve leadership, achieve compliance targets, onboard employees, provide basic skills, and meet other company goals. Why would a company have its L&D department abandon this important role as a strategic business partner? Why would it be better to have employees try to figure out on their own what they need to be successful or in compliance, especially new hires or employees new to a position, when experienced leaders already know what employees will need to be successful in their jobs? Furthermore, some knowledge is proprietary and simply not available outside the company. And let's face it, real impact on company goals requires more than access to content. It requires a well-conceived and executed program to target the particular need, convey the required knowledge or skills, and then reinforce the desired behavior. It actually takes a lot of effort to manage learning for results, none of which occurs in the employee self-directed model.

In conclusion, I agree that in the future we will not need to own general content which is unaligned to company goals and that employees will be able to find general content of interest to them outside the company. However, I do believe we should continue to own critical content and that there will always be an important role for L&D departments to manage learning aligned to company goals and needs.

Do We Need New Measurements and Reporting for the Digital Learning Revolution?

Digital learning is revolutionizing the corporate learning environment by making a vast amount of content available to learners. Some of this content can be accessed through internal learning management systems, but the fastest growth is likely to come from content available outside the organization’s firewalls and systems. This is content directly available to anyone with internet access. In a sense we have had digital learning since the first computer-based training (CBT) became available in the 1980s and even more so with the advent of e-learning (WBT) in the late 1990s, but the current revolution dwarfs these past initiatives in terms of breadth, reach, and sheer volume.

Most of our current measures and reports/dashboards were designed for traditional classroom learning and have been applied to e-learning as well, but some have always asked whether we need special measures for e-learning. The answer has basically been no. Traditional efficiency measures (number of participants, classes, hours, etc.) combined with traditional effectiveness measures for participant reaction, test scores, application, impact, and ROI can easily be applied to e-learning. The old post-event survey was delivered at the end of class; with e-learning, it is generally sent to participants immediately following the class. So, no revolution was required in measurement or in the reporting of those measures for e-learning.

The question we face now, at the start of this new digital revolution, is the same as the one at the inception of e-learning: will new measures and reporting be required? My initial take is that the answer remains the same as well. The standard efficiency, effectiveness, and outcome measures will still serve us well, and the three standard reports recommended by Talent Development Reporting principles (TDRp) will still help us to better manage our programs and departments as well as demonstrate our alignment to and impact on key company goals. However, the measurement strategy itself may have to change dramatically, because an increasing amount of content will be accessed outside the company and outside any learning management system (LMS). Think of employees taking free courses at the Khan Academy and universities like MIT, learning from YouTube or their Google searches, or taking advantage of aggregators like Coursera, Udemy, Udacity, and others. Since this learning takes place outside the LMS, there is no way to automatically capture efficiency measures like number of participants, courses, or hours, and there is no way to automatically send a survey to gauge satisfaction, amount learned, application, or other effectiveness measures, let alone any outcome measure.

Instead, much more thought is going to have to be given to what measures are really important for this new digital learning. Do we really need to know how many courses employees take outside the LMS? Do we really need to know how satisfied they were with each course or how much they learned? I don't think so. So, let's go back to basics and determine what we want to know, why we want to know it, and what we would do with the data if we had it. Learning and development is increasingly important to millennials, so an employer certainly has an interest in finding out whether employees are satisfied with their learning, but not necessarily with each micro instance of learning (like a Khan Academy course). Why not send a quarterly or semi-annual learning survey to employees (or expand your current employee engagement survey) to ask about their satisfaction with learning in general? You could ask how they have learned in the past three months (company offerings, MOOCs, free universities, Coursera-type providers, Khan Academy, etc.) and perhaps how many times per week they access different sources. A summary question would be something like, "How satisfied are you with all the opportunities you have to learn?" This should provide the information needed to determine whether employees are sufficiently engaged with learning.

In conclusion, the digital revolution is not likely to require new measures or reports, but is likely to require rethinking your measurement strategy with regard to data acquisition.

3 Principles of Effective Learning Metrics & Measurement

Contributed by Caveo Learning

As more and more talent development leaders take a serious look at implementing meaningful metrics and measurement across their learning and development organizations, the business relationships and conversations between L&D professionals and stakeholders are changing for the better.

There are times when talent development leaders need to root themselves in foundational principles of talent development metrics. It’s easy to get caught up in new thinking, models, and frameworks, and lose focus on the fundamentals of how to run a learning organization.

No matter what model, framework, system, tool, methodology, or new approach we want to adapt, adopt, and deploy, there are at least three fundamentals that we should never lose sight of—principles that should be applied to all learning metrics.

  1. Differentiate between metrics you actively manage and those you monitor.

The principle is simple, yet rarely considered. Just as in medicine, certain vital signs are always monitored; the moment something changes in the wrong direction, an alarm sounds, allowing the “metric” to be actively managed. Similarly, when selecting your metrics, determine which you intend to monitor and which you intend to actively manage. Further, know the target thresholds for the monitored metrics that will trigger the “alarm” for active management. For example, you may want to monitor L&D staff utilization and only actively manage it should the metric fall out of the acceptable range.
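As a rough sketch of this monitor-versus-manage distinction, the threshold “alarm” could look like the following. The metric name and acceptable range here are hypothetical, not prescribed values:

```python
def check_monitored_metric(name, value, low, high):
    """Return an alert when a monitored metric leaves its acceptable range."""
    if value < low or value > high:
        # Out of range: the "alarm" sounds and the metric moves to active management
        return f"ALERT: {name} = {value} is outside [{low}, {high}]; manage actively"
    # In range: keep monitoring, no action needed
    return f"OK: {name} = {value}; continue monitoring"

# Example: monitor L&D staff utilization with a hypothetical 70-90% acceptable range
print(check_monitored_metric("staff utilization %", 94, 70, 90))
```

The point of the sketch is only that the threshold is decided up front, so “monitoring” is cheap until the data itself demands active management.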

  2. Align to industry formulas, where practical.

Another issue that often comes up is defining the specific formula for a metric. Frequently, the formula is not considered deeply enough to deliver its full value. This can result in metrics with little credibility or validity, which should not be informing any decisions. It’s true that each organization has its own characteristics, its own language, and specifics that need to be considered. However, using a standard formula defined by an existing industry benchmark can be very helpful when planning your metric strategy, goals, and even budget. Industry benchmarks are only valuable for planning and comparison if you are comparing apples with apples; to do that, the formulas need to match.

CTR does a fantastic job in its L&D metrics library of not only providing the recommended formula, but also noting which industry organization defined it, or which other industry benchmark it resembles. A good balance between company-specific and industry-aligned formulas will allow for at least some comparison, planning, and target setting against industry metrics. Whether it is CTR’s Talent Development Reporting Principles (TDRp), Bersin by Deloitte, Training magazine, ATD, SHRM, or any others, consider aligning at least some meaningful metrics to an industry definition and formula. With our above example of staff utilization, one TDRp formula we could use is “Learning staff, % of total hours used.”
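Read literally, a “% of total hours used” metric is a simple ratio. The calculation below is only an illustration of that reading (the staffing and hours figures are made up, and the exact TDRp definition should be taken from CTR’s metrics library):

```python
def utilization_pct(hours_charged, hours_available):
    """Percentage of available learning-staff hours actually used."""
    return 100.0 * hours_charged / hours_available

# Hypothetical month: a team of 10 staff with 160 available hours each,
# charging 1,360 hours to learning work
print(f"{utilization_pct(1360, 10 * 160):.1f}%")  # 85.0%
```

Whatever the precise formula, the key is that everyone benchmarking against it computes the numerator and denominator the same way.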

  3. Measure to inform decisions and actions.

Whether you are monitoring or managing a particular metric, there must be an underlying business reason to do so. A metric that does not inform a decision or an action is of no value, so always consider why you are measuring something and what that metric will influence. Learning metrics are there to help us improve what we do as L&D professionals. Whether it is improving the efficiency of our learning organization, the effectiveness of our learning solutions, our alignment to stakeholders, or the contribution our interventions make to the strategic business goals of the organization, metrics play a critical role in influencing the value we bring to our organizations and, perhaps more critically, the credibility of L&D in the eyes of external stakeholders and the C-suite.

It’s better to have a few good metrics that inform meaningful decision making and allow for agility in improving the value L&D brings to your organization, rather than hundreds of metrics that offer limited value. Sticking with our example, we could monitor the utilization of learning staff and start actively addressing this metric should the percentage increase above, say, 90%, by engaging a learning solutions provider to assist with some of the workload until it returns to the acceptable range.

Learning leaders are starting to take more notice of the deep value that metrics can bring toward the constant improvement of everything we do in L&D. No matter what the specific model, framework, and approach your organization chooses for learning metrics, there remain some fundamental principles that will help ensure that we ultimately have a metrics strategy that guides us, helps us improve, changes conversations with our stakeholders, and increases our credibility as business leaders.

Learning metrics are our friend, our source of feedback and intelligence, ensuring we are constantly focused on maximizing the value we bring to our organizations.

Caveo Learning is a learning consulting firm, providing learning strategies and solutions to Fortune 1000 and other leading organizations. Caveo’s mission is to transform the learning industry into one that consistently delivers targeted and recurring business value. Since 2004, Caveo has delivered ROI-focused strategic learning and performance solutions to organizations in a wide range of industries, including technology, healthcare, energy, financial services, telecommunications, manufacturing, foodservice, pharmaceuticals, and hospitality. Caveo was named one of the top content creation companies of 2017 by Training Industry Inc. For more information, visit

Take a Business-Minded Approach to Sourcing Learning Partners

By: Gary Schafer
President, Caveo Learning

One of the most important tasks talent development leaders face is selecting outsourcing partners and product vendors. It also happens to be one of the most daunting.

The learning and development organization’s relationship with providers can be a major factor in the success of the business. Learning leaders must weigh many variables in the purchasing process, from the factual (pricing, experience) to the intangible (flexibility, dedication).

Navigating the procurement process can be tremendously difficult. How can a provider’s ability to flex with the challenging demands of the business be analyzed through a formal procurement process? How does one tell if an external learning partner is going to react to changing environments and truly be aligned with the business? How is the commitment of the supplier to the mission, vision, and values of the business measured? What about issues of scalability, global capability, and communication?

How to Develop a Learning Sourcing Strategy

Start by establishing a factual market intelligence base—understand the array of variables around learning services, from expertise to capability, and the rates associated with those services. Create your supplier portfolio, determine a list of qualification criteria, and then winnow your list of potential suppliers.

Identify the types of services your organization needs—learning strategy, audience analysis, curriculum design, instructor-led training, eLearning, change management, etc. Then, determine the demand across the organization, broken out by role.

Next, perform learning-spend and target-cost analyses. You’ll want to analyze supplier rates using both internal and external data sources.

  • Conduct some internal benchmarking with the same supplier—are rates equal across multiple projects? Does the firm charge premiums based on experience? Is pricing consistent across roles?
  • Do the same internal benchmarking across multiple suppliers. Find out if rates are similar for comparable roles and skill levels at different organizations.
  • Develop some external benchmarks using publicly available market data, comparing rates from market analyses and publicly available salary data. Factor in loaded cost (about 1.3 times base salary) to cover benefits, PTO, company-paid taxes, etc., and also allow some room for supplier margin.
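The last bullet’s rule of thumb can be turned into a quick back-of-the-envelope rate benchmark. The 1.3 loaded-cost multiplier comes from the text; the salary, margin, and billable-hours figures below are hypothetical placeholders you would replace with your own market data:

```python
def benchmark_hourly_rate(base_salary, load_factor=1.3, margin=0.20,
                          billable_hours_per_year=1800):
    """Back into a plausible supplier hourly rate from a base salary."""
    loaded_cost = base_salary * load_factor    # benefits, PTO, employer taxes
    with_margin = loaded_cost * (1 + margin)   # room for supplier margin
    return with_margin / billable_hours_per_year

# e.g. a role with a hypothetical $90,000 base salary:
print(f"~${benchmark_hourly_rate(90_000):.0f}/hour")  # ~$78/hour
```

A quoted rate far above this kind of estimate invites a conversation about what premium the supplier’s quality profile actually justifies.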

Investigate the Intangibles

Before moving forward with sourcing service partners and learning products providers, conduct a supplier quality assessment. Create quality profiles for each prospective partner by evaluating the following factors:

  • Strategic planning—Will they assist with communicating and messaging to senior stakeholders throughout the business? Do they provide intelligence around industry trends and best practices? What do they offer in the way of post-project analysis, such as lessons learned and next-steps recommendations?
  • Experience—Does their team have expertise in relevant areas? What is the screening process for potential employees and contractors? How cohesive is the team, and how closely do they adhere to defined processes?
  • Responsiveness—Do they show a commitment to your business needs through effective communication? Is there a proven track record of adherence to deadlines? Do they offer strategic learning support?
  • Quality of work—What processes are in place to ensure solid instructional design? What methodology is used for quality reviews (test plans, style guides, etc.)? Does the supplier set clear review expectations for training deliverables and correct issues in a timely manner?
  • Flexibility—Do the suppliers have the ability and willingness to react to new or changing business needs? What is the capacity to scale a team with appropriate roles and volume?
  • Support—Do they have ready access to tools and templates necessary to speed development? Are there documented methodologies for executing common tasks?

Pricing as a Determining Factor

Suppliers’ rates often (though certainly not always) correlate with their quality profiles. If budget were no object, the learning sourcing strategy would simply mean identifying the most qualified provider; of course, budget is often the overriding factor, so we must balance the need for quality with fiscal realities. Thus, try to optimize supplier usage based on rates, hiring high-quality suppliers for critical projects and project management, and choosing more cost-effective options, such as staff augmentation, for lower-importance projects and commodity roles.

Likewise, negotiate for spending reductions based on the type of support required. Each of the three main services models comes with its own potential cost-savings.

  • A project-based agreement can reduce per-unit costs and may be further cost-efficient due to greater opportunity to leverage offshore assets.
  • Contract labor ensures compliance across the organization, and the consolidated labor structure means negotiated volume rates are an option.
  • A managed services model will likely have optimized service levels and adjustable staffing levels, along with efficiencies gleaned from custom redesigned processes.

Be cautious with regard to electronic procurement. Several tools are now available and in use by procurement professionals to streamline the proposal process, but they are not always ideal for learning organizations. These tools are great for collecting uniform responses efficiently, but for learning services, not everything is uniform. E-procurement tools often create barriers that prevent providers from telling their full story, essentially reducing the proposal process to 100-word bites of information that make it difficult to recognize a provider’s true value. A good procurement professional can get around some of these limitations by still offering the top candidates a face-to-face meeting.

Finally, take care to negotiate a favorable agreement with the external learning partner. Have a negotiating strategy before initiating contract talks, and be ready and willing to walk away if a better partnership can be found elsewhere.

Creating and implementing a sourcing strategy for learning services will greatly reduce the stress of the procurement process, helping identify the ideal partners for your learning organization’s initiatives while optimizing budget. With a well-prepared strategy, sourcing service partners becomes a cornerstone of your organization’s success rather than an intimidating task.

Gary Schafer is president of Caveo Learning, recently named a top custom content company by Training Industry Inc. Gary was formerly a management consultant at McKinsey & Co., and he holds an MBA from Northwestern University’s Kellogg School of Management.