CTR Exchange and Blog

New Year’s Resolutions to Improve Measurement, Reporting and Management of L&D

The new year is a great time to make changes. You may have been thinking about some of these changes, so why not resolve to make them happen? Here are ten top suggestions:

  1. Start using the TDRp terminology. It costs nothing for your team to use the three categories of measures and the four reasons to measure. Make this your common language.
  2. Ensure you have a balanced set of measures for courses and programs. You should have Level 1, Level 2 (if applicable), and Level 3 as effectiveness measures in addition to the standard efficiency measures like the number of participants, cost, completion rate, and date. For key programs, you should also have Level 5 ROI. And, if the course or program supports a high-level goal, you should have a Level 4 outcome measure.
  3. Make sure you have a balanced set of measures at the department level for formal learning. In addition to the common efficiency measures like the number of unique and total participants, the number of courses and hours, and the total cost, you should have the common effectiveness measures (Levels 1, 2, and 3) aggregated across all of your courses. Ideally, you will also have a summary of the outcomes and ROIs for key programs.
  4. Begin to collect data on informal learning if you are not already doing so. Efficiency measures might include the number of communities of practice, the number of performance support tools, the amount of content available online, the utilization rate for each, and the number of unique and total participants for each. The primary effectiveness measure for each would be user satisfaction, measured periodically, perhaps semiannually.
  5. Start using the TDRp framework to select the right report for each user depending on the reason to measure. Dashboards are currently used for everything, and they should not be. Do not use a dashboard to brief the results of a program or to manage a program. Scorecards and dashboards are intended ONLY for informing and monitoring.
  6. Begin to integrate different modalities into a learning program. For example, you might use online learning or online content to provide a common base of knowledge before bringing the participants together in a virtual or in-person course, which would be followed by providing performance support tools and enrolling the participants in a community of practice.
  7. Create a measures library to be your single, trusted source for measurement definitions, formulae, and sources.
  8. Pick one important program you would like to manage for the year. Meet with the goal owner at the start of the year to carefully plan the program, including agreement on an outcome measure as well as specific, measurable targets for the outcome measure and all the important efficiency and effectiveness measures. Agree on roles and responsibilities. Once the program is underway, use the program management report with plan, year-to-date (YTD) and forecast data to manage the program on a monthly basis.
  9. Meet with your CLO to reach agreement on the efficiency and effectiveness measures targeted for improvement. Get agreement on a reasonable plan or target for each and ensure that sufficient budget and staff will be made available to achieve the improvement. Create an operations report with plan, YTD, and forecast data to use monthly to manage the initiatives to successful conclusion.
  10. Last, resolve to increase your knowledge of the field. There are hundreds of books, and of course I recommend the two Measurement Demystified books and Learning Analytics by John Mattox II, Peggy Parskey, and Cristina Hall. To help you run learning like a business, I recommend my book The Business of Learning as well as The Business of Training by Edward Trolley. There are hundreds of webinars each year, all free. And there are numerous workshops on measurement in general, like our Measurement Demystified workshop and our workshop on the new ISO standard for L&D metrics. In addition, there are workshops on ROI and the Kirkpatrick approach as well as a host of other great topics.

I hope you will resolve to follow through with some of these, and we are here to help with our books, webinars, and workshops.

10 Years of Progress: The Evolution of TDRp as an Industry Standard

The Center for Talent Reporting was created ten years ago to be the home for Talent Development Reporting principles (TDRp) and serve as an advocate for measurement and reporting standards for L&D in particular and HR in general. Central to this effort has been the TDRp framework which has evolved and improved over the last ten years based on user feedback.

Drawing Inspiration from GAAP

When we were developing the TDRp framework, we knew that it would have to be simple, easy to use, and easy to remember. At a high level, we drew inspiration from the Generally Accepted Accounting Principles (GAAP) framework used by accountants in the United States. The GAAP framework includes four basic types of measures (income, expense, assets, and liabilities) and reports these in three standard statements (income statement, balance sheet, and cash flow statement). In the learning field, we appreciated how successful the Katzell/Kirkpatrick/Phillips five-level model for evaluation has been. In both cases, there are a limited number of elements in the framework, which makes it both easy to use and easy to remember.

We began with three types or categories of measures (efficiency, effectiveness, and outcome), similar to the four types of accounting measures. We also began with three types of statements (program, operations, and summary) to show actual results, similar to the three statements used in accounting. In addition to the statements, we added management reports (program, operations, and summary) to be used in the active management of programs and the department to produce planned results. These reports include columns for plan and forecast, which is similar to internal reports organizations use to manage. So, three types of measures, three statements, and three management reports.

How TDRp Has Evolved

The first significant evolution occurred as we received feedback that the statements and reports overlapped quite a bit. Each showed the actual values for the chosen measures. If the user was going to manage for results and wanted to use the management reports, which included plan and forecast data, did they really need the statements which contained only actual results? We decided they did not. So, our new recommendation was to use statements if you only want to share historical data (for example, by month, quarter, year-to-date, or year) and use the reports if you want to actively manage the measures to a plan, in which case you need the plan and forecast columns.

The second significant evolution was the addition of more report types. Most begin their reporting journey with scorecards and dashboards, not statements or management reports. In an effort to meet practitioners where they were and to provide more comprehensive reporting guidance, we added scorecards and dashboards to the framework. We also added program evaluation and custom analysis reports to help users brief the results of their program evaluation (program evaluation report) or statistical analysis to explore relationships among measures (custom analysis report). So, we now had five types of reports with all three of the original management reports being lumped into a single category of management reports.

The last significant evolution was the addition of four reasons to measure. It became increasingly obvious that practitioners needed guidance on which type of report to use and that guidance depended on the user of the data and their reasons for measuring. While we identified 15-20 very specific reasons to measure, our principle of simplicity dictated that we keep the number of reasons to measure under five. We settled on four broad reasons (inform, monitor, evaluate, and manage) and tied these directly to the five types of reports. For example, if measurement is to inform, then a scorecard or dashboard is recommended. If the reason to measure is to show the value of a program, then a program evaluation report (likely a PowerPoint) is recommended. And if the reason to measure is to manage a program to successful conclusion, then a monthly program management report is recommended.

Summing it Up

In summary, the TDRp framework now includes four broad reasons to measure, three types of measures, and five types of reports, with the selection of reports tied to the reasons to measure. Users tell us it is easy to use and easy to remember, and that it provides great direction for their measurement and reporting. We hope you agree, but we also hope you will let us know how you believe it can be improved so that it will continue to evolve to meet your needs.

What Competencies Should Your Measurement Staff Possess?

Most organizations aspire to better assess, understand, analyze, and report their L&D measures but are often uncertain about the capabilities needed to move forward. Some even wonder if they need to hire a Ph.D. data scientist. The short answer is no. It is better to start by building capability up from the bottom by ensuring everyone has a basic level of competency and ensuring that those with measurement responsibility have at least an intermediate level of competency.

Your first goal should be for all your L&D staff to understand L&D measurement and performance consulting. This is where a common language and framework like Talent Development Reporting principles (TDRp) can help. It provides a common language for measures, reports, and reasons to measure, facilitating communication among all L&D professionals. Everyone should know the essential efficiency and effectiveness measures, including how they are defined and used. Everyone should also have a basic understanding of performance consulting and needs analysis, the importance of reaching an upfront agreement with goal owners and stakeholders on success, and the role learning can play in that success.

Your measurement staff (if you have a dedicated team for this) or others who do measurement as part of their job need at least an intermediate level of expertise. This means they should know and be comfortable using most of the L&D measures in our field, at least those for formal learning if your organization doesn't use informal learning. They should be able to select measures for a program or for department-wide use. They should know what type of report is appropriate for each user, depending on their reason to measure. They should be able to write basic survey questions and understand the different options for measuring Levels 2 and 3. They should be aware of the differences between the Kirkpatrick and Phillips approaches to Level 4 and be able to employ the participant estimation methodology to isolate the impact of learning. They should be able to calculate ROI.

Ideally, you should also have at least one staff member with an advanced level of mastery. This person should be familiar with all the L&D measures, including those for informal learning. This person should also know statistics and survey design and the use of control groups, trendline estimation, and regression to isolate the impact of learning. This person should be able to identify and adjust for seasonality in the data and make year-end forecasts. This person should also be able to formulate hypotheses and test them to confirm or reject their validity.

What does it take to achieve these levels of competency? The good news is that you can meet the basic level with a short introductory workshop or several webinars. A framework like TDRp is easy to understand and remember, as are the 10-15 basic L&D measures. The intermediate level requires more education and experience. You can gain knowledge by attending a workshop on measurement and reporting (for example, the Measurement Demystified workshop offered by CTR) or a Kirkpatrick or Phillips workshop. You can supplement your education by reading a few of the many books written on measurement and analytics, as well as attending more advanced webinars. Applying the concepts is also essential for an intermediate level of competency.

That leaves the advanced level of competency. This requires more workshops, more reading, and in some cases formal courses, for example, to acquire the requisite knowledge of statistics or survey design. This level also requires considerable experience in the field applying the concepts. By this stage, the person may be considered a data scientist, but that does not imply a graduate degree and certainly not a Ph.D. Most can achieve this level of competency through continual learning and application. Moreover, a newly minted Ph.D. data scientist may not have the experience to operate at an advanced level. It takes at least five years to reach this level.

In conclusion, your measurement staff should have at least an intermediate competency level, which is not hard to obtain. Moreover, all L&D staff should have at least a basic understanding of measurement and performance consulting, which is easy to provide. Ideally, at least one person in a large organization possesses an advanced measurement competency, which requires much more work.

The Importance of a Balanced Set of Measures

I recently spoke with someone who shared that their organization had stopped collecting efficiency measures (which are quantity metrics like the number of participants). He said they had matured to focusing only on effectiveness and outcome measures like the application rate and impact—a big mistake. Every organization should collect a balanced set of measures for holistic reporting and analysis but also to avoid unintended consequences.

First, a quick review of the three types of measures. As noted above, efficiency measures are quantity metrics, also referred to as activity measures or Level 0. Examples are the number of participants, courses, and hours, as well as costs, completion rates, and various utilization rates. Effectiveness measures are about quality. They answer questions about how good the program is. For these, we have the Kirkpatrick/Phillips five-level framework, where Level 1 is participant reaction, Level 2 is learning, Level 3 is application on the job, and Level 5 is return on investment (ROI). Level 4, results or impact, constitutes the ultimate outcome of the learning and is in a category by itself called outcomes.

A balanced set of measures is required for holistic reporting and analysis. Without a balanced set, it is impossible to see the whole picture. For example, an organization that no longer collects efficiency measures will not know how many participants have taken a course, the percentage who completed it, the cost of the course, or whether it was completed on time. At an aggregate level across all programs and courses, this organization will not know which courses are being used, utilization rates for instructors and classrooms, the percentage of employees reached by learning, or total costs for learning. Frankly, it is unimaginable that a CLO would not want to know these basic and foundational measures.

Likewise, if an organization decided to collect only efficiency measures, it would have no insight into the quality of its programs. Without level 1, how do you know if the content and instructors are good? Without a level 1 question on relevance, how do you know you have the right target audience? Level 1 is your early warning if something is wrong and needs to be addressed. Level 2 measures the learning. Without it, how do you know if participants learned the required knowledge or gained the required skills? Without it, how do you demonstrate compliance with laws, regulations, and best practices?

Level 3 is application, which can be measured as intent to apply at the end of the course or actual application two to three months later. The Level 3 intent-to-apply question can easily be added to the Level 1 questions in your post-event survey. Without it, how do you know if the content is likely to be applied or has been applied? If content is not applied back on the job, it is considered scrap. The scrap rate is simply 1 minus the application rate, and unfortunately, the scrap rate for our profession is embarrassingly high at 40%-80%.
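The arithmetic is simple enough to sketch in a few lines of code. This is only an illustration; the 55% application rate below is a made-up example, not a benchmark:

```python
def scrap_rate(application_rate: float) -> float:
    """Scrap rate is the complement of the application rate:
    the fraction of learned content never applied on the job."""
    if not 0.0 <= application_rate <= 1.0:
        raise ValueError("application_rate must be between 0 and 1")
    return 1.0 - application_rate

# Example: 55% of participants report applying what they learned.
print(f"Scrap rate: {scrap_rate(0.55):.0%}")  # → Scrap rate: 45%
```

Note that a 45% scrap rate sits squarely inside the 40%-80% range cited above, which is why the follow-up application question matters.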

Level 4 is results or impact and answers the question, “Did this program make any difference?” Ideally, we would like the isolated impact of learning, and Phillips provides five methods to get it. With this, we can make a statement like, “Learning contributed to a 3% increase in sales.” If the isolated impact is not possible, the Kirkpatricks provide a methodology for making a compelling case that learning contributed to the business result. Either way, Level 4 is the only way to show that learning made a difference.

Last, Level 5 is ROI. This answers the question of whether the program is worth what it cost. It is possible the program was attended by the right people who liked it, learned it, and applied it and that the program had an impact, but it cost more than it was worth. If this program is to be repeated in the future, a way must be found to lower the cost or increase the impact.

Just as with the efficiency measures, it is hard to imagine running a learning department without effectiveness measures. You simply would not know if what you are doing has any value. And lower-level effectiveness measures are required to analyze higher-level measures. For example, if Level 4 impact is small or nonexistent, clues can be found by looking at Levels 1-3. If participants told you that it was not relevant, you probably have the wrong audience, or the content was not designed appropriately. If they cannot pass the knowledge tests for Level 2, perhaps the content is unclear. If they cannot apply it, there may be issues with their supervisor not supporting them or not providing the necessary time and resources.

So, you need a balanced set of measures to manage a single program or an entire department. Programs should always have several efficiency and effectiveness measures. For most programs, these include the number of participants, completion rate, completion date, cost, and Levels 1-3. These are the minimum. If the program supports a high-level organizational goal like increasing revenue or reducing injuries, then an outcome measure should be added as well.

Last, a balanced set is required to avoid unintended consequences. If a CLO measured staff only on efficiency measures like the number of courses produced or meeting deadlines, staff might try to comply by rushing programs out the door. In this case, effectiveness and outcomes would suffer. If, instead, the CLO were to measure staff only on effectiveness and outcomes, staff might try to comply by spending a long time designing each course to ensure that it is of the highest quality. Consequently, very few courses would be produced, deadlines would be missed, and costs would skyrocket.

In conclusion, a balanced set of measures is key to success in designing, delivering, measuring, and managing learning at both the program and department levels.

Learn more about the three types of measures and Talent Development Reporting principles (TDRp) at our November 2 virtual conference.

How Difficult is Good L&D Measurement?

In the past, I have always said that good measurement of L&D is not that difficult. However, the more we work with individuals and organizations, and the more workshops we teach, the more I find myself reconsidering, for three reasons.

1. Metrics are Not as Simple as They Appear

First, consider the metrics themselves. Some metrics are easy to understand and easy to measure. For example, the number of courses offered or used is easy to determine if you have a Learning Management System (LMS) and not too difficult even if you use an Excel worksheet. However, many metrics are not as simple as they seem. One example is the number of participants, where the user needs to know the difference between unique participants (no duplications allowed) and total participants (the same participant may be counted more than once). Another example is program cost, which sounds straightforward but requires the user to understand how to calculate and use the fully burdened labor and related rate to cost out the hours staff spend on learning development, delivery, and management.
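To make the fully burdened cost calculation concrete, here is a minimal sketch. The 1.4 burden multiplier, roles, hours, and rates are all invented for illustration; your finance department will have the actual multiplier for your organization:

```python
def fully_burdened_rate(base_hourly_rate: float, burden_multiplier: float = 1.4) -> float:
    """Hourly rate including benefits, taxes, and overhead.
    The 1.4 default is illustrative only; use your organization's figure."""
    return base_hourly_rate * burden_multiplier

def program_labor_cost(hours_by_person: dict, base_rates: dict) -> float:
    """Cost out the hours staff spend on development, delivery, and
    management at each person's fully burdened rate."""
    return sum(hours * fully_burdened_rate(base_rates[name])
               for name, hours in hours_by_person.items())

# Hypothetical program: 80 design hours and 24 facilitation hours.
hours = {"designer": 80, "facilitator": 24}
rates = {"designer": 50.0, "facilitator": 60.0}  # base hourly rates
print(f"Labor cost: ${program_labor_cost(hours, rates):,.2f}")  # → Labor cost: $7,616.00
```

The point is that a "simple" cost metric already requires a defined rate, a defined multiplier, and an accounting of everyone's hours.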

Two more commonly used metrics fall into the category of easy-to-understand but complicated to measure. Level 2 learning, for example, sounds simple. Just calculate the average test scores, and you are done. In practice, however, many participants must keep taking the test until they receive a passing score. In this case, the final score doesn’t tell you much about how well the course or the test is designed. If you want to know whether a problem needs to be addressed, you need to report the score on the first attempt (or first-time pass rate) or the number of attempts required to pass.
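The first-attempt measure described above can be sketched as follows; the scores and the 80-point passing threshold are hypothetical:

```python
def first_time_pass_rate(first_attempt_scores: list, passing_score: float = 80.0) -> float:
    """Share of participants who passed on their first attempt.
    Unlike the final score (where everyone eventually passes), this
    reveals how well the course and test are designed."""
    if not first_attempt_scores:
        raise ValueError("no scores provided")
    passed = sum(score >= passing_score for score in first_attempt_scores)
    return passed / len(first_attempt_scores)

# First-attempt scores for eight participants; 5 of 8 pass on the first try.
scores = [92, 76, 85, 65, 88, 79, 95, 81]
print(f"First-time pass rate: {first_time_pass_rate(scores):.1%}")
```

A low first-time pass rate alongside a high final average is exactly the signal the paragraph above describes: everyone passed eventually, but something in the course or test needs attention.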

Likewise, the Level 3 application rate sounds easy. Just ask participants if they applied what they learned. In many cases, however, the amount of content dictates that it would be better to ask the percentage of content they applied using a decile scale from 0%-100%. Then there is the issue of when to ask. Ideally, you will have a question about intent to apply as part of the post-event survey and a question about the actual application in a follow-up survey two to three months later.

Rounding out the challenge of being knowledgeable about measures is the category of metrics that are hard to understand, let alone measure. I think Level 4 impact falls into this category. To make life interesting, we have two very different versions of Level 4 in learning. Kirkpatrick defines Level 4 as results, meaning the business results, and he advocates creating a compelling chain of evidence to show that learning contributed to the results. Phillips defines Level 4 as the isolated impact of learning, which many in the profession don't believe can be measured. Jack and Patti Phillips provide five methods to isolate the effect of learning from all other factors, but some of these require statistics and good experimental design, which many don't have.

2. The Number of Measurement Metrics Can Be Overwhelming

My second reason to reconsider the ease of L&D measurement is the sheer number of metrics. We have about 200 metrics for L&D alone and more than 700 for HR. This is a lot for anyone to master.

3. There Are Not Many Good Coaches/Teachers on the Topic of Learning Measurement

My third reason is the measurement staff and their leaders. Most are not exposed to L&D or HR measurement at university, so they learn it on the job. For them to learn on the job, they need a good teacher. True, many take workshops and read some of the 100+ books on the topic, but this is usually not enough to master the concepts and practice of sound measurement. They need a good teacher or coach in their workplace to show them how to apply what they have learned and answer all the real-world questions that are going to come up.

Sadly, in many organizations, there may not be a good coach or teacher to help the person new to L&D measurement. Without this coaching, the staff is unlikely to truly master measurement practice. Consequently, they will not be good coaches for those coming after them. So, as a profession, we do not seem to have reached a sustainable equilibrium where the experienced can teach and mentor the less experienced. In other words, those who should know either don't know or don't pass on what they know. This may explain why we continue to talk with so many who have only the most basic understanding of measurement and reporting, even though thousands have taken workshops and read books.

What do you think? Is good measurement difficult? What can we do about this?

We will continue this discussion at our Virtual Conference on November 2nd. Join us to learn more and share your thoughts.

How to Measure an Onboarding Program

In a recent article, I explored the most important efficiency and effectiveness measures for L&D programs and discussed the outcome measures and when they should be used. In this article, I will apply those recommendations to a particular program: onboarding. My hope is that you will be able to apply these recommendations to your own program.

Efficiency Measures

Let’s begin with efficiency measures. I recommend four measures at a minimum:

  1. Number of participants
  2. Completion rate
  3. Completion date
  4. Cost

For onboarding, I would recommend “completions” as a measure of participants, at a minimum. If you have a significant dropout rate, then I would also measure the number of participants who started and report the completion rate.

The completion date isn’t as important for an onboarding program because most have fixed start and finish dates, but I would record the duration in weeks and hours. In other words, how many weeks and hours did the onboarding require (hours are important if they met just part of the time during the week). These last two are important because there may be pressure to shorten the program in the future, so you need to know the duration history. Lastly, I recommend capturing the cost. Calculate staff time at their fully burdened labor and related rates.

Effectiveness Measures

Next are the effectiveness measures, which indicate the quality of the program. Here I recommend Level 1 participant reaction as well as Level 1 sponsor reaction (from a shorter survey asking about the reliability of the L&D department, its ability to deliver on time and on budget, and the sponsor's willingness to recommend it to others). One or more Level 2 knowledge checks would be appropriate for most onboarding programs. Some may test at least weekly.

While I recommended Level 3 application for programs in general, it may be difficult if the onboarding is done prior to the employee's first day. It's good to ask if the employee will be able to apply what they learned on the job, but if they haven't started their job, their answer may not be meaningful. I do recommend asking about application if they have already started their job or if you are able to do a follow-up survey 60 to 90 days after onboarding is complete. Level 5 ROI often is not done for onboarding since onboarding is compulsory; however, it would be very useful if a new onboarding program is expected to produce significantly better results than previous ones.

Outcome Measures

While every onboarding program will have individual outcomes (i.e., increased levels of competency), many do not have specified organizational outcomes. Like compliance training, they simply need to be done as efficiently and effectively as possible. So, you may not have an outcome measure. That said, some do design new or improved onboarding programs with the specific goal of improving an organizational outcome like employee engagement or retention. If your needs analysis indicates that a new or revised onboarding program should lead to higher engagement or retention, these become the basis for your learning outcome measures.

The actual impact measure would be called “the impact of onboarding (learning) on engagement (or retention).” The isolated impact can be measured using one of the five Phillips isolation methodologies. In this case, you would be able to calculate an ROI. Or, you might adopt the Kirkpatrick approach and create a compelling chain of evidence that the new onboarding program did indeed play an important role in the improved engagement or retention scores (no ROI is available in this case). Either way, you are addressing the impact of the program on important organizational goals.

Just remember that you need to be focused on the impact of your onboarding program—the delta (or change) in the level of engagement or retention, not the level itself. In other words, your onboarding for the coming year will directly contribute to the improvement in engagement or retention for the year (like a 3-point increase from last year) rather than the level itself (like a 70% favorable score for engagement).

In Sum

In conclusion, the framework from last month can easily be applied to a program like onboarding. How do these recommendations line up with your current measures for onboarding? Are there any new ones that you will consider going forward? I hope you will use the framework to identify two to three efficiency measures and two to three effectiveness measures for your own onboarding. And perhaps even an outcome measure.

The Most Important L&D Measures to Capture

We have more than 200 measures in L&D, which begs the question, “What measures do I really need?” You certainly don't need 200, and there is no “extra credit” for having more measures. The goal of your measurement strategy is to have the right measures for the coming year or two. After that, your strategy will evolve, and you will likely expand or at least modify the list of measures you collect. So, what do you need for the “here and now”?

Since organizations have different needs, the recommended list of measures will be unique for each organization. That said, we can identify a list of starting measures that will apply to most organizations. First, let’s explore the most commonly employed efficiency or quantity measures at the course level. The list includes:

  • Number of participants,
  • Completion rate (percentage of the target audience that completed the course),
  • Completion date (used to determine if the course was delivered on time), and
  • Cost

More practitioners are also beginning to measure activity within online courses like time spent on each task.

At the department level, the learning leader will typically want to see:

  • Aggregate course efficiency measures like total unique participants (no duplications),
  • Total participants (allows for the same person to take more than one course),
  • Average completion rates,
  • Average percentage for on-time completion of development or delivery, and
  • Total cost

At the department level there are additional basic efficiency measures like:

  • Reach (percentage of employees touched by learning),
  • Number of courses offered and utilized,
  • Number of hours utilized, average hours per employee, and
  • Number of courses (and/or hours) by type of learning (instructor-led, virtual, e-learning, blended, clusters)

There are also a number of efficiency metrics for informal learning which measure usage. If you offer knowledge sharing, then you should at least measure the number of communities of practice, the number of active communities and the number of participants. If you offer performance support, then you should measure the number of performance support tools available and used as well as the number of employees using them. If you offer online content through a portal (not e-learning), then measure the number of items available and used as well as the number of users.

A measurement strategy should always contain a balance of efficiency (quantity) and effectiveness (quality) measures to provide a holistic view and to prevent unintended consequences. The most common effectiveness measures at the course level are:

  • Level 1 participant reaction,
  • Level 1 goal owner or sponsor reaction,
  • Level 2 learning,
  • Level 3 application (intended and actual), and
  • Level 5 ROI (Level 4 is addressed below)

All courses should be measured at Level 1 and important courses should be measured at Level 3 (at least intent to apply). If a knowledge check is important, the course should be measured at Level 2. If it has a large audience or requires substantial resources, it should be measured at Level 5.

At a department level, the effectiveness measures are the same, but they will typically be reported as averages across all courses for Levels 1-3. The learning leader should also examine the distribution to see if there are a large number of low scores hidden by a high average. Level 5 is best reported as a list of ROIs by project.
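The point about averages hiding low scores can be sketched quickly. The scores and the 3.5 "low" threshold below are invented for illustration:

```python
def score_distribution(level1_scores: list, low_threshold: float = 3.5) -> dict:
    """Report the average alongside the share of low scores.
    A healthy-looking average can hide a cluster of poor results."""
    if not level1_scores:
        raise ValueError("no scores provided")
    avg = sum(level1_scores) / len(level1_scores)
    low = [s for s in level1_scores if s < low_threshold]
    return {"average": round(avg, 2), "pct_low": len(low) / len(level1_scores)}

# Level 1 averages on a 5-point scale across ten hypothetical courses:
scores = [4.8, 4.9, 4.7, 4.8, 2.1, 2.3, 4.9, 4.8, 2.4, 4.7]
print(score_distribution(scores))  # → {'average': 4.04, 'pct_low': 0.3}
```

Here a respectable 4.04 average conceals the fact that three of ten courses scored very poorly, which is exactly what examining the distribution reveals.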

For informal learning, Level 1 should be measured periodically (every three to six months) to determine if users are satisfied with their communities of practice, performance support tools, and online content.

In the TDRp framework, we separate Level 4 impact from the effectiveness measures and place it in its own category called outcome measures. We do this because CEOs have told us they most want to see Level 4 from their L&D department but seldom receive it. So, we give it special attention by creating a third category of measures.

Ideally Level 4 is the isolated impact of learning, and Jack and Patti Phillips describe five ways it can be calculated. If it is not possible to isolate impact, then a Kirkpatrick approach can be used to make a convincing case that at least some of the business result was due to learning. (Note that isolated Level 4 impact requires the Phillips approach, and that Level 4 is required to calculate Level 5.) Level 4 impact should be measured for any important program (often consisting of multiple courses and informal learning) that is expected to help achieve an organization goal like increasing sales, reducing injuries, or improving leadership or employee engagement.

In conclusion, my recommendation is to start with the basic efficiency and effectiveness measures for formal learning. You should measure to at least Level 3 application for important courses and Levels 4 and 5 for the most important programs and initiatives. Add informal learning measures as you deploy informal learning. You are likely to be more successful if you start small and then grow over time. In other words, don’t start with 50 measures. Instead, start with 10-15 and add some each year.

Learn more in Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy (ATD Press, 2021).

Does Your Board of Directors Know What You Do?

I had the opportunity last week to talk at a conference of corporate board directors focused on human capital. In a publicly-traded company, corporate directors are the people who hire and fire your CEO and provide oversight on all critical issues. In other words—your CEO reports to the board.

I shared with them the impact learning can have on corporate results and how it can be measured. I also shared the concept of aligning (or creating) learning to directly support the company’s top goals, along with examples of business-like reports to be used throughout the year to ensure planned results are delivered. These are brilliant people, many CEOs in their own right, and they quickly understood the concepts and the impact learning could have.

I then asked if they had ever had a presentation by their Chief Learning Officer (CLO) or whatever they call the person responsible for learning. Only one said they had. Most of the others didn’t even know if the company had a CLO or someone similar. Mind you, the average company in attendance had 10,000 employees and $5 billion in revenue. I am sure most of them had someone in charge of learning at this scale, but the directors didn’t know it.

So, my question to you is, “Does your board know that they have a CLO or VP of Training?” If the group I addressed is representative, they may not. Do they know what you do for the organization? Your priorities? Do they know you have created programs specifically to help the company accomplish its goals? Do they know you are responsible for leadership development and for programs and offerings designed to increase employee engagement and improve retention?

While your CEO or Chief Human Resources Officer (CHRO) may mention learning and development in passing during board meetings, given my experience last week, this is not enough. Your CLO should make a presentation annually to the entire board or at least the governance committee. This would be an opportunity to tell the board about your focus and critical programs and how learning is contributing to achieving the CEO’s top goals.

There are two more items your board of directors should know. First, do you have an annual business plan for learning and development, including the business case for the requested budget and staffing? Second, do you have a high-level governing body dedicated to learning? This body would meet quarterly to provide direction, set priorities, and approve the annual plan. Ideally, the body would be chaired by the CEO and include senior leaders from across the organization. If you have both of these elements, your board of directors should know about it. They don’t need a lot of details. Still, from a governance perspective, they need to understand that the critical and strategic function of learning is aligned strategically to the company’s goals, has a business plan, and is governed by a high-level body.

The good news is that corporate directors will appreciate hearing more about learning that helps accomplish company goals. The bad news is they are not being informed of even the basics, like whether the company has a CLO. Work with your CHRO and CEO to get more visibility for what you do.

Is Data Analytics Your Highest Priority? Maybe It Shouldn’t Be

Data analytics continues to be a very hot topic in our field. Many have identified this as their number one or two priority for the coming year, but should it be?

First, let’s explore some definitions.

Data Analytics

For some, data analytics means any collection or manipulation of numbers. By this definition, all measurement is an exercise in data analytics, including finding the number of participants in a course or the number of courses offered. I think this definition is too broad and not very helpful for our discussion.

In our book Measurement Demystified we define analytics as:

“an in-depth exploration of the data, which may include advanced statistical techniques such as regression, to extract insights from the data or discover relationships among measures.”

Clearly, this definition does not encompass calculating the number of participants or finding the average application rate. In other words, analytics goes beyond arithmetic operations like calculating sums or averages and it goes beyond the simple reporting of a measure.

Analytics could entail looking at a frequency distribution of the data or in-depth analysis of the value of a measure by drilling down to finer levels of detail. It could also be the use of correlation or regression to see if two or more measures are related. (For example, is more learning correlated with higher retention?)


So, what some refer to as analytics is just measurement. We define measurement as “the process of measuring or finding values for indicators.” Finding the number of participants or the application rate for learning are examples of measurement. It is true that measurement and analytics are related. Typically, you will measure first to produce measures, which may then be analyzed to better understand the value of the measure. So, the measurement comes before the analytics. But this is not always the case. Sometimes, we need to do analysis to identify additional measures we need for further analysis. Bottom line, the two concepts are closely related but not synonymous.

Now, back to our question: “Should data analytics be your highest priority?” In my opinion, the answer is no for organizations that have not mastered measurement, which I define as having a comprehensive measurement and reporting strategy. A comprehensive strategy would include the regular measurement of numerous efficiency measures (Level 0) for use at both the program and department levels, regular measurement of Levels 1-3 effectiveness measures for programs, and measurement of outcomes (Level 4) and ROI (Level 5) for selected programs. A comprehensive strategy would also include the aggregation and reporting of these measures across all programs as well as the basic efficiency and effectiveness measurements for informal learning.

If your organization does not have a comprehensive measurement strategy yet, then I believe the greatest benefit will come from building out your measurement strategy. In other words, master the basics before you tackle data analytics, which is a much more advanced topic requiring special expertise. Most organizations today are still not measuring even Level 3 application regularly, which means they have no idea if their learning is applied, let alone producing measurable results. On the other hand, if an organization already has a comprehensive measurement and reporting strategy, then developing a sound analytics capability is the next logical step. In this situation, a wealth of data will already exist, providing a great foundation for analysis.

Data analytics has tremendous potential for the profession, but let’s be sure we have mastered the basics first.

What Is a Carefully Planned Program?

A carefully planned program may mean many things, but our focus here will be on three key aspects to create impact:

  1. Design
  2. Plan
  3. Agreement on roles and responsibilities


1 | Program Design

To be truly meaningful, a program must be designed for impact. This begins with a proper performance assessment to identify the underlying issues and determine if learning has a role in addressing the challenge or meeting the need. Assuming learning plays a role, the appropriate learning assets must be identified and created. This may consist of e-learning pre-work followed by instructor-led training and capped off with performance support tools or enrollment in a community of practice. As good as this approach is, it is still not good enough to be impactful.

Even a well-designed program (as in the example above) that achieves high scores for participant satisfaction and learning may not be adequately applied. From the beginning, the program must be designed with application and impact in mind. This may include more practice time in the program or more examples of application. It will also require a plan to help the leaders communicate the value and expectations of the program to the participants. Even more importantly, it will require a reinforcement plan to help participants apply what they learned.

Many L&D organizations today are not structured to design or deliver this type of holistic plan. Performance consulting, design and development, and delivery may exist as silos, and there may not be a program manager to ensure that all of the parts fit together. Furthermore, in some organizations, ILT program design is separate from e-learning, which is separate from performance support, so no one sees it as their responsibility to pull all the required elements together into one integrated program, let alone prepare a communication and reinforcement plan for the audience’s leaders to ensure impact.


2 | Program Plan

The second key aspect of a carefully thought-out program is the upfront planning of target values for all of the key measures. This begins by working with the goal owner to reach agreement on the expected or planned impact of learning on the goal. For example, if the goal is to increase sales by 10 percent, how much could a carefully planned sales training program contribute to the goal? The goal owner and L&D might conclude that a program that is well designed, communicated, delivered, and applied could contribute 20 percent of the 10 percent goal of increasing sales. In this case, learning is planned to contribute 2 percent higher sales by itself.
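
The arithmetic behind this planning example can be sketched in a few lines (all figures are the hypothetical ones from the paragraph above):

```python
# Hypothetical planning numbers from the sales example.
sales_goal_pct = 10.0          # organization goal: increase sales by 10%
learning_contribution = 0.20   # agreed share of the goal attributable to learning

planned_impact_pct = sales_goal_pct * learning_contribution
# Learning is planned to contribute 2 percentage points of higher sales.
```

The key point is that the planned impact is a negotiated product of the goal and the agreed contribution, set before the program launches.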

Reaching an upfront agreement on planned impact is important for several reasons. First, the higher the planned impact, the earlier in the fiscal year the program must be completed. For very high impact, the program should come out in the first quarter. Second, the higher the planned result, the more effort will be required from both L&D and the goal owner’s organization. The goal owner and the supervisors in that organization need to understand that their time will be required for impact—not just to help design the program but also to communicate and reinforce it. Few programs are so impactful in their design and delivery that significant impact can be achieved without the active involvement of the goal owner’s organization after delivery is completed.

Planning for a high-impact program, however, doesn’t stop with agreement on a measure of success like impact. L&D and the goal owner also need to agree on plans or targets for the critical efficiency and effectiveness measures necessary to deliver the planned results. For example, on the efficiency side, how many participants must complete the program by what date to have impact? What is the budget for the program, and is it sufficient to deliver the planned result? On the effectiveness side, what application rate must be achieved to have the desired impact? The higher the intended impact, the higher the application rate must be. A carefully planned program will have plans or targets set for these key efficiency and effectiveness measures, all designed to deliver the intended impact.

3 | Agreement on Roles and Responsibilities

The last element in today’s discussion is agreement on roles and responsibilities between L&D and the goal owner. What precisely must each party do to achieve joint success in delivering the planned impact? L&D has primary responsibility for performance consulting, design, development, and delivery, as well as for creating plans for the goal owner to use in communication and reinforcement. The goal owner must assist L&D in performance consulting and design since they know their organization best. They also must make subject matter experts (SMEs) available in a timely manner. They will have the primary responsibility to communicate to their employees the reasons for the learning program and the expectation that it be applied. And they have the primary responsibility to ensure that it is applied. At Caterpillar we created a written document listing the roles and responsibilities of both parties to review with goal owners at the start of the engagement.

In Sum

In conclusion, creating a carefully planned program to deliver impact may not be rocket science, but it is much more complicated than it may first appear, which explains why so many programs fail to deliver significant impact. Keep these three aspects in mind when you design key programs and see what a difference they can make.

The Critical Role of Leaders in L&D Measurement

I was talking with a colleague who had been asked to create a list of metrics for their organization to use. I asked if the leader had provided context or shared areas of interest or perhaps areas of concern where metrics would be important to better understand and manage the issue. They said no, the leader just wanted a list of metrics. Apparently, any list would do.

Unfortunately, this scenario is all too common. Leaders know they should have some measures and so they ask the measurement or analytics team to come up with a list or perhaps even a complete measurement strategy. In this case, the leader is not taking ownership for measurement and may not understand the critical role they play in the measurement and reporting process.

Ideally, measurement should be an extension and enabler of a leader’s management strategy. They should view measurement as essential to their success and champion it within the organization. As the CLO for Caterpillar, I could not succeed without measures and by extension, none of our efforts to improve learning and deliver value could succeed without measurement. We had to know our existing baseline to determine where improvement was necessary. Once we put plans in place, we had to measure monthly to know if we were on track to deliver the promised results by year-end. I literally could not manage without data.

A leader who is committed to running learning with business discipline will understand this. There are, however, good leaders who do not understand their role in measurement. They think the measurement strategy should be delegated to the subject matter experts—namely, the staff with experience in measurement and analytics. In these situations, staff will need to help their leader understand the important role leaders play in selecting measures and creating a measurement and reporting strategy. Hopefully, the leaders will be willing to listen.

In addition to conveying how important measurement is, a good leader will provide direction in measurement selection and reporting. A good leader will share the measures of most interest to them or at least share their area of concern or focus so the measurement analyst can recommend appropriate measures. A good leader will also share the reason to measure which will dictate the type of report to use. The measures may be used simply to inform (for example, a number of courses or participants) but may also be used to monitor to ensure measures remain within acceptable bounds (for example, ensure participant reaction remains above 80% favorable). The leader must provide direction by identifying the most important measures and setting the thresholds for monitoring.

Leaders may also want to measure to manage programs or initiatives to a successful conclusion (for example, reaching 90% of the employees with learning or improving the application rate for key programs to 80%). This takes a special kind of report. Here again, leaders will play an integral role by setting plans or targets for all the key measures. They should not delegate this very important task to measurement analysts. Last, leaders may want to know how good a program was (for example, did it have an impact and was it worth doing?). This also requires a special type of report, and the leader will play an important role in reaching an agreement with goal owners on plans for key measures at the start of the program.

In conclusion, leaders need measures to do their jobs and consequently, leaders must be heavily involved in the creation of a measurement and reporting strategy. Put differently, leaders are the most important users of the measures so they must be involved in the measurement strategy from the beginning and they must own it.

Future-Focused Training: Are You Committing Learning Malpractice?

This article is reprinted from Chief Learning Officer, January 10, 2022

The temptation is strong. Over the past year many articles, webinars, and conference sessions have highlighted the need to focus on the skills your future workforce will need. Moreover, your CEO may be pushing you to provide training to meet these future needs. After all, who wants a workforce with outdated skills? Most would say it just makes sense to start this important training now. But does it?

My answer is no, and I believe many who proceed down this path are guilty of learning malpractice. First, it is very difficult to identify true future needs. Second, even if future needs are correctly identified, the participant cannot immediately apply their learning since the need does not exist today. This second objection cannot be overcome, and I assert that any training which cannot be immediately applied is a clear case of learning malpractice. Let’s examine each issue in more detail.

Identifying future skills, knowledge, and behavior is hard. Very hard. First, you have to define the “future”. Most writers seem to talk in terms of two to five years, so let’s go with that. Second, how would you go about identifying the skills needed in two to five years? Many of us struggle today to have productive performance consulting discussions with our stakeholders about their current performance needs. Can you imagine having this discussion about their performance needs two to five years out? Remember, we are not talking about what their employees need to be able to do today—we are talking about what new skills and knowledge they will need to meet future requirements. Most will not be able to identify new organization-specific skills needed in two to five years.

Consequently, many practitioners instead rely on survey results about future skills and recommendations by consultants and vendors. You have seen these lists. They are not organization-specific and they highlight primarily soft skills like leadership, creativity, critical thinking, teaming, and presentation skills although the lists generally include a few “hard” skills like data literacy. But are these truly new skills needed for the future workforce? Aren’t these very same skills needed right now? In fact, I have not seen a generic list of “future skills” which is different from much needed “current skills”, although it is certainly possible that some organizations could identify a few specific new, hard skills.

For the sake of argument, let’s suppose we could identify skills and knowledge not needed today but very much needed in two to five years. Now we run into an insurmountable problem and that is application. As a profession, we already struggle with low application rates or conversely high scrap rates. Recent research by Jack and Patti Phillips shows a scrap rate of 80% for learning during the pandemic. That is 80% – not 8%. Even prior to Covid, researchers were reporting scrap rates of 40%-80%. So, even with learning which can be applied immediately to meet an existing need, most learning today is not applied on the job.

What application rate could we expect for learning designed to meet a future need which, by definition, cannot be applied until some future date? The answer is simple: about 0%. Participants will have forgotten what they learned before they have a chance to apply it. In other words, the scrap rate will be 100% and the learning will be a complete waste of time and effort. This is the very essence of learning malpractice.

In summary, identifying truly new skills and knowledge needed two to five years in the future is nearly impossible, and even if it were possible, the application rate will be about 0%. Therefore, it is learning malpractice to provide training today to meet a need two to five years in the future. Instead, your first priority should always be to meet today’s needs, and here we must focus on raising the application rate. Your second priority should be to emphasize training that will become increasingly important in the near future but which can be applied today. If you agree, let’s try to reset the conversation so we have greater clarity and transparency about what we can and cannot realistically accomplish.

Simplify Your Measurement of Leadership Development

Many find measuring the impact of leadership development especially difficult. They try to tie leadership training results to organizational goals which may be impacted by leadership. Unfortunately, this is a long list since one can reasonably argue that better leaders should contribute to achieving ALL the organization’s goals, including higher sales, greater productivity, more cost reduction, better quality, higher employee engagement, and lower turnover. I have never seen anyone do this successfully since the impact of leadership on all these goals is indirect and very difficult to quantify.

There is a better and much easier approach. Start by stepping back and asking why you are doing leadership development. If the answer is to address one goal, like sales, then measure its impact as part of your suite of learning programs to improve sales. In this case, you should have designed the leadership training specifically to help achieve that one goal (like sales), and the target audience would be only those leaders in sales.

Chances are your needs analysis also found other ways learning could help achieve the sales goal, so it is likely you are doing more than just leadership training to improve sales. When it comes time to measure impact, ask the participants (and perhaps their bosses) what impact the suite of learning programs had on the improvement in their sales. If you are using the Phillips methodology, there are five methods you can use to isolate the impact of learning from other factors.

More often, the leadership program is developed to help improve leaders across the organization. In other words, the leadership training is not developed to help achieve a single goal but instead to increase leadership competency in general which should help achieve all the goals. In this case, the best approach is to measure the impact directly on leadership. Ask the leaders whether the training improved their leadership, but also ask their employees and the leader’s boss whether they saw an improvement in the participant’s leadership. The key is to measure impact as directly as possible.

In addition to these measures, most medium and large organizations have some version of an employee engagement survey and most of these surveys have questions about leaders. At Caterpillar, we had a 50-question survey with seven questions about leaders. Create a leadership index based on the leadership questions and use this index to gauge the improvement in leadership.

If your leadership program is well designed to address your organization’s leadership needs, the index should improve after leaders have completed the program. If you wish to isolate the impact of the leadership program on the change in the index, then you could use a naturally occurring control group (if available) or ask the participants and their bosses to estimate how much of the change in the index was due to the leadership training.

If you would like to calculate an ROI on the program, you could easily ask each leader to identify an increase in income or productivity or a decrease in cost. Then you would apply the Phillips methodology to convert the impact to dollars and calculate the ROI.

There is another method, easier at scale, that can work if a goal of the leadership training is to make leaders more efficient in their interactions with employees. Better leaders often will conduct more efficient staff meetings and require less time helping employees set goals and review performance. Simply add a question about time saved due to better leadership to the survey given to the participants and their employees. Then find the value of the time saved by multiplying the hours saved by the labor and related rate of the participants. (Needless to say, don’t use this method if the goal of your program is for leaders to spend more time with their employees!)
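
As a sketch, here is how the time-saved calculation might look. The participant count, hours saved, working weeks, and labor and related rate are all hypothetical figures for illustration:

```python
# Hypothetical survey results: annual value of leader time saved.
participants = 200          # leaders who completed the program
hours_saved_per_week = 1.5  # average reported hours saved per leader per week
weeks_per_year = 48         # working weeks, a simplifying assumption
labor_related_rate = 75.0   # $/hour: salary plus ~25% related costs

annual_value = participants * hours_saved_per_week * weeks_per_year * labor_related_rate
# 200 * 1.5 * 48 * 75 = $1,080,000 per year in time saved
```

This dollar value of time saved is then the benefit used in the Phillips-style ROI calculation.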

Bottom line, take the most direct approach possible to measuring leadership impact. This will simplify your measurement and will be much easier than trying to measure impact on all the organization’s goals.

The L&D Industry’s Answer to Measuring Leadership Development

by Kent Barnett, CEO, Performitiv

I think we would all agree that as an industry we do a very poor job of measuring, communicating, and improving the value of leadership development programs. I think most, if not all of us, would also agree that leadership development is one of the most strategic business processes in our respective organizations.

So, why is it that we do such a poor job in this area? The answer is that it’s hard and confusing. The good news is that dozens of leading learning organizations and experts have come together to create a systematic framework for measuring leadership development. It is designed to work for large and small organizations and to be flexible and easy to get started with.

As many of you know, the Talent Development Optimization Council (TDOC) was created over three years ago to address two primary issues:

  1. Better Communicate Learning’s Value
  2. Optimize Learning’s Impact

TDOC created the Net Impact System (NIS) to provide a systematic approach to address these two issues. Organizations that have started to apply the NIS principles are seeing huge gains in business results, effectiveness, scrap reduction, outcome indicators, and Net Promoter Score. TDOC is now focused on leadership development.

Attend Kent Barnett’s Session, Measuring Leadership Development

Join Kent Barnett at the 2021 CTR Annual Conference where he will teach you how to build world-class dashboards and management reports for leadership development. If you are interested in learning how to use metrics to demonstrate value, then you can’t miss this event!

When: November 2, 2021
Time: 3:00 – 3:35 pm (EDT)
Speaker: Kent Barnett, CEO, Performitiv

Register for this Event Now!

About Performitiv

Performitiv provides the technology, methodology, and expertise to help understand learning effectiveness, business impact, and continuous improvement opportunities. Their technology helps to streamline, automate, and upgrade evaluation processes to save time, simplify data collection, and improve the overall effectiveness of L&D operations.

Make or Buy? Not as Simple as It Sounds

We are often faced with the decision whether to design, develop, and deliver a course in-house or to pay an outside party. This is a classic “make or buy” problem, which turns out to be more complicated than most think. Since some critical costs are often omitted in the calculation of the make option, many practitioners choose the make option when the buy option is actually less expensive. Of course, there are reasons other than cost for developing a course internally, but we will focus only on cost in this article.

I’ll begin with an example. Suppose I want to develop and deliver a two-hour course which will take 100 hours of staff time to design, develop, and deliver. The labor cost for the 100 hours is $30 per hour for a total of $3,000. Assume the instructor travel expense is $1,500 and materials are an additional $500. We will assume there is no room rental charge and no other direct expenses. Furthermore, suppose I have a vendor willing to deliver the same course for $7,500. Based on cost alone, which option should I choose?

At first glance, the answer appears obvious. The internal cost comes to $5,000, and if I choose this option, I save $2,500. Is that the correct decision though? Maybe not. First, the $30 per hour may represent only salary, neglecting benefits like the employer-paid portion of FICA, health care, pension, unemployment, and other costs associated directly with the employee’s salary. These are called “related costs” and typically are about 25 percent of the hourly rate for management employees. Adding these in, we have a labor and related rate of $37.50 per hour and a labor and related cost of $3,750. You may be thinking that even with the additional $750, the decision still favors the make option, but we’re not finished.

We also need to add the burden rate and here is where it gets complicated. The burden rate takes two things into account. First and the more obvious of the two, burden accounts for the overhead required to support the department. This includes office rental space, utilities, computers, copy machines, office supplies, telephone, etc. as well as items like travel for conferences and education, subscriptions, fees, and consulting not directly related to a course. We include as overhead only those costs not directly associated with or attributable to projects.

This cost needs to be spread across all of the hours attributable to direct work. Typically, an employee works around 70 percent of their 2080 annual hours (52 weeks x 40 hours per week). The rest is spent on holiday, vacation, sick time, staff meetings, performance reviews, general planning, training and development, managing others, etc. So, we divide the total overhead burden for the department by the sum of the attributable hours, not the sum of all hours. This is often close to the average hourly rate, so for this example let’s say it’s $25.

But we’re still not finished adding internal costs. The second component of burden is the cost of the non-attributable hours discussed above. In other words, what is the labor and related cost of the 30 percent of total hours employees do not spend directly working on task? Employees still need to be paid 40 hours per week even though they are not working on task for all of those hours. This money has to come from somewhere. Those of you who are consultants understand this well. You have to generate enough revenue to cover your time when you’re not on a billable project. This becomes an important part of burden and is not a small number. We divide this additional burden by the number of attributable hours to get the second component of the burden rate. Let’s say this comes to $15 per hour.

In this example, the two components of burden add up to $40 per hour, which is more than the labor rate and not unusual. Now, our fully burdened labor and related rate is $30.00 + $7.50 + $25.00 + $15.00 = $77.50 per hour, more than double the labor-only rate of $30 per hour. This pushes the make option to $7,750 for labor plus travel ($1,500) and supplies ($500), for a total cost of $9,750. It turns out that the less expensive option is to buy rather than make.
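Collecting the arithmetic above into a short script makes the comparison easy to rerun with your own numbers. This is only a sketch of the example: the 100-hour figure is implied by the text ($3,000 of labor-only cost at $30 per hour), and the $7,500 buy price follows from the $5,000 internal estimate plus the $2,500 claimed savings.

```python
# Make-vs-buy comparison using the illustrative numbers from the example.
HOURS = 100                    # implied: $3,000 labor-only / $30 per hour
LABOR_RATE = 30.00             # salary-only hourly rate
RELATED_PCT = 0.25             # "related costs" as a share of salary
OVERHEAD_BURDEN_RATE = 25.00   # dept. overhead / attributable hours
NONATTRIB_BURDEN_RATE = 15.00  # cost of non-attributable time / attributable hours
TRAVEL, SUPPLIES = 1500.00, 500.00
BUY_PRICE = 7500.00            # $5,000 internal estimate + $2,500 claimed savings

related = LABOR_RATE * RELATED_PCT  # $7.50 per hour
fully_burdened_rate = (LABOR_RATE + related
                       + OVERHEAD_BURDEN_RATE
                       + NONATTRIB_BURDEN_RATE)               # $77.50 per hour
make_cost = fully_burdened_rate * HOURS + TRAVEL + SUPPLIES   # $9,750

print(f"Fully burdened rate: ${fully_burdened_rate:.2f}/hour")
print(f"Make: ${make_cost:,.0f}  vs.  Buy: ${BUY_PRICE:,.0f}")
print("Decision:", "buy" if BUY_PRICE < make_cost else "make")
```

Swapping in your own department’s burden rates and attributable hours is all it takes to test your own make-or-buy decisions.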

As you can see, there is more to the calculation than at first glance. Are you making these calculations in your organization? Do you know your total burden cost and the number of attributable hours? Many don’t and consequently think they are saving their organization money by developing and delivering courses using internal staff when it would be more cost-efficient to hire out.

Attend the Make or Buy Session at the CTR Annual Conference

I’ll be covering make or buy in more detail during the CTR Annual Conference on November 3. Click here to learn more and to register. Hope to see you there!

Why Does L&D Measurement Remain So Difficult?

Why does L&D measurement remain so difficult? Put a different way, why hasn’t the profession made greater progress over the last ten years? ATD research shows clear progress was made from 2001 to 2009 in the percentage using higher-level program evaluations (3, 4, and 5), but little has changed since then. We continue to hear from a majority of colleagues how difficult measurement is.

Many explanations have been offered, but the most common is that practitioners lack the knowledge: they don’t know what to measure, how to measure it, or what to do with the measures once they have them. For program evaluation, though, this is hard to understand because so many books and workshops cover the subject.

At a broader level, however, I think the profession has lacked a comprehensive framework that addresses how and what to measure for all the reasons to measure—not just program evaluation. We know that most measurement and reporting activity is focused on informing and monitoring, not on program evaluation. Just think of all the scorecards and dashboards you use; these are not for program evaluation. And there has been no guidance whatsoever on reporting and on how to choose the most appropriate report based on the user and reason to measure.

Talent Development Reporting Principles (TDRp) were created specifically to meet the need for an overarching measurement framework. Peggy Parskey and I wrote the book Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy to share this framework and guidance. Our hope is that practitioners will now have the answers to their questions about what and how to measure, and how to report their results once they have them.

Knowledge and lack of a comprehensive framework, however, probably don’t explain all the discomfort with measurement. For some, measurement implies accountability, and many don’t want to be accountable for results. After all, what if we measure and find that application rates are low, impact is negligible, and ROI is low or negative? What if we find we have poor rates of on-time completion for development and delivery or a high cost per learner? This is a culture issue within L&D and more likely within the entire organization that cannot be solved by reading books or attending workshops. Leaders, starting with the CLO, need to make it clear that they want to measure in order to deliver better results and learn what is working so that opportunities for improvement can be identified.

Even if the issues of knowledge and culture are addressed, there may still be a hesitancy to measure and report due to a lack of confidence. This is typical for analysts and others new to their position who simply don’t have much experience with measurement and reporting. They have read the books and attended the workshops, and they may have a supportive culture, but they just aren’t sure about the right measures to use or how to calculate and report on them. Unfortunately, the only way to gain confidence is to do the work. Mistakes will be made, but we all make mistakes and the goal is that we learn from them. A supportive boss or a good coach or mentor can help, but often that is difficult to find.

Another possible explanation is turnover. Like other professions, it takes years to acquire all the practical knowledge needed to excel at a job. I am speculating, but could it be that those responsible for measurement and reporting don’t stay in the position long enough to truly master it with confidence?

Finally, I’ve heard that measurement and reporting are difficult because practitioners don’t have the resources to do a good job: a small budget for measurement and not enough staff. In many cases, this reflects a lack of appreciation for measurement and reporting by the CLO and other senior leaders, which in turn may reflect a lack of knowledge or accountability on their part.

By the way, these same issues apply across all of HR regarding measurement and reporting, so it is not just L&D.

If you’re committed to making measurement and reporting a part of your L&D strategy, please join us at our virtual conference November 2-4 to explore this issue in more detail. Kimo Kippen will lead a panel discussion on this very topic on November 3 at 2:45 p.m. ET, and Michelle Eppler from the Human Capital Lab at Bellevue University will share results from a survey she just completed to probe the reasons. Registration is free.

Mid-Year Check Up on Measurement and Reporting

Since we are well into the month of July, I thought it the perfect time to reflect on what we have accomplished so far this year in the world of measurement and reporting.

If you are running learning like a business, that means you set specific, measurable plans or targets for your key measures. How are they doing compared to the plan? Are they where they need to be if you are going to make the plan by the end of the year? If you have started to provide forecasts for how the year will end, you can just compare the plan to the forecast. If you are not yet at the stage of making forecasts, just ask yourself if you are on track to make the plan by the end of the year. Even if it appears you will not make the plan on your current trajectory, the good news is that you still have six months to do something about it. What steps can you take now to get back on plan?

Even if you did not set plans for measures, this is a perfect time to see what you can learn from the first six months. Here are some measures to check:

  • Completion rate: Are you getting the completion rates for formal learning programs that you would expect? Are there significant differences between programs or audiences?
  • Reach: Are you reaching as many employees with learning as you hoped? What percentage of employees have been enrolled in formal learning? How many have engaged in some informal learning?
  • Cost: Do you know what your costs are for the first six months? Are you on budget? Where are the surprises and what can you learn from them?
  • Total Participants: While reach measures whether an employee has taken at least one course or touched at least one informal learning asset, total participants counts every course taken and every asset used, so an employee can be counted more than once. Your reach may be meeting expectations while total participation is not. Are you satisfied with your total participation?
  • Usage: Closely aligned to participation is your usage of online and virtual courses as well as informal learning like communities of practice, portal content, and performance support tools. Are you getting the usage you want?
  • Level 1: Are your participants’ reactions to the learning at an acceptable level? Did they drop when you switched from ILT to VILT? How do your L1 scores compare for VILT and online? Are you measuring L1 for your informal learning assets? How do they compare to formal learning? Where are your opportunities for improvement?
  • Level 2: Are you satisfied with the test scores for your VILT and online courses? Are the VILT scores lower than you were getting for ILT? If so, does the content or instruction need to change?
  • Level 3: Are your application rates where they should be, or do you have too much scrap learning? Did L3 decline during COVID? If it did, what could you change to improve?
  • Employee Engagement: If you are measuring employee engagement quarterly, has employee engagement with their learning changed with COVID? Are employees happy with the learning opportunities available to them? If it dropped initially after employees began to work from home, has it come back to an acceptable level?

These are just some of the questions that you should consider answering with your data. Remember that a key aspect of running learning like a business is learning from the data. Mid-year is a perfect time to see what you can learn from the first half of the year and then use that knowledge to make improvements in the second half for a strong finish.

Are We Over Using Dashboards?

Dashboards have become increasingly popular, especially those with well-designed visual elements. For many applications, they do represent a great advance from the more boring scorecards filled with rows of data. That said, the question now is whether we have gone too far and are relying too much on dashboards when in fact other types of reports would be better. My answer is yes.

Many practitioners today appear to believe that dashboards are the best, if not the only, way to share data, and this is a problem. It would be better for us as a profession to utilize many different types of reports and tailor the type of report to the specific need of the user, which in turn requires us to think more carefully about the reasons for measuring in the first place. We describe four broad reasons to measure in our new book, Measurement Demystified, and each of these four reasons is linked to the type of report best suited to meet the user’s needs. The dashboard is suited to only two of these four reasons.

The first reason to measure is to inform. This means the measures will be used to answer questions from users and discern if trends exist. The question may be about the number of participants or courses, or perhaps about the participant reaction or application rate. In any case, the user just wants to know the answer and see the data. If they want to see it by month and especially if they want to see subcategories (like courses by type or region), a traditional scorecard will be best with rows as the measures and columns as the months. If the user is interested in year-to-date summaries and more aggregate data as well as some visual representations, a dashboard will be best. So, even for this one reason (inform), the best report depends on what the user wants to see. These may be one-off reports or reports that are regularly updated.

A second reason to measure is to monitor. This occurs when a user is happy with how a measure is performing and wants to ensure the value remains in an acceptable range. For example, participant reaction scores may average 80% favorable and the CLO wants to ensure they stay above 80%. In this case, a dashboard with thresholds and color coding is a perfect way to share the measures. This may be the only element in the dashboard or it may be combined with some other elements. This type of dashboard should be generated monthly.
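As an illustration, the threshold-and-color-coding logic behind such a monitoring dashboard can be sketched in a few lines. The green cutoff matches the 80% example above; the yellow cutoff is hypothetical, added only to show how a warning band would work.

```python
# Traffic-light status for a monitored measure, e.g. % favorable reaction.
# green_at matches the 80% example; yellow_at is an illustrative warning band.
def status(value, green_at=0.80, yellow_at=0.75):
    """Classify a score into green / yellow / red for dashboard color coding."""
    if value >= green_at:
        return "green"
    if value >= yellow_at:
        return "yellow"
    return "red"

for score in (0.83, 0.77, 0.70):
    print(f"{score:.0%} favorable -> {status(score)}")
```

In a real dashboard tool the same thresholds would simply be configured on the widget, but the decision rule is no more complicated than this.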

The third reason to measure is to evaluate a program and share the results. In this case, a program (like sales training) has been completed and the users want to evaluate and share the results with others. This is a one-off report designed to be used at the end of a program or perhaps at the completion of a pilot. Here, a dashboard should not be used. Instead, a program evaluation report is best: it takes the audience through the need for the training, the planned results, the activities completed, the actual results, and lessons learned. The report will probably be a PowerPoint but could be a written document.

The fourth broad reason to measure is to manage. In contrast to monitoring, managing means that a goal has been set to improve the value of a measure, perhaps increasing the application rate from 40% to 60% or reaching an additional 2000 employees with learning. If monitoring is about making sure the status quo is maintained, managing is about moving the needle and making progress. In this case, a dashboard should definitely not be used because it would not convey the key information or the detail needed to make management decisions.

Every month a manager needs to know whether their efforts are on plan and whether they are likely to end the year on plan. For this, they need a management report which includes the plan or target for the year, year-to-date results, and a forecast of how the year is likely to end if no additional actions are taken. This type of information is very difficult to share in a dashboard format which is why special-purpose management reports have been designed for L&D. These are generated monthly and focus on both specific programs and aggregated department results. In contrast to dashboards and program evaluation reports, these management reports are not meant to be “presented” but to be used in working sessions to identify where action is required.

In conclusion, dashboards have their place but should not be the only type of report generated by L&D. Dashboards are recommended in two cases: 1) when the reason to measure is to inform and the user wants summary data along with visual elements, or 2) when the reason to measure is to monitor in which case thresholds will need to be included. Dashboards are not recommended when the reason to measure is to inform and the user wants detailed, monthly data. In this case, a scorecard is preferred. Nor is a dashboard recommended to share program evaluation results or to manage programs or the department. In each of these cases, better report types exist and should be employed (program evaluation and management report respectively). Bottom line, it is important to use the right type of report which should match the reason for measuring.

When it Comes to Learning Measurement—What is Good Enough?

When it comes to learning measurement—what is good enough?

This is a question that needs to be asked much more frequently, especially when isolating the impact of learning (Phillips’s level 4) or discussing the accuracy of an ROI calculation. Context is key. Business economists think about this question all the time because the one thing they know for sure is that their forecast (for GDP, sales, housing starts, commodity prices, exchange rates, etc.) will be wrong. Only by chance will the forecast be exactly right. So, the question is not whether the forecast will be exactly right but whether it will be close enough to make the right decision. For example, should we raise production, should we hire more workers, should we invest in A rather than B?

We need to apply this same type of thinking in learning. We need to start with the context and the reason for measuring. What decision are we trying to make or what will we do with the estimate once we have it? Given the answers to these questions, how close does our estimate of impact or ROI need to be? I cannot think of a single instance where the estimate needs to be perfect. It just needs to be good enough to help us make the right decision or take the right course of action.

So, let’s step back for a minute and ask why we might estimate impact or ROI. First, I think we would all agree that we want to identify opportunities for improvement. If this is the context, how accurate does our estimate of impact need to be? In this case, the estimate just needs to be roughly right or “in the ballpark.” For example, if the true (but unknown) ROI is 20%, we would like an estimate in the 10%-30% range. Typically, we would conclude that an ROI in this range has opportunity for improvement. Similarly, if the true ROI were 100%, we would want our estimate to be in the 70%-130% range, and we would likely conclude that no improvement is necessary.

Will the standard methods to isolate impact (control group, trendline analysis, regression, or participant estimation methodology) be good enough for this purpose? I believe so. We simply need to know whether improvement is required, and we want to avoid making an improvement when none is needed or failing to make one when it is needed. In other words, if the ROI is truly 10%, we don’t want an estimate of 100%, and vice versa. The standard methods are all good enough for us to make the right decision in this context.

Now, suppose the reason to measure impact and ROI is not for improvement but to demonstrate the value or effectiveness of the program. At a minimum we want to be sure we are not investing in learning that has no impact and a negative ROI. This is a bit more demanding but the same logic applies. We want the error margin around the estimate to be small enough that we can use the estimate with confidence. For example, if the estimate for ROI is 10%, we want to be confident that the error margin is not plus or minus 10% or more. If it is, we might conclude that a program had a positive ROI when in fact it was negative.

In this context, then, we want to be more confident in our estimates than in the first scenario. Stated differently, we want smaller error margins. We will use the same four methods, but we need to be more thoughtful and careful in their use. We would have the most confidence in the results obtained using a control group as long as the conditions for a valid control group are met, so extra care needs to be taken to make sure the control group is similarly situated to the experimental group. Trendline and regression also can produce very reliable estimates for the “without training” scenario if the data are not too messy and if the fit of the line or model is good. All three of these methods are generally considered objective and, when the conditions noted above are met, should produce good estimates of the impact of learning with a suitably narrow error margin.

The participant estimation method is the most widely used because no special statistical expertise is required and because often there are no naturally occurring control groups. However, it does rely on the subjective estimates of the participants. Accordingly, we will want to be sure to have 30 or more respondents and, ideally, we will obtain their estimates of impact about 90 days after the training. It is also critical to adjust each estimate of impact by the participant’s confidence in that estimate. When this methodology is used as described by Phillips, it, too, should produce estimates reliable enough to be close to the actual but unknown impact and ROI.
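As a rough sketch of the confidence adjustment in the participant estimation method: each respondent’s impact estimate is discounted by their stated confidence before the results are summed. The respondent figures below are entirely hypothetical.

```python
# Confidence-adjusted participant estimates (hypothetical data).
# Each tuple is (estimated annual monetary impact, confidence 0-1).
responses = [
    (12000, 0.75),
    (5000, 0.50),
    (20000, 0.25),
]

# Discount each estimate by the respondent's confidence in it.
adjusted = [impact * confidence for impact, confidence in responses]
total_adjusted_impact = sum(adjusted)

print(adjusted)               # [9000.0, 2500.0, 5000.0]
print(total_adjusted_impact)  # 16000.0... -> 16500.0
```

This deliberately conservative adjustment is one reason the method, despite relying on self-reported data, can produce estimates that are good enough for the decisions described above.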

The common theme in both scenarios is good enough. At Caterpillar we conducted about three impact and ROI analyses per year using the participant estimation method. We used the results to both show the value of the programs and to identify opportunities for improvement. We always presented the results with humility and acknowledged the results were estimates based on standard industry methodology with adjustments for self-reported data. We had confidence that the estimates were good enough for our purposes and we never received any pushback from senior leadership.

So, remember that your results do not need to be perfect, just good enough.

Where Measurement and Reporting Strategies Go Wrong


by Peggy Parskey, CTR Assistant Director

Imagine you are a manager in a Learning and Development function in your company. Each month, you receive several reports (or links to a dashboard) with a plethora of data. The reports focus on the function overall, but with a bit of digging you can find the data relevant to you. While the reports and dashboard provide a lot of information, you lament that they don’t help you manage your operation. Beyond the inadequacy of the reporting, you believe the organization doesn’t measure the right things or provide enough insights about improvement opportunities. You suggest to your CLO that the organization needs a measurement and reporting strategy.  The CLO agrees and delegates the job to your measurement person if you have one, or lacking that, to you. (Since you asked, you own it.)

Having never created a measurement and reporting strategy, you conduct a Google search.  Before November 2020, your search would have surfaced ads for product or service companies at the top, followed by a few white papers from these same companies or blogs with best practices and advice. Some content focused on measurement while others only addressed reporting.  Few, if any, provided a checklist of elements to include in your strategy. Lacking any meaningful guidance, you cobble something together from the disparate pieces of information found on the web.

This theoretical manager exists in organizations across the globe. Over the past 5-10 years, I have reviewed dozens of client-generated strategies. Most have some of the components of a strategy, but nearly all lack the critical elements to advance measurement capability in the organization. In November 2020, David Vance and I published Measurement Demystified with the express purpose of providing practical guidance on how to create a robust measurement and reporting strategy.

In this post, I’ll give you an overview of what comprises a well-designed strategy. But before sharing that with you, I’d like to review what a measurement and reporting strategy is not. Each bullet below represents something I’ve seen over the years as clients have shared their strategies.

  • Focused on reporting rather than measurement and reporting: You need both: what and how you measure as well as what and how you report. Reports are an output. Without the appropriate inputs (that is, the measures), you will be challenged to meet user needs.
  • Kirkpatrick’s four levels or Phillips’s five levels: Both frameworks provide important guidance and processes to select and report effectiveness measures. However, neither framework addresses the efficiency measures you should choose. Moreover, while both address outcome measures, they don’t provide detailed guidance on when and how to use them. They are an essential element of your strategy, but they are not the strategy.
  • One-size-fits-all approach to measurement: To avoid implementing an unsustainably complex approach to measurement and reporting, many organizations have opted to go in the opposite direction and adopt an overly simplified approach. They identify a single suite of measures, methods, and reports across all programs regardless of their purpose. While this approach may meet the needs of some users, it will inevitably frustrate others who lack the data to manage their operation.
  • A focus on one type of report: Dashboards are extraordinarily popular as a reporting tool and enable “speed of thought reporting”. The data is up-to-date, users can filter the data, and they can drill down to look for root cause of specific program issues. But dashboards can only take you so far. A robust strategy adapts the reporting to the user needs and their reasons for measuring.
  • A tool to gather the data: If you are a large organization, you will need a tool to aggregate and disaggregate large volumes of data. The tool is not the strategy.

Now that you have insight into what a measurement and reporting strategy is not, let’s turn to what it is.

  1. Begin your strategy with a clear articulation of why you are measuring. Different purposes for measuring influence what you measure and what, how, and when you report.
  2. Next, identify your users and their needs. Who will consume the data? What do they need? What decisions might they make? What actions might result from the information you provide? A strategy needs to be grounded in their requirements.
  3. Specify the measures overall that you will use. However, don’t stop there. Identify the specific measures for key learning programs or department initiatives. Include a balanced suite of measures including efficiency and effectiveness measures for all programs and outcome measures for strategic, business-aligned programs.
  4. Define your data collection approach. Where will you get the data? Where should you consider sampling? How will you ensure you get sufficient responses to make meaningful inferences from survey data? Where can you automate to reduce effort, increase data reliability, and speed time to insight?
  5. Specify the types of reports you intend to use. When will you employ scorecards vs dashboards? When should you use program evaluation reports or management reports?
  6. Plan how and when you will share reports. What are the decision-making cadences and how can you align reporting to them?
  7. Finally, define the resources you need to execute and sustain the strategy. Be clear about what funding, capability, and tools your organization will require to build sustainable measurement.

Creating a measurement and reporting strategy will take time and effort.  You will need to meet with the CLO as well as senior L&D leaders and perhaps key business goal owners. The payoff for this effort will be significant and will enhance the value the L&D function delivers to the organization.  Don’t hesitate to reach out to us at the Center for Talent Reporting. We are here to help you in your journey.


Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy

As some of you may be aware, the title of this blog is the name of the book Peggy Parskey and I recently published with ATD. It is the culmination of all our work since 2010 to create a framework and a set of standards and best practices to make it easier for L&D professionals to create a measurement and reporting strategy. It is also the result of input, feedback, and suggestions from the thousands of professionals who attended our webinars, workshops, and conferences. So, in essence—the book is a joint effort by everyone to advance the profession.

For those unaware of the backstory of the book, here’s a little history…

It all began in September 2010 at a CLO Symposium networking event. Kent Barnett, at the time CEO of Knowledge Advisors, and Tamar Elkeles, at the time VP of Organizational Learning for Qualcomm, were discussing the state of our profession. They both agreed that the time had come for measurement and reporting standards— similar to what accountants have in the Generally Accepted Accounting Principles (GAAP). Accountants go to university and learn about the four types of measures (income, expense, assets, and liabilities), the specific measures in each category, how to calculate the value for each measure, and what to do with the measure once they have its value (i.e., in what report it should be included and how it should be used). They wondered if a similar framework would benefit L&D.

Kent and Tamar set forth to form an advisory council to pursue their vision and ideas. They engaged leading practitioners as well as thought leaders in the field, about 28 in all. They also recruited both Peggy Parskey and me to lead the effort. We produced a draft document in early 2011, which would go through over 20 iterations as we incorporated feedback from the council and others. The final document was completed later in 2011, and we then began to expand the principles to all areas of HR, an effort completed in 2012. The standards were named Talent Development Reporting Principles (TDRp). Similar to GAAP, TDRp provides a framework for measurement and reporting for L&D.

By mid-2012, the foundational work was complete, and TDRp needed a home. We established the Center for Talent Reporting (CTR) as a 501(c)(6) nonprofit organization in August 2012 to continue to develop and promote the principles of TDRp. We hosted our first conference and our first workshops the following year. We also started our monthly webinars and blogs. And we never turned down an opportunity to speak and share TDRp with others.

It’s hard to believe that we have been at this for ten years. We’ve learned a lot along the way, so we captured all of our learnings in the book, Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy. Our intention is that the book reinforces the standards of the TDRp framework and will do for the L&D profession what GAAP has done for accounting: provide a common language; categories, names, and definitions of measures; standard reports; and guidance on what to measure and how to report.

The TDRp framework begins with four broad reasons to measure: inform, monitor, evaluate, and manage. There are many more detailed reasons to measure, but we believe a framework should have no more than five elements, or it becomes too difficult to remember and use. These four categories make it easier for us to have discussions about the reasons to measure, which is the starting point for any measurement strategy.

TDRp recommends three categories for measures. Historically, practitioners had divided measures into efficiency and effectiveness buckets which is what I did when I worked at Caterpillar. However, research and practice showed the need for a third category, not because of the number of measures in it but because of its importance. The third type of measure is “outcome,” which means the results or impact from the learning on the organization’s goals. This is the type of measure CEOs want to see the most, and yet it is seldom reported. This is the measure required to make the business case for learning.

Lastly, TDRp identifies five types of reports which contain the measures. Moreover, TDRp ties the type of report to the reason for measuring, which provides much-needed guidance for practitioners. Scorecards and dashboards are great for informing and can also be used for monitoring if thresholds are included. Program evaluation and custom analysis reports are ideal for sharing the results of a program or research. Management reports are the fifth type, and these have a special format to help leaders manage programs and initiatives to deliver planned results. They come in three varieties depending on the user and the need, but all have the same format, just like the basic accounting reports do.

It is our sincere hope that the TDRp framework and the guidance in Measurement Demystified will help the profession advance. Some are just beginning and should benefit significantly from a framework and guidance on how to select measures and reports. Others are further along and can benefit from the definitions of over 120 measures and the advanced report formats. Even experts may benefit from the detailed discussion of how to create plan numbers, use year-to-date results, and create forecasts.

And we are not done yet. We just submitted our draft manuscript for Measurement Demystified: The Field Guide to ATD for publication this December. We provide over 100 exercises to improve your understanding of the concepts and to give you practice in applying them. Hopefully, the Field Guide will significantly increase your skill level and your confidence, enabling you to create a much more robust measurement and reporting strategy.

We look forward to your continued engagement and feedback.

Running Learning Like a Business

The concept of “running learning like a business” continues to gain traction. It means different things to different people but in all cases involves bringing a more business-like perspective to the L&D function. Personally, I like to focus most on its implications for the actual management of learning programs and the L&D department.

So, you ask, what does this mean specifically? What would someone do differently? How can you tell if someone is running their operation like a business? I believe there are two primary elements to running learning like a business. First, you need to set specific, measurable goals or targets for every important measure. This becomes your business plan for a program or the entire department. This plan should be created just before the start of your fiscal year or no later than the first month of the new fiscal year. Second, once you have a business plan and the year is underway, you need to compare progress against plan every month and answer two key questions: 1) Are we on plan year to date? and 2) Are we going to end the year on plan? If you are not on plan or if it appears you may not end the year on plan, then you need to discuss options to get back on plan and decide whether to take corrective action. This is what we mean by disciplined execution.

Let’s look at each in more detail. First, you need a good plan. Of course, a lot goes into a good plan, beginning with an understanding of the organization’s goals and discussions with goal owners about whether learning has a role to play. Think of this as proactive, high-level performance consulting where you seek out senior leaders and engage in good, open discussion to explore the possibilities for learning to help them achieve their goals. Learning will not always have a role, but at Caterpillar we were able to contribute to each of the CEO’s top seven goals every year.

If learning can make a contribution, then you need to reach agreement with the goal owner (like the SVP of Sales) on what the learning program will look like: target audience, timing, learning objectives, type of learning, design, etc. More than that, you and the goal owner need to agree on measures of success (like the impact of learning on sales) and on targets for the key efficiency and effectiveness measures needed to deliver the agreed-upon outcome. This would include agreeing on plans for the number of participants, completion dates and rates, learning (level 2), and application (level 3). This needs to be bolstered by agreeing on roles and responsibilities: what each of you needs to do to deliver the planned measures. For example, how will the goal owner communicate the importance of the learning before the program and reinforce desired behaviors after it? (We had a written roles and responsibilities document at Caterpillar signed by both parties.)

Second, once the program is underway, you will begin to receive monthly data on the measures you and the goal owner agreed were important. You will need a monthly management report (called a program report) to compare year-to-date results to plan. If you are not on plan, you need to understand why. Perhaps it is just a matter of timing (for example, it took longer to launch than planned) and, left alone, the program will get back on track and meet plan expectations. This is why the forecast for how the year is likely to end is so important. If the forecast shows you delivering plan, no further action is required. However, if the forecast shows you falling short of plan by year end, then you need to consider taking corrective action as soon as possible. This may involve actions such as redirecting resources toward the program or coming up with new plans for the goal owner to reinforce the learning.
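As a rough illustration, the two monthly questions can be sketched in a few lines of Python. The 5% tolerance and the field names here are assumptions made for the sketch, not TDRp-prescribed values:

```python
def assess_program(ytd_actual: float, ytd_plan: float,
                   forecast: float, annual_plan: float,
                   tolerance: float = 0.05) -> dict:
    """Answer the two key monthly questions: are we on plan year to date,
    and are we likely to end the year on plan?

    A result counts as "on plan" when it falls within the given tolerance
    (default 5%, an illustrative threshold) of the plan number.
    """
    on_plan_ytd = abs(ytd_actual - ytd_plan) <= tolerance * ytd_plan
    on_plan_forecast = abs(forecast - annual_plan) <= tolerance * annual_plan
    return {
        "on_plan_ytd": on_plan_ytd,
        "on_plan_forecast": on_plan_forecast,
        # Corrective action is worth discussing only when the year-end
        # forecast shows the program missing plan.
        "consider_corrective_action": not on_plan_forecast,
    }

# Example: a program behind plan year to date (420 participants vs. 500
# planned) but forecast to essentially recover by year end.
status = assess_program(ytd_actual=420, ytd_plan=500,
                        forecast=980, annual_plan=1000)
```

In this example the program is off plan year to date, but because the forecast is within tolerance of the annual plan, no corrective action is flagged.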

Our focus so far has centered on what we call strategic programs. These are programs directly aligned with the goals of the CEO. However, there are typically other programs of great importance, in some cases even more important than strategic programs. Examples include onboarding, basic skills training, leadership, and compliance. The same thinking applies. Start by creating a plan for each program with specific, measurable goals. Then execute each plan throughout the year with discipline to deliver the planned results.

We have just discussed what it means to run learning like a business from a program perspective. The same discipline, however, can be applied to your CLO’s initiatives to improve results across all programs (like improving the overall application rate) or to improve internal process or system efficiency or effectiveness (like improving customer satisfaction with the LMS or reducing helpdesk wait time). In this case, the CLO needs to set specific, measurable goals for each measure she chooses to manage. These measures are captured in the monthly operations report which she and the directors can use each month to determine if the initiatives are on plan and likely to end the year on plan. If not, they need to decide what action to take.

So, this is what “running learning like a business” means to me. It is the hard work of managing L&D with the same type of discipline your colleagues use in sales or manufacturing, where they set annual targets and do their best to meet them. For many, this is a very different way of managing than what they do today. Most have plans of some sort, but many are not specific and measurable. Even some who have good plans do not execute them with discipline by using the monthly program and operations reports to compare progress against plan.

If you are managing this way today, congratulations! I know you see the value in this and ask you to share your stories with others so they can be inspired by your success. If you are not managing this way, consider giving it a try. You can start small. Try it for one program. And/or try it with the CLO to better manage a short list (3-5) of measures for improvement. I am convinced that running learning like a business was key to our success at Caterpillar, including support and funding from our CEO and senior leaders.

If you would like to hear more about this concept, join me for a panel discussion on March 23 at 4:15 p.m. ET hosted by Corporate Learning Week.

2020: The Year of Disruption

I think we’ll all be happy to put 2020 behind us. As of December 18, 2020, Covid has claimed more than 300,000 lives in the U.S. and disrupted the economy, our personal lives, and learning along the way. Despite the pandemic’s destruction, some good has come from the disruption—like the SEC rule mandating human capital disclosure.

The impact of the pandemic on learning has been massive. Organizations have been forced to shift from instructor-led training (ILT) to virtual ILT (vILT) supplemented by more eLearning and portal content. Some learning couldn’t be converted immediately, so learning opportunities and the associated productivity gains were lost. Because companies had to convert learning so quickly, much of the vILT didn’t meet quality standards. Despite this, learners appreciated the effort and were happy to have some learning rather than none at all.

During this time, the good news is that learning departments have demonstrated their value by offering sound strategic advice on how to manage remotely. This has been a huge win for many L&D departments and has dramatically increased their credibility and stature within the organization.

Just as the pandemic has caused us all to reconsider our work-life balance and the possibility of an ongoing work-from-home arrangement, it has also forced us to reconsider the mix of ILT versus vILT, eLearning, and content available in the LMS portal. Actually, this reconsideration is long overdue. Many organizations had little to no vILT before March 2020. Now, they have a lot of vILT, and the quality of the content is slowly improving as it is redesigned for virtual delivery. Live virtual training offers significant benefits, such as travel and facility cost savings. And, if properly designed, it may be able to accomplish the same objectives in less time. I don’t see any chance of organizations reverting to a pre-pandemic model that relied so heavily on ILT.

There is also an opportunity here to think even more broadly about restructuring your learning programs. Many organizations went into the pandemic with long onboarding and basic skills ILT programs. Some ran for a month or more, with a few lasting three, six, or even 12 months. Is this really the best way to learn? And how much of the content will be remembered and applied? Instead, why not use this opportunity to restructure the learning entirely? Reduce the ILT or vILT at the beginning to just the basics that learners need to get started. Supplement this with additional vILT or eLearning combined with performance support in the workstream at the time of need. For example, following the initial (now shortened) course, participants might take a short vILT course or eLearning module every month for the rest of the year, combined with performance support that is easy to access and provides just the tools they need at the moment of need.

Last, let’s not forget the SEC’s new rule mandating human capital disclosure. Over the coming years, this transparency will spread to all types of organizations and radically alter the expectation of both investors and employees for information on important human capital metrics like diversity, pay equity, leadership trust, and employee engagement. By 2030 people will wonder why this type of information was not always available. Disruption is coming, and there will be no going back.

So, take this opportunity as you plan for 2021 to continue to think outside the box, just as you have been forced to do this year. Don’t go back to the old ways even when offices reopen.  Double down on new ways of learning and embrace the opportunity to disclose your important L&D and HR metrics publicly.

Best wishes for the holidays and may everyone be healthy and safe.

SEC Publishes Final Rule on Human Capital Reporting

by David Vance, Executive Director, Center for Talent Reporting

The U.S. Securities and Exchange Commission (SEC) published its final rule on human capital reporting on August 26, 2020. This follows the proposed rule issued one year ago. The final rule makes very few changes to the proposed rule and mandates, for the first time, public reporting of human capital metrics by companies subject to SEC reporting requirements, which includes all US companies issuing stock, bonds, or derivatives. The rule becomes effective 30 days after publication in the Federal Register, which should happen in September 2020.

Today, companies have to report only one human capital metric: the number of employees. The new rule still requires the reporting of full-time employees but additionally encourages companies to report part-time and temporary employees if they are important to a company’s financial results.

More importantly, the new rule mandates for the first time that companies provide “to the extent such disclosure is material to an understanding of the registrant’s business taken as a whole, a description of a registrant’s human capital resources, including any human capital measures or objectives that the registrant focuses on in managing the business.”

The qualifying word “material” means anything that an investor would want to know before buying or selling a stock, bond, or derivative. The SEC goes on to specifically call out the areas of “attraction, development, and retention of personnel as non-exclusive examples of subjects that may be material, depending on the registrant’s business and workforce.”

SEC Chair Clayton commented in the public release, “I cannot remember engaging with a high quality, lasting company that did not focus on attracting, developing, and enhancing its people. To the extent those efforts have a material impact on their performance, I believe investors benefit from understanding the drivers of that performance.”

In other words, the SEC expects to see these three areas discussed and reported if they are material, and it is hard to imagine companies where they are not. Consequently, public companies will need to start disclosing and commenting on them as early as October 2020. And this is just the starting point. The rule calls for all material matters to be disclosed; therefore, each company will need to decide what other human capital matters might be considered material by an investor. Depending on a company’s situation, this might include total workforce cost or productivity, diversity (especially at the leadership level), and culture (revealed by employee engagement and leadership surveys). Discussion may also be required about the implementation of a new performance management system or a significant change in compensation and benefits philosophy.

Where can companies get guidance on the specific metrics they might use to meet the new rule? The 2018 recommendations by the International Organization for Standardization (ISO) include 10 metrics for public reporting by all organizations and an additional 13 for reporting by large organizations. ISO also recommends 36 other metrics for internal reporting, organized by area or cluster. The ISO metrics recommended for all organizations in the three SEC focus areas include:

  • Attraction: Time to fill vacant position, time to fill critical vacant positions, percentage of positions filled internally, percentage of critical positions filled internally
  • Development: Development and training cost, percentage of employees who have completed training on compliance and ethics
  • Retention: Turnover rate

These metrics provide a starting point for a company’s human capital reporting strategy to meet SEC requirements. Additional metrics are available from ISO for the area of development, including percentage of employees who participate in training, average hours of formal training per employee, percentage of leaders who participate in training, and percentage of leaders who participate in leadership development.
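To make that starting point concrete, here is a minimal sketch organizing the focus-area metrics above into a simple reporting structure and computing one of them. The metric names paraphrase the lists above, and the turnover formula is a common HR definition, not quoted from the ISO standard:

```python
# Illustrative grouping of the ISO-recommended metrics for the three SEC
# focus areas. Names paraphrase the article; this is not the ISO text.
SEC_FOCUS_METRICS = {
    "attraction": [
        "time to fill vacant positions",
        "time to fill critical vacant positions",
        "percentage of positions filled internally",
        "percentage of critical positions filled internally",
    ],
    "development": [
        "development and training cost",
        "percentage of employees completing compliance and ethics training",
    ],
    "retention": ["turnover rate"],
}

def turnover_rate(separations: int, average_headcount: float) -> float:
    """Annual turnover rate as a percentage of average headcount.

    A common HR formula, used here as an assumption; check the ISO
    technical specification for the standard's exact definition.
    """
    return 100.0 * separations / average_headcount

# Example: 120 separations against an average headcount of 1,000.
rate = turnover_rate(120, 1_000)  # 12.0%
```

A structure like this makes it easy to review, focus area by focus area, which recommended metrics a company can already produce and where data gaps remain.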

In addition to the metrics for the three focus areas, ISO recommends a number of others that could address areas of material concern such as workforce cost, productivity, diversity, and leadership trust. Companies will need to identify their own metrics for areas not addressed by the ISO and may wish to supplement with other data (such as employee engagement). They will also need to report on material initiatives which may not have metrics.

Publication of the final rule by the SEC, combined with the comprehensive recommendations by ISO, ushers in a new era of transparency in human capital which will fundamentally change the way organizations operate. And these changes will go far beyond US publicly traded companies. In 5-10 years, privately held companies, nonprofits, and other types of organizations will be compelled (or shamed) into adopting the same level of transparency.

In the future of human capital transparency, what investor will buy stock in a company that refuses to disclose material human capital information, which is already the primary driver of value in many companies? What employee will work for an organization that refuses to share its key human capital metrics because they are embarrassed to share? Why would anyone work for an organization where the culture is terrible, they don’t invest in their people, turnover is high, and there are not enough employees to do the work? Today, you can say you didn’t know. Tomorrow, you will know.

Change is coming. The new world of human capital transparency has arrived.

Learn More

The final rule on human capital reporting is part of a much larger revision to what is called Regulation S-K, which governs what companies must disclose to investors initially (form S-1), quarterly (form 10-Q), and annually (form 10-K). The name of this effort, which has been underway at the SEC for four years, is Modernization of Regulation S-K Items 101, 103, and 105. Item 101 governs the description of the business, and this is where the new rules for human capital reporting are found. (Item 103 covers legal proceedings; Item 105, risk factors.) Read the rule here (pages 45-54 cover human capital).

The ISO 30414:2018 Human Resources Management—Guidelines for Internal and External Human Capital Reporting is available for purchase. ISO is just beginning to release technical specifications on how the metrics are defined and used. By mid-2021, specifications should be available for all 59 metrics. The specifications for five of the eight training and development metrics are now complete and in publication. They should be available soon.

Attend Our Free Webinar On Public Reporting

Join us September 16 for our Public Reporting webinar, where we cover both the SEC rule and the ISO recommendations in greater detail.

Webinar Registration

Attend our Free Virtual Conference

We will be covering the subject of human capital reporting at length at our Virtual Conference, October 27-29. Jeff Higgins, Adjunct Professor – Data Science (DSO), Human Capital Analytics at the University of Southern California – Marshall School of Business, has been actively involved in both the SEC and ISO efforts and will be keynoting on the subject. We will also have a panel on the SEC and ISO as well as a follow-up session on the recommended metrics.

Conference Registration

Why We Need Human Capital Metrics and Reporting in Company Disclosures

by Jeff Higgins
Founder and CEO, Human Capital Management Institute

Can we agree that the coronavirus pandemic is above all a human crisis requiring people-centric measurement and solutions?

The last great crisis in the U.S. was the financial crisis of 2008-2009. If the pandemic is a uniquely human crisis, then the most powerful driver of our economic recovery is what we do next with our human capital and people practices.

Given the importance of people and human capital practices in organizations, does it not make sense that such a critical resource and source of value creation be included in public disclosures?

Without standard human capital metrics, everyone is flying blind (risk planning, investors, policy makers, and corporate executives).

Human capital risk planning is a new focus for many companies. It’s a topic frequently mentioned but has seen little action beyond companies adding boilerplate legal risk language in public reporting.

Reliable, validated human capital reporting has long been a black hole for investors, including institutional investors representing employee pension funds. This may change as investors and policy makers press for insightful evidence of risk mitigation and future sustainability, including global standards like the ISO 30414 Human Capital Reporting Guidelines (December 2018) and the US SEC proposed rule on human capital disclosure (August 2019). Recently, the SEC has reaffirmed its desire for enhanced disclosure and more focus on workforce health and welfare.

“We recognize that producing forward-looking disclosure can be challenging and believe that taking on that challenge is appropriate,” explained the SEC in an April 2020 letter.

While critical for investors, human capital reporting is just as important for employees. Think about a person deciding where to work. Wouldn’t they want to know key human capital metrics to make their decision, such as the diversity of employees and leaders, the amount invested in developing employees, the percentage of employees and leaders who take advantage of training, the retention or turnover rate, and culture measures, such as leadership trust index or employee engagement score?

It is critical to note that organizations disclosing more about their human capital have historically performed better in the market.

Reliable human capital data is often lacking yet raw data is readily available, along with proposed ISO HC reporting guidelines.

The enhanced usage of people data and analytics has been a positive business trend long before the COVID-19 crisis. But stakeholders, including executives and investors, have not always had information that was relevant, valuable, and useful for decision making.

Turning people data into decisions requires skill, effort, discipline, and money, all of which are lacking in many HR departments. Priorities have often been elsewhere, in building employee engagement and experience metrics along with other “hot” technology-driven vendor offerings.

Even experts acknowledge that the value of human capital is not always easy to measure. Nevertheless, ISO guidelines do exist for internal and external human capital reporting. Until now, relatively little has been offered publicly in most U.S. companies.

“We may not always want to know what the data tells us,” said one member. As we have seen with the COVID-19 crisis, not wanting to know what the data tells us can be a recipe for disaster, and not one that leads to economic recovery.

The Need for Transparency Has Never Been Greater

Many of the critical metrics and practices needed most are not transparent to the very stakeholders (such as employees, investors, and government policy makers) who must start and sustain the economic recovery.

Moving forward, enhanced transparency around workforce management and human capital measures will be needed. Without such data, organizations will have a difficult time adapting to changing markets and workforce needs and building critical partnerships. Such data is vital to building trust within the workforce and attracting, engaging, and retaining talent.

Increasingly, as organizations collect more data about their workforce, employees will want to benefit from that data.

“We saw less concern about privacy when we saw the workforce data was shared and being used to protect the workforce,” said one member.

Investing In and Developing the Workforce Has Never Been More Important

The power of learning, development, and people investment has never been greater nor more directly tied to economic results. A greatly changed world requires a new level of learning investment, adoption and innovation.  

Will organizations see the need for learning investment and people development as a continuing need? Will they measure it to better manage it? Will they begin to see human capital measures as a means for learning about what is working and what is not? Will they begin to see human capital reporting as a leading indicator of recovery? 

2020 CTR Annual Conference


Join us at the 2020 Virtual CTR Annual Conference for this session, where industry experts share the definitions of the 23 ISO disclosure metrics and how to use them in your human capital reporting strategy.