Latest Resources

How to Improve Employee Engagement

by Maria A. Febres-Cordero, Channel & Events Marketing Manager, Explorance

The Annual CTR Conference is almost here! Attendees can once again look forward to an engaging three-day event featuring breakout sessions, keynotes, and informative discussions with the learning industry’s best minds. The theme of this year’s conference is “Building Sustainable Measurement Capability”, putting an emphasis on realizing long-term improvements in how HR and talent are managed and measured.

HR teams are increasingly challenged to substantiate how they contribute to business goals and demonstrate real, ongoing benefits to the organization. Explorance will be contributing two special sessions on how to realize these long-term benefits.

Two Approaches Lead to Similar Benefits

Both of Explorance’s sessions focus on building value and insight for HR. The strategies outlined in each session are about getting more value from employee feedback and assessing where the real value in current policy and programs is.

Creating a Measurement Strategy To Tell Your Value Story
November 2 | 1:40 pm EDT
Presenters: Jennifer Balcom, Director of Consulting, Explorance & Lan Tran, Head of Governance, Technology, and Operations, Kraft Heinz

This session provides insights into how Explorance and Kraft Heinz partnered to create a measurement strategy that answers the key question: Did learning make an impact and positively affect behavior outcomes? This session outlines the approach, tools, and frameworks used in building this strategy.

LEARN MORE

From Text Analytics to Comment Analysis: How AI Transforms Your Learning Measurement Qualitative Strategy
November 3 | 12:05 pm EDT
Presenter: Steve Lange, Senior Consultant, Explorance

This session introduces the next generation of HR-trained comment analysis tools and how they provide deep insights into qualitative feedback. Pulling from real-world experiences and examples gleaned from the development of Explorance’s BlueML, a comment analysis solution, Steve will reveal the level of sentiment analysis and predictive analytics that is available to provide rapid, action-focused analysis at scale.

LEARN MORE

As we progress through a period of heightened employee turnover, increasingly referred to as ‘The Great Resignation’, the ability to quickly and effectively respond to employee concerns and sentiment as they relate to training is vital. These two sessions ensure you are up to speed with the latest developments and effective strategies. Leaders who adopt an effective measurement strategy and can derive value from qualitative employee comments will be better positioned to tackle the current challenges of the talent market.

We can’t wait to see you at CTR’s 8th Annual Conference!

 

The L&D Industry’s Answer to Measuring Leadership Development

by Kent Barnett, CEO, Performitiv

I think we would all agree that as an industry we do a very poor job of measuring, communicating, and improving the value of leadership development programs. I think most, if not all of us, would also agree that leadership development is one of the most strategic business processes in our respective organizations.

So, why is it that we do such a poor job in this area? The answer is that it’s hard and confusing. The good news is that dozens of leading learning organizations and experts have come together to create a systematic framework to measure leadership development. It is designed to work for both large and small organizations, to be flexible, and to be easy to get started with.

As many of you know, the Talent Development Optimization Council (TDOC) was created over three years ago to address two primary issues:

  1. Better Communicate Learning’s Value
  2. Optimize Learning’s Impact

TDOC created the Net Impact System (NIS) to provide a systematic approach to address these two issues. Organizations that have started to apply the NIS principles are seeing huge gains in business results, effectiveness, scrap reduction, outcome indicators, and Net Promoter Score. TDOC is now focused on leadership development.

Attend Kent Barnett's Session, Measuring Leadership Development

Join Kent Barnett at the 2021 CTR Annual Conference where he will teach you how to build world-class dashboards and management reports for leadership development. If you are interested in learning how to use metrics to demonstrate value, then you can’t miss this event!

When: November 2, 2021
Time: 3:00 – 3:35 pm (EDT)
Speaker: Kent Barnett, CEO, Performitiv

Register for this Event Now!

About Performitiv

Performitiv provides the technology, methodology, and expertise to help understand learning effectiveness, business impact, and continuous improvement opportunities. Their technology helps to streamline, automate, and upgrade evaluation processes to save time, simplify data collection, and improve the overall effectiveness of L&D operations.

Make or Buy? Not as Simple as It Sounds

We are often faced with a decision whether to design, develop, and deliver a course in-house or pay an outside party. This is a classic “make or buy” problem which turns out to be more complicated than most think. Because some critical costs are often omitted in the calculation of the make option, many practitioners choose the make option when the buy option is actually less expensive. Of course, there are reasons other than cost for developing a course internally, but we will focus only on cost in this article.

I’ll begin with an example. Suppose I want to develop and deliver a two-hour course which will take 100 hours of staff time to design, develop, and deliver. The labor cost for the 100 hours is $30 per hour for a total of $3,000. Assume the instructor travel expense is $1,500 and materials are an additional $500. We will assume there is no room rental charge and no other direct expenses. Furthermore, suppose I have a vendor willing to deliver the same course for $7,500. Based on cost alone, which option should I choose?

At first glance, the answer appears obvious. The internal cost comes to $5,000, and if I choose this option, I save $2,500. Is that the correct decision though? Maybe not. First, the $30 per hour may represent salary only, neglecting benefits like the employer-paid portion of FICA, health care, pension, unemployment, and other costs associated directly with the employee’s salary. These are called “related costs” and typically are about 25 percent of the hourly rate for management employees. Adding these in, we have a labor and related rate of $37.50 per hour and a labor and related cost of $3,750. You may be thinking that even with the additional $750, the decision still favors the make option, but we’re not finished.

We also need to add the burden rate and here is where it gets complicated. The burden rate takes two things into account. First and the more obvious of the two, burden accounts for the overhead required to support the department. This includes office rental space, utilities, computers, copy machines, office supplies, telephone, etc. as well as items like travel for conferences and education, subscriptions, fees, and consulting not directly related to a course. We include as overhead only those costs not directly associated with or attributable to projects.

This cost needs to be spread across all of the hours attributable to direct work. Typically, an employee works around 70 percent of their 2080 annual hours (52 weeks x 40 hours per week). The rest is spent on holiday, vacation, sick time, staff meetings, performance reviews, general planning, training and development, managing others, etc. So, we divide the total overhead burden for the department by the sum of the attributable hours, not the sum of all hours. This is often close to the average hourly rate, so for this example let’s say it’s $25.

But we’re still not finished adding internal costs. The second component of burden is the cost of the non-attributable hours discussed above. In other words, what is the labor and related cost of the 30 percent of total hours employees do not spend directly working on task? Employees still need to be paid for 40 hours per week even though they are not working on task for all of these hours. This money has to come from somewhere. Those of you who are consultants understand this well. You have to generate enough revenue to cover your time when you’re not on a billable project. This becomes an important part of burden and is not a small number. We divide this additional burden by the number of attributable hours to get the second component of the burden rate. Let’s say this comes to $15 per hour.

In this example, the two components of burden add up to $40 per hour, which is more than the labor rate and not unusual. Now, our fully burdened labor and related rate is $30 + $7.50 + $25 + $15 = $77.50 per hour, which is more than double the labor-only rate of $30 per hour. This pushes the make option to $7,750 for labor plus travel ($1,500) and supplies ($500) for a total cost of $9,750. It turns out that the less expensive option is to buy rather than make.
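To make the arithmetic easy to reuse, here is a minimal sketch of the comparison in Python. The rates, hours, and vendor quote are simply the illustrative figures from the example above, not benchmarks, so substitute your own numbers.

```python
# Illustrative make-vs-buy comparison using the example figures from the article.
# All rates, hours, and the vendor quote are assumptions for illustration only.

def fully_burdened_rate(labor_rate, related_pct, overhead_burden, nonattributable_burden):
    """Hourly rate including related costs and both burden components."""
    return labor_rate * (1 + related_pct) + overhead_burden + nonattributable_burden

def make_cost(hours, hourly_rate, travel, materials):
    """Internal ("make") cost: burdened labor plus direct expenses."""
    return hours * hourly_rate + travel + materials

rate = fully_burdened_rate(labor_rate=30.00, related_pct=0.25,
                           overhead_burden=25.00, nonattributable_burden=15.00)
make = make_cost(hours=100, hourly_rate=rate, travel=1500, materials=500)
buy = 7500  # vendor quote

print(f"Fully burdened rate: ${rate:.2f}/hour")    # $77.50/hour
print(f"Make: ${make:,.0f} vs. Buy: ${buy:,.0f}")  # Make: $9,750 vs. Buy: $7,500
print("Less expensive option:", "make" if make < buy else "buy")
```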

As you can see, there is more to the calculation than at first glance. Are you making these calculations in your organization? Do you know your total burden cost and the number of attributable hours? Many don’t and consequently think they are saving their organization money by developing and delivering courses using internal staff when it would be more cost-efficient to hire out.

Attend the Make or Buy Session at the CTR Annual Conference

I’ll be covering make or buy in more detail during the CTR Annual Conference on November 3. Click here to learn more and to register. Hope to see you there!

Why Does L&D Measurement Remain So Difficult?

Why does L&D measurement remain so difficult? Put a different way, why hasn’t the profession made greater progress over the last ten years? ATD research shows clear progress was made from 2001 to 2009 in the percentage of organizations using higher-level program evaluations (Levels 3, 4, and 5), but little has changed since then. We continue to hear from a majority of colleagues how difficult measurement is.

Many explanations have been offered; however, the most common is that practitioners lack the knowledge. They don’t know what to measure, how to measure, or what to do with the measures once they have them. For program evaluation, though, this is hard to understand because so many books have been written and so many workshops offered on the subject.

At a broader level, however, I think the profession has lacked a comprehensive framework that addresses how and what to measure for all the reasons to measure—not just program evaluation. We know that most measurement and reporting activity is focused on informing and monitoring, not on program evaluation. Just think of all the scorecards and dashboards you use; these are not for program evaluation. And there has been no guidance whatsoever on reporting and on how to choose the most appropriate report based on the user and reason to measure.

Talent Development Reporting Principles (TDRp) were created specifically to meet the need for an overarching measurement framework. Peggy Parskey and I wrote the book Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy to share this framework and guidance. Our hope is that practitioners will now have the answers to their questions about what and how to measure, and how to report their results once they have them.

Knowledge and lack of a comprehensive framework, however, probably don’t explain all the discomfort with measurement. For some, measurement implies accountability, and many don’t want to be accountable for results. After all, what if we measure and find that application rates are low, impact is negligible, and ROI is low or negative? What if we find we have poor rates of on-time completion for development and delivery or a high cost per learner? This is a culture issue within L&D and more likely within the entire organization that cannot be solved by reading books or attending workshops. Leaders, starting with the CLO, need to make it clear that they want to measure in order to deliver better results and learn what is working so that opportunities for improvement can be identified.

Even if the issues of knowledge and culture are addressed, there may still be a hesitancy to measure and report due to a lack of confidence. This is typical for analysts and others new to their position who simply don’t have much experience with measurement and reporting. They have read the books and attended the workshops, and they may have a supportive culture, but they just aren’t sure about the right measures to use or how to calculate and report on them. Unfortunately, the only way to gain confidence is to do the work. Mistakes will be made, but we all make mistakes and the goal is that we learn from them. A supportive boss or a good coach or mentor can help, but often that is difficult to find.

Another possible explanation is turnover. Like other professions, it takes years to acquire all the practical knowledge needed to excel at a job. I am speculating, but could it be that those responsible for measurement and reporting don’t stay in the position long enough to truly master it with confidence?

Finally, I’ve heard that measurement and reporting are difficult because practitioners don’t have the resources to do a good job. They have a small budget for measurement and not enough staff. In many cases, this reflects a lack of appreciation for measurement and reporting by the CLO and other senior leaders, which in turn may reflect a lack of knowledge or accountability on their part.

By the way, these same issues apply across all of HR regarding measurement and reporting, so it is not just L&D.

If you’re committed to making measurement and reporting a part of your L&D strategy, please join us at our virtual conference November 2-4 to explore this issue in more detail. Kimo Kippen will lead a panel discussion on this very topic on November 3 at 2:45 pm ET, and Michelle Eppler from the Human Capital Lab at Bellevue University will share results from a survey she just completed to probe the reasons. Registration is free.

Mid-Year Check Up on Measurement and Reporting

Since we are well into the month of July, I thought it the perfect time to reflect on what we have accomplished so far this year in the world of measurement and reporting.

If you are running learning like a business, that means you set specific, measurable plans or targets for your key measures. How are they doing compared to the plan? Are they where they need to be if you are going to make the plan by the end of the year? If you have started to provide forecasts for how the year will end, you can just compare the plan to the forecast. If you are not yet at the stage of making forecasts, just ask yourself if you are on track to make the plan by the end of the year. Even if it appears you will not make the plan on your current trajectory, the good news is that you still have six months to do something about it. What steps can you take now to get back on plan?
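For those who like to see this check expressed in code, here is a minimal Python sketch of comparing year-to-date results to plan and producing a naive run-rate forecast. The measure names and numbers are hypothetical, and your own forecast will likely be more sophisticated than a simple run rate.

```python
# Mid-year check: compare YTD actuals to the annual plan and project a naive run-rate forecast.
# Measures and values are hypothetical; replace them with your own plan and data.

plan = {"participants": 12_000, "completion_rate": 0.85, "cost": 2_400_000}
ytd = {"participants": 5_100, "completion_rate": 0.78, "cost": 1_150_000}
months_elapsed = 6

for measure, annual_plan in plan.items():
    actual = ytd[measure]
    if measure == "completion_rate":
        forecast = actual                        # rates are not cumulative; compare directly
    else:
        forecast = actual / months_elapsed * 12  # project the current monthly run rate
    # For cost, being at or under plan is good; for the others, at or over plan is good.
    on_track = forecast <= annual_plan if measure == "cost" else forecast >= annual_plan
    print(f"{measure}: plan={annual_plan}, YTD={actual}, forecast={round(forecast, 2)}, "
          f"{'on track' if on_track else 'needs attention'}")
```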

Even if you did not set plans for measures, this is a perfect time to see what you can learn from the first six months. Here are some measures to check:

  • Completion rate: Are you getting the completion rates for formal learning programs that you would expect? Are there significant differences between programs or audiences?
  • Reach: Are you reaching as many employees with learning as you hoped? What percentage of employees have been enrolled in formal learning? How many have engaged in some informal learning?
  • Cost: Do you know what your costs are for the first six months? Are you on budget? Where are the surprises and what can you learn from them?
  • Total Participants: While reach measures whether an employee has taken at least one course or touched at least one informal learning asset, total participants counts employees each time they take a course or interact with a learning asset, so one employee may be counted multiple times. Your reach may be meeting expectations while total participation is not. Are you satisfied with your total participation?
  • Usage: Closely aligned to participation is your usage of online and virtual courses as well as informal learning like communities of practice, portal content, and performance support tools. Are you getting the usage you want?
  • Level 1: Is your participants’ reaction to the learning at an acceptable level? Did it drop when you switched from ILT to VILT? How do your L1 scores compare for VILT and online? Are you measuring L1 for your informal learning assets? How do they compare to formal learning? Where are your opportunities for improvement?
  • Level 2: Are you satisfied with the test scores for your VILT and online courses? Are the VILT scores lower than you were getting for ILT? If so, does the content or instruction need to change?
  • Level 3: Are your application rates where they should be, or do you have too much scrap learning? Did L3 decline during COVID? If it did, what could you change to improve?
  • Employee Engagement: If you are measuring employee engagement quarterly, has employee engagement with their learning changed with COVID? Are employees happy with the learning opportunities available to them? If it dropped initially after employees began to work from home, has it come back to an acceptable level?

These are just some of the questions that you should consider answering with your data. Remember that a key aspect of running learning like a business is learning from the data. Mid-year is a perfect time to see what you can learn from the first half of the year and then use that knowledge to make improvements in the second half for a strong finish.

Are We Overusing Dashboards?

Dashboards have become increasingly popular, especially those with well-designed visual elements. For many applications, they do represent a great advance from the more boring scorecards filled with rows of data. That said, the question now is whether we have gone too far and are relying too much on dashboards when in fact other types of reports would be better. My answer is yes.

Many practitioners today appear to believe that dashboards are the best, if not the only, way to share data, and this is a problem. It would be better for us as a profession to utilize many different types of reports and tailor the type of report to the specific need of the user, which in turn requires us to think more carefully about the reasons for measuring in the first place. We describe four broad reasons to measure in our new book, Measurement Demystified, and each of these four reasons is linked to the type of report best suited to meet the user’s needs. The dashboard is suited to only two of these four reasons.

The first reason to measure is to inform. This means the measures will be used to answer questions from users and discern if trends exist. The question may be about the number of participants or courses, or perhaps about the participant reaction or application rate. In any case, the user just wants to know the answer and see the data. If they want to see it by month and especially if they want to see subcategories (like courses by type or region), a traditional scorecard will be best with rows as the measures and columns as the months. If the user is interested in year-to-date summaries and more aggregate data as well as some visual representations, a dashboard will be best. So, even for this one reason (inform), the best report depends on what the user wants to see. These may be one-off reports or reports that are regularly updated.

A second reason to measure is to monitor. This occurs when a user is happy with how a measure is performing and wants to ensure the value remains in an acceptable range. For example, participant reaction scores may average 80% favorable and the CLO wants to ensure they stay above 80%. In this case, a dashboard with thresholds and color coding is a perfect way to share the measures. This may be the only element in the dashboard or it may be combined with some other elements. This type of dashboard should be generated monthly.

The third reason to measure is to evaluate a program and share the results. In this case, a program (like sales training) has been completed and the users desire to evaluate and share the results with others. This is a one-off report designed to be used at the end of a program or perhaps at the completion of a pilot. In this case, a dashboard should not be used. Instead, a program evaluation report would be best: one that takes the audience through the need for the training, the planned results, the activities completed, the actual results, and lessons learned. The report will probably be a PowerPoint but could be a written document.

The fourth broad reason to measure is to manage. In contrast to monitoring, managing means that a goal has been set to improve the value of a measure, perhaps increasing the application rate from 40% to 60% or reaching an additional 2000 employees with learning. If monitoring is about making sure the status quo is maintained, managing is about moving the needle and making progress. In this case, a dashboard should definitely not be used because it would not convey the key information or the detail needed to make management decisions.

Every month a manager needs to know whether their efforts are on plan and whether they are likely to end the year on plan. For this, they need a management report which includes the plan or target for the year, year-to-date results, and a forecast of how the year is likely to end if no additional actions are taken. This type of information is very difficult to share in a dashboard format which is why special-purpose management reports have been designed for L&D. These are generated monthly and focus on both specific programs and aggregated department results. In contrast to dashboards and program evaluation reports, these management reports are not meant to be “presented” but to be used in working sessions to identify where action is required.

In conclusion, dashboards have their place but should not be the only type of report generated by L&D. Dashboards are recommended in two cases: 1) when the reason to measure is to inform and the user wants summary data along with visual elements, or 2) when the reason to measure is to monitor in which case thresholds will need to be included. Dashboards are not recommended when the reason to measure is to inform and the user wants detailed, monthly data. In this case, a scorecard is preferred. Nor is a dashboard recommended to share program evaluation results or to manage programs or the department. In each of these cases, better report types exist and should be employed (program evaluation and management report respectively). Bottom line, it is important to use the right type of report which should match the reason for measuring.
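For readers who think in code, the guidance above can be captured in a small lookup. This sketch simply restates the article’s recommendations in Python; it is illustrative only and not an official TDRp tool.

```python
# Report-selection guidance from the article, expressed as a simple function.
# Illustrative only; the mapping restates the recommendations in the text above.

def recommended_report(reason, wants_monthly_detail=False):
    """Return the report type suggested for a given reason to measure."""
    if reason == "inform":
        # Detailed monthly data with subcategories -> scorecard; summaries with visuals -> dashboard.
        return "scorecard" if wants_monthly_detail else "dashboard"
    if reason == "monitor":
        return "dashboard with thresholds and color coding"
    if reason == "evaluate":
        return "program evaluation report"
    if reason == "manage":
        return "management report (plan, year-to-date results, forecast)"
    raise ValueError(f"unknown reason to measure: {reason}")

print(recommended_report("inform", wants_monthly_detail=True))  # scorecard
print(recommended_report("manage"))                             # management report (...)
```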

When it Comes to Learning Measurement—What is Good Enough?

When it comes to learning measurement—what is good enough?

This is a question that needs to be asked much more frequently. This is especially true in the case of isolating the impact of learning (Phillips’s level 4) or talking about the accuracy of an ROI calculation. Context is key. Business economists think about the answer to this question all the time because the one thing they know for sure is that their forecast (for GDP, sales, housing starts, commodity prices, exchange rates, etc.) will be wrong. Only by chance will the forecast be exactly right. So, the question is not whether the forecast will be exactly right but rather whether it will be close enough to make the right decision. For example, should we raise production, should we hire more workers, should we invest in A rather than B?

We need to apply this same type of thinking in learning. We need to start with the context and the reason for measuring. What decision are we trying to make or what will we do with the estimate once we have it? Given the answers to these questions, how close does our estimate of impact or ROI need to be? I cannot think of a single instance where the estimate needs to be perfect. It just needs to be good enough to help us make the right decision or take the right course of action.

So, let’s step back for a minute and ask why we might estimate impact or ROI. First, I think we would all agree that we want to identify opportunities for improvement. If this is the context, how accurate does our estimate of impact need to be? In this case, the estimate just needs to be roughly right or “in the ballpark”. For example, if the true (but unknown) ROI is 20%, we would like an estimate in the 10%-30% range. Typically, we would conclude that an ROI in this range has opportunity for improvement. Similarly, if the true ROI were 100%, we would want our estimate to be in the 70%-130% range, and we would likely conclude that no improvement is necessary.

Will the standard methods to isolate impact (control group, trendline analysis, regression, or participant estimation methodology) be good enough for this purpose? I believe so. We simply need to know whether improvement is required, and we want to avoid making improvements when none are needed or failing to make an improvement when one is needed. In other words, if the ROI is truly 10%, we don’t want an estimate of 100%, and vice versa. The standard methods are all good enough for us to make the right decision in this context.

Now, suppose the reason to measure impact and ROI is not for improvement but to demonstrate the value or effectiveness of the program. At a minimum we want to be sure we are not investing in learning that has no impact and a negative ROI. This is a bit more demanding but the same logic applies. We want the error margin around the estimate to be small enough that we can use the estimate with confidence. For example, if the estimate for ROI is 10%, we want to be confident that the error margin is not plus or minus 10% or more. If it is, we might conclude that a program had a positive ROI when in fact it was negative.

In this context, then, we want to be more confident in our estimates than in the first scenario. Stated differently, we want smaller error margins. We will use the same four methods, but we need to be more thoughtful and careful in their use. We would have the most confidence in the results obtained using a control group as long as the conditions for a valid control group are met, so extra care needs to be taken to make sure the control group is similarly situated to the experimental group. Trendline and regression also can produce very reliable estimates for the “without training” scenario if the data are not too messy and if the fit of the line or model is good. All three of these methods are generally considered objective and, when the conditions noted above are met, should produce good estimates of the impact of learning with a suitably narrow error margin.
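To make the trendline approach concrete, here is a minimal sketch with invented numbers: fit a line to pre-training results, extrapolate a “without training” baseline over the post-training months, and attribute the difference between actuals and that baseline to the program. A real analysis would also check how well the line fits the pre-training data.

```python
# Trendline isolation sketch: extrapolate pre-training results as the "without training"
# baseline and credit the difference from post-training actuals to the program.
# All numbers are invented for illustration.
import numpy as np

pre_months = np.arange(1, 7)                               # months 1-6, before training
pre_results = np.array([100, 103, 105, 108, 110, 113])     # e.g., units sold per month

slope, intercept = np.polyfit(pre_months, pre_results, 1)  # simple linear trend

post_months = np.arange(7, 13)                             # months 7-12, after training
baseline = slope * post_months + intercept                 # expected results without training
post_actuals = np.array([120, 124, 127, 131, 134, 138])    # observed results

impact = post_actuals - baseline
print("Estimated monthly impact:", np.round(impact, 1))
print(f"Estimated total impact over six months: {impact.sum():.1f} units")
```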

The participant estimation method is the most widely used because no special statistical expertise is required and because often there are no naturally occurring control groups. However, it does rely on the subjective estimates of the participants. Accordingly, we will want to be sure to have 30 or more respondents and, ideally, we will obtain their estimates of impact about 90 days after the training. It is also critical to adjust their estimate of impact by their confidence in the estimate. When this methodology is used as described by the Phillipses, it, too, should produce estimates reliable enough to be close to the actual but unknown impact and ROI.
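As a simple illustration of the confidence adjustment, the sketch below multiplies each respondent’s estimate of benefit by their confidence in that estimate before totaling. The respondent figures and program cost are invented; in practice you would want 30 or more responses collected roughly 90 days after the training.

```python
# Confidence-adjusted participant estimation (illustrative numbers only).
# Each respondent's estimated monthly benefit is discounted by their confidence.

respondents = [
    # (estimated monthly benefit attributable to training in $, confidence 0-1)
    (2000, 0.80),
    (1500, 0.60),
    (3000, 0.50),
    # ...ideally 30 or more respondents, surveyed about 90 days after training
]

adjusted_monthly_benefit = sum(benefit * conf for benefit, conf in respondents)
annual_benefit = adjusted_monthly_benefit * 12  # assume the benefit recurs monthly
program_cost = 40_000                           # fully loaded cost, also invented

roi = (annual_benefit - program_cost) / program_cost * 100
print(f"Confidence-adjusted annual benefit: ${annual_benefit:,.0f}")  # $48,000
print(f"ROI estimate: {roi:.0f}%")                                    # 20%
```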

The common theme in both scenarios is good enough. At Caterpillar we conducted about three impact and ROI analyses per year using the participant estimation method. We used the results to both show the value of the programs and to identify opportunities for improvement. We always presented the results with humility and acknowledged the results were estimates based on standard industry methodology with adjustments for self-reported data. We had confidence that the estimates were good enough for our purposes and we never received any pushback from senior leadership.

So, remember that your results do not need to be perfect, just good enough.

Where Measurement and Reporting Strategies Go Wrong


by Peggy Parskey, CTR Assistant Director

Imagine you are a manager in a Learning and Development function in your company. Each month, you receive several reports (or links to a dashboard) with a plethora of data. The reports focus on the function overall, but with a bit of digging you can find the data relevant to you. While the reports and dashboard provide a lot of information, you lament that they don’t help you manage your operation. Beyond the inadequacy of the reporting, you believe the organization doesn’t measure the right things or provide enough insights about improvement opportunities. You suggest to your CLO that the organization needs a measurement and reporting strategy.  The CLO agrees and delegates the job to your measurement person if you have one, or lacking that, to you. (Since you asked, you own it.)

Having never created a measurement and reporting strategy, you conduct a Google search.  Before November 2020, your search would have surfaced ads for product or service companies at the top, followed by a few white papers from these same companies or blogs with best practices and advice. Some content focused on measurement while others only addressed reporting.  Few, if any, provided a checklist of elements to include in your strategy. Lacking any meaningful guidance, you cobble something together from the disparate pieces of information found on the web.

This hypothetical manager exists in organizations across the globe. Over the past 5-10 years, I have reviewed dozens of client-generated strategies. Most have some of the components of a strategy, but nearly all lack the critical elements needed to advance measurement capability in the organization. In November 2020, David Vance and I published Measurement Demystified with the express purpose of providing practical guidance on how to create a robust measurement and reporting strategy.

In this post, I’ll give you an overview of what comprises a well-designed strategy. But before sharing that with you, I’d like to review what a measurement and reporting strategy is not. Each bullet below represents something I’ve seen over the years as clients have shared their strategies.

  • Focused on reporting rather than measurement and reporting: You need both: what and how you measure as well as what and how you report. Reports are an output. Without the appropriate inputs (that is, the measures), you will be challenged to meet user needs.
  • Kirkpatrick’s four levels or Phillips’s five levels: Both frameworks provide important guidance and processes to select and report effectiveness measures. However, neither framework addresses the efficiency measures you should choose. Moreover, while both address outcome measures, they don’t provide detailed guidance on when and how to use them. They are an essential element of your strategy, but they are not the strategy.
  • One-size-fits-all approach to measurement: To avoid implementing an unsustainably complex approach to measurement and reporting, many organizations have opted to go in the opposite direction and adopt an overly simplified approach. They identify a single suite of measures, methods, and reports across all programs regardless of their purpose. While this approach may meet the needs of some users, inevitably it will frustrate others who lack the data to manage their operation.
  • A focus on one type of report: Dashboards are extraordinarily popular as a reporting tool and enable “speed of thought reporting”. The data is up-to-date, users can filter the data, and they can drill down to look for root cause of specific program issues. But dashboards can only take you so far. A robust strategy adapts the reporting to the user needs and their reasons for measuring.
  • A tool to gather the data: If you are a large organization, you will need a tool to enable you to aggregate and disaggregate large volumes of data. The tool is not the strategy.

Now that you have insight into what a measurement and reporting strategy is not, let’s turn to what it is.

  1. Begin your strategy with a clear articulation of why you are measuring. Different purposes to measure influence what you measure and what, how and when you report.
  2. Next, identify your users and their needs. Who will consume the data? What do they need? What decisions might they make? What actions might result from the information you provide? A strategy needs to be grounded in their requirements.
  3. Specify the measures overall that you will use. However, don’t stop there. Identify the specific measures for key learning programs or department initiatives. Include a balanced suite of measures including efficiency and effectiveness measures for all programs and outcome measures for strategic, business-aligned programs.
  4. Define your data collection approach. Where will you get the data? Where should you consider sampling? How will you ensure that you get sufficient responses to make meaningful inferences from survey data? Where can you automate to reduce effort, increase data reliability, and improve speed to insight?
  5. Specify the types of reports you intend to use. When will you employ scorecards vs dashboards? When should you use program evaluation reports or management reports?
  6. Plan how and when you will share reports. What are the decision-making cadences and how can you align reporting to them?
  7. Finally, define the resources you need to execute and sustain the strategy. Be clear about what funding, capability, and tools your organization will require to build sustainable measurement.

Creating a measurement and reporting strategy will take time and effort.  You will need to meet with the CLO as well as senior L&D leaders and perhaps key business goal owners. The payoff for this effort will be significant and will enhance the value the L&D function delivers to the organization.  Don’t hesitate to reach out to us at the Center for Talent Reporting. We are here to help you in your journey.

 
 

Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy

As some of you may be aware, the title of this blog is the name of the book Peggy Parskey and I recently published with ATD. It is the culmination of all our work since 2010 to create a framework and a set of standards and best practices to make it easier for L&D professionals to create a measurement and reporting strategy. It is also the result of input, feedback, and suggestions from the thousands of professionals who attended our webinars, workshops, and conferences. So, in essence—the book is a joint effort by everyone to advance the profession.

For those unaware of the backstory of the book, here’s a little history…

It all began in September 2010 at a CLO Symposium networking event. Kent Barnett, at the time CEO of Knowledge Advisors, and Tamar Elkeles, at the time VP of Organizational Learning for Qualcomm, were discussing the state of our profession. They both agreed that the time had come for measurement and reporting standards— similar to what accountants have in the Generally Accepted Accounting Principles (GAAP). Accountants go to university and learn about the four types of measures (income, expense, assets, and liabilities), the specific measures in each category, how to calculate the value for each measure, and what to do with the measure once they have its value (i.e., in what report it should be included and how it should be used). They wondered if a similar framework would benefit L&D.

Kent and Tamar set forth to form an advisory council to pursue their vision and ideas. They engaged leading practitioners as well as thought leaders in the field, about 28 in all. They also recruited both Peggy Parskey and me to lead the effort. We produced a draft document in early 2011, which would end up going through over 20 iterations as we incorporated feedback from the council and others. The final document was completed later in 2011, and we then began to expand the principles to all areas of HR, work that was completed in 2012. The standards were named Talent Development Reporting Principles (TDRp). Similar to GAAP, TDRp provides a framework for measurement and reporting for L&D.

By mid-2012, the foundational work was complete, and TDRp needed a home. We established the Center for Talent Reporting (CTR) as a 501(c)(6) nonprofit organization in August of 2012 to continue to develop and promote the principles of TDRp. We hosted our first conference and our first workshops the following year. We also started our monthly webinars and blogs. And we never turned down an opportunity to speak and share TDRp with others.

It’s hard to believe that we have been at this for ten years. We’ve learned a lot along the way, so we captured all of our learnings in the book, Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy. It is our intention that the book reinforces the standards of the TDRp framework and will do for the L&D profession what GAAP has done for accounting, which is to provide a common language; categories, names, and definitions of measures; standard reports; and guidance on what to measure and how to report.

The TDRp framework begins with four broad reasons to measure: inform, monitor, evaluate, and manage. There are many more detailed reasons to measure, but we believe a framework should have no more than five elements, or it becomes too difficult to remember and use. These four categories make it easier for us to have discussions about the reasons to measure, which is the starting point for any measurement strategy.

TDRp recommends three categories for measures. Historically, practitioners had divided measures into efficiency and effectiveness buckets which is what I did when I worked at Caterpillar. However, research and practice showed the need for a third category, not because of the number of measures in it but because of its importance. The third type of measure is “outcome,” which means the results or impact from the learning on the organization’s goals. This is the type of measure CEOs want to see the most, and yet it is seldom reported. This is the measure required to make the business case for learning.

Lastly, TDRp identifies five types of reports which contain the measures. Moreover, TDRp ties the type of report to the reason for measuring, which provides much-needed guidance for practitioners. Scorecards and dashboards are great for informing and can also be used for monitoring if thresholds are included. Program evaluation and custom analysis reports are ideal for sharing the results of a program or research. Management reports are the fifth type, and these have a special format to help leaders manage programs and initiatives to deliver planned results. They come in three varieties depending on the user and the need, but all have the same format, just like the basic accounting reports do.

It is our sincere hope that the TDRp framework and the guidance in Measurement Demystified will help the profession advance. Some are just beginning and should benefit significantly from a framework and guidance on how to select measures and reports. Others are further along and can benefit from the definitions of over 120 measures and the advanced report formats. Even experts may benefit from the detailed discussion of how to create plan numbers, use year-to-date results, and create forecasts.

And we are not done yet. We just submitted our draft manuscript for Measurement Demystified: The Field Guide to ATD for publication this December. We provide over 100 exercises to improve your understanding of the concepts and to give you practice in applying them. Hopefully, the Field Guide will significantly increase your skill level and your confidence, enabling you to create a much more robust measurement and reporting strategy.

We look forward to your continued engagement and feedback.

Running Learning Like a Business

The concept of “running learning like a business” continues to gain traction. It means different things to different people but in all cases involves bringing a more business-like perspective to the L&D function. Personally, I like to focus most on its implications for the actual management of learning programs and the L&D department.

So, you ask, what does this mean specifically? What would someone do differently? How can you tell if someone is running their operation like a business? I believe there are two primary elements to running learning like a business. First, you need to set specific, measurable goals or targets for every important measure. This becomes your business plan for a program or the entire department. This plan should be created just before the start of your fiscal year or no later than the first month of the new fiscal year. Second, once you have a business plan and the year is underway, you need to compare progress against plan every month and answer two key questions: 1) Are we on plan year to date? and 2) Are we going to end the year on plan? If you are not on plan or if it appears you may not end the year on plan, then you need to discuss options to get back on plan and decide whether to take corrective action. This is what we mean by disciplined execution.

Let’s look at each in more detail. First, you need a good plan. Of course, there is a lot that goes into a good plan, beginning with an understanding of the organization’s goals and discussions with goal owners about whether learning has a role to play. Think of this as proactive, high-level performance consulting where you seek out the senior leaders and engage in good, open discussion to explore the possibilities for learning to help them achieve their goals. Learning will not always have a role, but at Caterpillar we were able to contribute to each of the CEO’s top seven goals each year.

If learning can make a contribution, then you need to reach agreement with the goal owner (like the SVP of Sales) on what the learning program will look like: target audience, timing, learning objectives, type of learning, design, etc. More than that, you and the goal owner need to agree on measures of success (like the impact of learning on sales) and on targets for key efficiency and effectiveness measures to deliver the agreed-upon outcome (like impact of learning). This would include agreeing on plans for number of participants, completion dates and rates, learning (level 2), and application (level 3). This needs to be bolstered by agreeing on roles and responsibilities – what each of you needs to do to deliver the planned measures. For example, how will the goal owner communicate the importance of the learning before the program and reinforce desired behaviors after the program? (We had a written roles and responsibilities document at Caterpillar signed by both parties.)

Second, once the program is underway, you will begin to receive monthly data on the measures you and the goal owner agreed were important. You will need a monthly management report (called a program report) to compare year-to-date results to plan. If you are not on plan, you need to understand why. Perhaps it is just a matter of timing (for example, it took longer to launch than planned) and, left alone, the program will get back on track and meet plan expectations. This is why the forecast for how the year is likely to end is so important. If the forecast shows you delivering plan, no further action is required. However, if the forecast shows you falling short of plan by year end, then you need to consider taking corrective action as soon as possible. This may involve actions such as redirecting resources toward this program or coming up with new plans for the goal owner to reinforce the learning.

Our focus so far has centered on what we call strategic programs. These are programs directly aligned to the goals of the CEO. However, there are typically other programs of great importance, in some cases even more important than strategic programs. Examples would be onboarding, basic skills training, leadership, and compliance. The same thinking applies. Start by creating a plan for each program with specific, measurable goals. Then execute the plan throughout the year with discipline to deliver the plans.

We have just discussed what it means to run learning like a business from a program perspective. The same discipline, however, can be applied to your CLO’s initiatives to improve results across all programs (like improving the overall application rate) or to improve internal process or system efficiency or effectiveness (like improving customer satisfaction with the LMS or reducing helpdesk wait time). In this case, the CLO needs to set specific, measurable goals for each measure she chooses to manage. These measures are captured in the monthly operations report which she and the directors can use each month to determine if the initiatives are on plan and likely to end the year on plan. If not, they need to decide what action to take.

So, this is what “running learning like a business” means to me. This is the hard work of managing L&D with the same type of discipline your colleagues use in sales or manufacturing, where they set annual targets and do the best they can to meet them. For many, this is a very different way of managing than they practice today. Most have plans of some sort, but many are not specific and measurable. Even some who do have good plans do not execute them with discipline by using the monthly program and operations reports to compare progress against plan.

If you are managing this way today, congratulations! I know you see the value in this and ask you to share your stories with others so they can be inspired by your success. If you are not managing this way, consider giving it a try. You can start small. Try it for one program. And/or try it with the CLO to better manage a short list (3-5) of measures for improvement. I am convinced that running learning like a business was key to our success at Caterpillar, including support and funding from our CEO and senior leaders.

If you would like to hear more about this concept, join me for a panel discussion on March 23 at 4:15 ET hosted by Corporate Learning Week.

2020: The Year of Disruption

I think we’ll all be happy to put 2020 behind us. As of December 18, 2020, Covid has claimed more than 300,000 lives in the U.S. and disrupted the economy, our personal lives, and learning along the way. Despite the pandemic’s destruction, some good has come from the disruption—like the SEC rule mandating human capital disclosure.

The impact of the pandemic on learning has been massive. Organizations have been forced to shift from instructor-led learning (ILT) to virtual ILT (vILT) supplemented by more eLearning and portal content. Some learning couldn’t be converted immediately, so learning opportunities and productivity gains were lost. Since companies had to convert learning so quickly, much of the vILT didn’t meet quality standards. Despite this, learners were appreciative of the effort and happy to have some learning rather than none at all.

During this time, the good news is that learning departments have demonstrated their value by offering up valuable strategic advice on how to manage remotely. This has been a huge win for many L&D departments and has dramatically increased their credibility and stature within the organization.

Just as the pandemic has caused us all to reconsider our work-life balance and the possibility of an ongoing work-from-home arrangement, it has also forced us to reconsider the mix of ILT versus vILT, eLearning, and content available in the LMS portal. Actually, this reconsideration is long overdue. Many organizations had little to no vILT before March 2020. Now, they have a lot of vILT, and the quality of the content is slowly improving as it is redesigned for virtual delivery. Live virtual training offers significant benefits, such as travel and facility cost savings. And, if properly designed, it may be able to accomplish the same objectives in less time. I don’t see any chance of organizations reverting to a pre-pandemic model that relied so heavily on ILT.

There is also an opportunity here to think even more broadly about restructuring your learning programs. Many organizations going into the pandemic had long onboarding and basic skills ILT programs. Some went on for a month or more, with a few lasting three, six, or even 12 months. Is this really the best way to learn? And how much of the content will be remembered and applied? Instead, why not use this opportunity to restructure the learning entirely? Reduce ILT or vILT at the beginning to just the basics that learners will need to get started. Supplement this with additional vILT or eLearning combined with performance support in the workstream at the time of need. For example, following the initial (now shortened) course, participants might take a short vILT course or eLearning module every month over the rest of the year. And this is combined with performance support, which is easy to access and provides them just the tools they require at the time of need.

Last, let’s not forget the SEC’s new rule mandating human capital disclosure. Over the coming years, this transparency will spread to all types of organizations and radically alter the expectation of both investors and employees for information on important human capital metrics like diversity, pay equity, leadership trust, and employee engagement. By 2030 people will wonder why this type of information was not always available. Disruption is coming, and there will be no going back.

So, take this opportunity as you plan for 2021 to continue to think outside the box, just as you have been forced to do this year. Don’t go back to the old ways even when offices reopen.  Double down on new ways of learning and embrace the opportunity to disclose your important L&D and HR metrics publicly.

Best wishes for the holidays and may everyone be healthy and safe.

Covid Has Changed Our L&D World—Where Do We Go from Here?

by June Sonsalla,
Vice President, Head of Learning & Development,
Thomson Reuters

In 2014 I attended an event focused on Resilience at a boutique hotel near the harbor in San Diego. The day of the conference, it was a temperate 70°F, the palm trees were bending in the breeze, and the air smelled of saltwater and wildflowers. In a pre-pandemic world, resilience lacked urgency. At best, it was a new way to describe change readiness. At worst, it put the onus on employees to manage increasing work volume with the same resources. That day back in 2014 seems like a lifetime ago. I’ve since moved to the Midwest, there’s a different president in the White House, and I must wear a mask while in the grocery store.

Fast-forward to 2020. In March of this year I started a new position at Thomson Reuters and spent only nine days in the office before moving to my home office indefinitely. A global pandemic, an increased focus on social justice, and inclusion as a critical input for all talent processes require a completely different way of working to get the same results as in prior years. In addition, adults are reporting more adverse mental health conditions as they navigate uncertainty. All of these variables mean new demands on the human resources function and on each of us individually. If I had a crystal ball, I would have taken better notes during that Resilience workshop I attended back in 2014.

So, how do we overcome new obstacles, move forward, and contribute as much or more value to the organization than before? What matters most to our organizations now? What measures do we use to ensure continued progress and long-term success? And, how do we move ourselves and human resources from merely “getting by” to excelling in the new normal?

A Special Invitation…

I invite you to explore the answers to all of these questions with me at a very special event… the 2020 CTR Conference. During the session, Covid-19: Challenges, Change, and Continued Innovation, a few of my L&D colleagues and I will discuss how the pandemic has accelerated long-term trends creating permanent transformations in what and how L&D delivers strategic value. Hope to see you there for a lively discussion.

Measure and Appreciate Talent Outcomes

by Jeffrey Berk, COO, Performitiv

Learning’s ‘holy grail’ is to measure business outcomes. These include measuring how common business results are connected to learning. Results such as revenue, expenses, quality, productivity, customer satisfaction, risk, cycle time, and employee retention are common examples of business outcomes to which L&D wants to show its connection.

However, let’s not forget about important talent outcomes. Talent outcomes are not the business results mentioned above, yet they play a very important role in measuring learning’s impact, especially for strategic, visible, and costly programs. Over the past year, Performitiv has partnered with L&D experts to identify the main sources of talent outcomes, such as culture, engagement, leadership, knowledge and skills, and quality of hire. These outcomes are extremely important to executives for programs such as leadership and onboarding, as well as programs that are part of material change or reorganization.

The next challenge is to figure out how to measure these talent outcomes. Unlike operational business outcomes, talent outcomes are not as clear or obvious. They require evidence of impact, but not perfect or precise statistical measures of impact. As a result, we can ask questions on evaluations to understand whether programs were connected to the talent outcomes.

Here are some sample questions of talent outcome indicators on evaluations:

  • Culture: As a direct result of this learning, I feel more connected to our organization’s values and beliefs
  • Engagement: As a direct result of this learning, I am more motivated and committed to this organization
  • Leadership: As a direct result of this learning, I feel I am a significantly better leader for this organization
  • Knowledge and Skills: As a direct result of this learning, I am confident I will significantly improve my job performance
  • Onboarding/Orientation: As a direct result of this learning, I am confident and comfortable to perform my job requirements at a high level

While not perfect, the above guidance may be used to build your own talent outcome indicator questions, as they provide reasonable evidence of impact. For example, the Engagement indicator question may be used to help an executive understand whether participants felt significantly more motivated and committed to stay with the firm as a result of participating in the program. The program manager can then look to learning objectives, exercises, and support materials to show the connection of the program to engagement.

2020 CTR Annual Conference

Learn More about Learning Measurement to Track Impact

Jeffrey Berk will be discussing Learning Measurement in depth at the 2020 Virtual CTR Annual Conference, October 27-29, 2020. Join him for his session: Learning Measurement Using Impact Process Mapping, October 28 at 2:00 pm (ET). Registration is completely free.

REGISTER>>

A Continuous Improvement Approach Designed by Learning for Learning

by Kent Barnett, CEO, Performitiv

Ten years ago, a group of professionals came together to create an industry standard for Talent Development Reporting. Under Dave Vance’s leadership, that work turned into Talent Development Reporting Principles (TDRp) and the Center for Talent Reporting. This work has withstood the test of time and has become the industry standard we all desired.

We now have the opportunity to leverage TDRp and create a much-needed continuous improvement process that helps our industry optimize its impact. Other industries have created their own continuous improvement processes, such as Six Sigma and the Net Promoter System, with dramatic impact on their contributions.

Many of us who have contributed to the development of TDRp have created a similar continuous improvement process, the Learning Optimization Model (LOM). The LOM is being deployed by several leading organizations and experts. It has six core components that can be implemented over time.

One of the key components is to introduce a new way of measuring effectiveness and impact. It leverages the simplicity and power of Net Promoter Score but is designed specifically for Talent Development. It is a great way to compare formal and informal learning while identifying ways to improve impact.

Another component, the impact process map, helps us understand how all the measures we have fit together. More importantly, it helps us understand what to do with the data.

A third component of the LOM is hard data. Many of us believe that learning needs to take ownership of its financial contributions, similar to manufacturing and sales. A vice president of manufacturing doesn’t control all aspects of gross profit, but it is a hard measure critical to managing manufacturing’s contribution. A vice president of sales doesn’t control all aspects of sales, but sales growth is hard data critical to managing impact. All business units with a P&L have workforce productivity goals measured by revenue divided by labor cost. Shouldn’t a CLO know the workforce productivity goals for his/her customers and take ownership to help achieve them?
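To make the arithmetic concrete, here is a minimal sketch of the workforce productivity measure described above (revenue divided by labor cost); the figures are hypothetical, not drawn from any particular organization.

    # Workforce productivity as described above: revenue divided by labor cost.
    # All figures below are hypothetical.

    def workforce_productivity(revenue: float, labor_cost: float) -> float:
        """Return revenue generated per dollar of fully loaded labor cost."""
        return revenue / labor_cost

    # Example: a business unit with $50M revenue and $20M total labor cost
    print(workforce_productivity(50_000_000, 20_000_000))  # 2.5 dollars of revenue per labor dollar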

Another component of the LOM is financial modeling. Jack and Patti Phillips of the ROI Institute have been an integral part of developing the LOM. Financial modeling is designed to incorporate the ROI process by helping to analyze the best mix of resources prior to rolling out certain strategic programs.

If you’re interested in learning more about the Learning Optimization Model, join me at the CTR Conference, October 27-29, where I will go more in-depth.

2020 CTR Annual Conference

LEARN MORE ABOUT THE LEARNING OPTIMIZATION MODEL


Join us at the 2020 Virtual CTR Annual Conference, October 27, for the session: A Continuous Improvement Approach Designed by Learning for Learning presented by Kent Barnett, Founder and CEO of Performitiv.

LEARN MORE >>

SEC Publishes Final Rule on Human Capital Reporting

by David Vance, Executive Director, Center for Talent Reporting

The U.S. Securities and Exchange Commission (SEC) just published its final rule on human capital reporting on August 26, 2020. This follows the proposed rule issued one year ago. The final rule makes very few changes to the proposed rule and mandates, for the first time, public reporting of human capital metrics by companies subject to SEC reporting requirements, which includes all US companies issuing stock, bonds, or derivatives. The rule becomes effective 30 days after publication in the Federal Register, which should happen in September 2020.

Today, companies have to report only one human capital metric—number of employees. The new rule will still require the reporting of the number of full-time employees but will additionally encourage companies to report the number of part-time and temporary employees if they are important to a company’s financial results.

More importantly, the new rule mandates for the first time that companies provide “to the extent such disclosure is material to an understanding of the registrant’s business taken as a whole, a description of a registrant’s human capital resources, including any human capital measures or objectives that the registrant focuses on in managing the business.”

The qualifying word “material” means anything that an investor would want to know before buying or selling a stock, bond, or derivative. The SEC goes on to specifically call out the areas of “attraction, development, and retention of personnel as non-exclusive examples of subjects that may be material, depending on the registrant’s business and workforce.”

SEC Chair Clayton commented in the public release, “I cannot remember engaging with a high quality, lasting company that did not focus on attracting, developing, and enhancing its people. To the extent those efforts have a material impact on their performance, I believe investors benefit from understanding the drivers of that performance.”

In other words, the SEC expects to see these three areas discussed and reported if they are material, and it is hard to imagine companies where they are not. Consequently, public companies will need to start disclosing and commenting on them as early as October 2020. And this is just the starting point. The rule calls for all material matters to be disclosed; therefore, each company will need to decide what other human capital matters might be considered material by an investor. Depending on a company’s situation, this might include total workforce cost or productivity, diversity (especially at the leadership level), and culture (revealed by employee engagement and leadership surveys). Discussion may also be required about the implementation of a new performance management system or a significant change in compensation and benefit philosophy.

Where can companies get guidance on specific metrics they might use to meet the new rule? The 2018 recommendations by the International Organization for Standardization (ISO) include 10 metrics for public reporting by all organizations and an additional 13 for reporting by large organizations. ISO also recommends 36 other metrics for internal reporting, which are organized by area or cluster. The metrics recommended for all organizations in the three SEC focus areas include:

  • Attraction: Time to fill vacant positions, time to fill critical vacant positions, percentage of positions filled internally, percentage of critical positions filled internally
  • Development: Development and training cost, percentage of employees who have completed training on compliance and ethics
  • Retention: Turnover rate

These metrics provide a starting point for a company’s human capital reporting strategy to meet SEC requirements. Additional metrics are available from ISO for the area of development, including percentage of employees who participate in training, average hours of formal training per employee, percentage of leaders who participate in training, and percentage of leaders who participate in leadership development.

In addition to the metrics for the three focus areas, ISO recommends a number of others that could address areas of material concern such as workforce cost, productivity, diversity, and leadership trust. Companies will need to identify their own metrics for areas not addressed by ISO and may wish to supplement with other data (such as employee engagement). They will also need to report on material initiatives which may not have metrics.

Publication of the final rule by the SEC, combined with the comprehensive recommendations by ISO, ushers in a new era of transparency in human capital which will fundamentally change the way organizations operate. And these changes will go far beyond US publicly traded companies. In 5-10 years, privately held companies, nonprofits, and other types of organizations will be compelled (or shamed) into adopting the same level of transparency.

In the future of human capital transparency, what investor will buy stock in a company that refuses to disclose material human capital information, which is already the primary driver of value in many companies? What employee will work for an organization that refuses to share its key human capital metrics because they are embarrassed to share? Why would anyone work for an organization where the culture is terrible, they don’t invest in their people, turnover is high, and there are not enough employees to do the work? Today, you can say you didn’t know. Tomorrow, you will know.

Change is coming. The new world of human capital transparency has arrived.

Learn More

The final rule on human capital reporting is part of a much larger revision to what is called Regulation S-K, which governs what companies must disclose to investors initially (form S-1), quarterly (form 10-Q), and annually (form 10-K). The name of this effort, which has been underway at the SEC for four years, is Modernization of Regulation S-K Items 101, 103, and 105. Item 101 governs the description of the business, and this is where the new rules for human capital reporting are found. (Item 103 covers legal proceedings and Item 105 risk factors.) Read the rule here (pages 45-54 cover human capital).

The ISO 30414:2018 Human Resources Management—Guidelines for Internal and External Human Capital Reporting is available for purchase. ISO is just beginning to release technical specifications on how the metrics are defined and used. By mid-2021, specifications should be available for all 59 metrics. The specifications for five of the eight training and development metrics are now complete and in publication. They should be available soon.

Attend Our Free Webinar On Public Reporting

Join us September 16 for our Public Reporting webinar, where we will cover both the SEC rule and the ISO recommendations in greater detail.

Attend our Free Virtual Conference

We will be covering the subject of Human Capital Reporting at length at our Virtual Conference, October 27-29. Jeff Higgins, Adjunct Professor – Data Science (DSO), Human Capital Analytics at the University of Southern California – Marshall School of Business, has been actively involved in both the SEC and ISO efforts and will be keynoting on the subject. We will also have a panel on the SEC and ISO as well as a follow-up session on the recommended metrics.

Optimizing Investments in Learning

by Peggy Parskey, Associate Director, Center for Talent Reporting & John Mattox, II, PhD, Managing Consultant, Metrics That Matter

You might wonder why L&D leaders need guidance for optimizing investments in learning. Recent research from The Learning Report 2020 (Mattox, J.R., II & Gray, D., Action-Learning Associates) indicates that L&D leaders struggle to convey the impact of L&D. When asked what they would do better or differently to communicate value to business leaders, 36% of L&D leaders indicated they would improve processes related to communication. More telling, though: 50% indicated they would improve measurement. To convey value, leaders need a story to tell. Without measurement there is not much of a story.

This is where Talent Development Reporting principles (TDRp) are so relevant. TDRp begins with a measurement framework that recommends gathering three types of data: efficiency, effectiveness, and outcomes. Efficiency data tells the story of what happened and at what cost. How many courses did L&D offer? How many people completed training? How many hours of training did learners consume? What costs did we incur? Effectiveness data provides feedback about the quality of the learning event (e.g., instructor, courseware, and content) as well as leading indicators of the success of the event (e.g., likelihood to apply, estimates of performance improvement, estimates of business outcomes, and estimates of ROI). Outcomes are the business metrics that learning influences, such as sales, revenue, customer satisfaction, and employee satisfaction. TDRp recommends gathering data for all three types of measures so L&D leaders can describe what happened and to what effect.

The table below (Table 11.1 in Learning Analytics) shows a variety of measures for each category.

Key Performance Measures for L&D

Efficiency Measures

  • Number of people trained
  • Number of people trained by learning methodology (instructor-led, e-learning, virtual)
  • Reach (percentage of people trained in the target population)
  • Cost of training per program
  • Decrease in costs
  • Cost of training per learner
  • Cost of training per hour

Effectiveness Measures

  • Satisfaction with training
  • Knowledge and skills gained due to training
  • Intent to apply learning on the job
  • Expectation that training will improve individual performance on the job
  • Expectation that individual performance improvement will lead to organizational performance improvement
  • Return on expectations

Outcome Measures

  • Increase in customer satisfaction
  • Increase in employee performance
  • Decrease in risk
  • Increase in sales
  • Increase in revenue

Source: Learning Analytics ©2020, John R. Mattox II, Peggy Parskey, and Cristina Hall. Published with permission, Kogan Page Ltd.

In addition to this guidance about what to measure, TDRp provides guidance on how to report results using methods familiar to business leaders. Using these measurement and reporting approaches, L&D leaders can tell a robust story to business leaders that they can connect with.

In April 2020, Learning Analytics, Second Edition (by John Mattox, Peggy Parskey, and Cristina Hall) was published. The book helps L&D leaders improve the way they measure the impact of talent development programs. The second edition includes a chapter on Optimizing Investments in Learning, where TDRp is discussed in detail. TDRp is featured heavily in the book because it helps L&D leaders connect their efforts to business outcomes by measuring the right things and reporting them in a way that business leaders understand.

In Chapter 11, the authors connect TDRp to the Portfolio Evaluation methodology. This approach implies that business leaders are interested in how learning and development programs impact four business areas: growth, operational efficiency, foundational skills, and risk. By aligning courses with one of these business areas and using TDRp, L&D leaders can demonstrate the effectiveness of the courses in preparing the workforce to improve each business area (portfolio).

The book also provides guidance on how to use TDRp to spur L&D leaders to act on the data they report. The book indicates L&D leaders need to shift reporting in three ways to make it more actionable (see graphic below). Reports should provide:

  • Analytics that improve business decisions
  • Insights that link to evolving business challenges
  • Information that connects talent data to business impacts

Using Data to Spur Action

Source: Learning Analytics ©2020. Reproduced with permission, Kogan Page, Ltd.

Another critical aspect of conveying a meaningful message that drives action is to tell a compelling story. Chapter 11 of the book includes three critical elements of a data-driven story:

  • Scene Setting—Connect all data and results back to the desired business outcomes and goals.
  • Plot Development—Emphasize logical and clear connections between learning results and outcomes; focus on the central message of the results, not peripheral findings; and note any areas where L&D is creating value for the organization.
  • Resolution—Clearly and succinctly outline the justification for key findings; suggest improvements, recommendations, and next steps. Continue the conversation on how to help L&D improve.

2020 CTR Annual Conference

LEARN MORE ABOUT THE USE OF DATA AND LEARNING ANALYTICS


Join us at the 2020 Virtual CTR Annual Conference, October 27, for the session, Everything You Wanted to Know about Learning Analytics but Were Afraid to Ask.

LEARN MORE >>

Why We Need Human Capital Metrics and Reporting in Company Disclosures

by Jeff Higgins, Founder and CEO, Human Capital Management Institute

Can we agree that the coronavirus pandemic is above all a human crisis requiring people-centric measurement and solutions?

The last great crisis in the U.S. was the financial crisis of 2008-2009. If the pandemic is a uniquely human crisis, the most powerful driver of our economic recovery is what we do next with our human capital and people practices.

Given the importance of people and human capital practices in organizations, does it not make sense that such a critical resource and source of value creation be included in public disclosures?

Without standard human capital metrics, everyone is flying blind (risk planners, investors, policy makers, and corporate executives).

Human capital risk planning is a new focus for many companies. It’s a topic frequently mentioned, but one that has seen little action beyond companies adding boilerplate legal risk language in public reporting.

Reliable, validated human capital reporting has long been a black hole for investors, including institutional investors representing employee pension funds. This may change, as investors and policy makers press for insightful evidence of risk mitigation and future sustainability, including global standards like the ISO 30414 Human Capital Reporting Guidelines (December 2018) and the US SEC’s proposed rule on human capital disclosure (August 2019). Recently, the SEC has reaffirmed its desire for enhanced disclosure and more focus on workforce health and welfare.

“We recognize that producing forward-looking disclosure can be challenging and believe that taking on that challenge is appropriate,” explained the SEC in an April 2020 letter.

While critical for investors, human capital reporting is just as important for employees. Think about a person deciding where to work. Wouldn’t they want to know key human capital metrics to make their decision, such as the diversity of employees and leaders, the amount invested in developing employees, the percentage of employees and leaders who take advantage of training, the retention or turnover rate, and culture measures, such as leadership trust index or employee engagement score?

It is critical to note that organizations disclosing more about their human capital have historically performed better in the market.

EPIC research shows that reliable human capital data is often lacking, yet raw data is readily available, along with proposed ISO HC reporting guidelines.

The enhanced usage of people data and analytics was a positive business trend long before the COVID-19 crisis. But stakeholders, including executives and investors, have not always had information that was relevant, valuable, and useful for decision making.

Turning people data into decisions requires skill, effort, discipline, and money, all of which are often lacking in HR departments. Priorities have often been elsewhere, such as building employee engagement and experience metrics, along with other “hot” technology-driven vendor offerings.

Even experts acknowledge that the value of human capital is not always easy to measure. Nevertheless, ISO guidelines do exist for internal and external human capital reporting. Until now, relatively little has been offered publicly in most U.S. companies.

“We may not always want to know what the data tells us,” said one member. As we have seen with the COVID-19 crisis, not wanting to know what the data tells us can be a recipe for disaster, and not one that leads to economic recovery.

The Need for Transparency Has Never Been Greater

Many of the critical metrics and practices needed most are not transparent to the very stakeholders (such as employees, investors, and government policy makers) who must start and sustain the economic recovery.

Moving forward, enhanced transparency around workforce management and human capital measures will be needed. Without such data, organizations will have a difficult time adapting to changing markets and workforce needs and building critical partnerships. Such data is vital to building trust within the workforce and attracting, engaging, and retaining talent.

Increasingly, as organizations collect more data about their workforce, employees will want to benefit from that data.

“We saw less concern about privacy when we saw the workforce data was shared and being used to protect the workforce,” said one member.

Investing In and Developing the Workforce Has Never Been More Important

The power of learning, development, and people investment has never been greater nor more directly tied to economic results. A greatly changed world requires a new level of learning investment, adoption, and innovation.

Will organizations see the need for learning investment and people development as a continuing need? Will they measure it to better manage it? Will they begin to see human capital measures as a means for learning about what is working and what is not? Will they begin to see human capital reporting as a leading indicator of recovery? 

2020 CTR Annual Conference

IN-DEPTH LOOK AT ISO'S L&D AND HR METRICS FOR HUMAN CAPITAL REPORTING

Join us at the 2020 Virtual CTR Annual Conference for this session, where industry experts share the definitions of the 23 ISO disclosure metrics and how to use them in your human capital reporting strategy.

LEARN MORE >>

Three Measures You Should Capture Now During COVID-19

I hope everyone is healthy and managing through these very trying times. And, I hope the same for your family and friends. I know this pandemic has not been easy.

All of us by now are well into our COVID-induced new world, where we have shifted from instructor-led training (ILT) to virtual instructor-led training (vILT) and eLearning, supplemented by content available through our employee portals. On top of all of these changes, you have probably had to create new courses on safety and return-to-work procedures. Perhaps you have provided guidance to your leaders about how to manage remotely in stressful times. Some have even gone into cost reduction mode to help offset the loss of income.

While you’ve been really busy, I hope you are ready to ‘come up for air’ now. If so, here are a few measurements to think about. If you have not already implemented them, there is still time to collect the data before it gets lost.

Make Time to Collect and Analyze Data

First, be sure you are collecting data on your vILT and any new courses you have put online. You may be using a new platform, like Zoom, that was not set up to connect with your learning management system (LMS), so the number of participants may not be recorded and participants may not receive the typical follow-up survey. In this case, be sure to set up a system to capture the number of participants manually—perhaps by asking instructors to send in counts. And consider sending a survey to at least a sample of participants—even if you have to do it manually. If you cannot easily generate the email list of participants, send the survey to a sample of all employees and make it more general. For example, ask whether they have taken a vILT course, taken an online course, or downloaded content in the last three months. Then get feedback on each.

Feedback, especially for vILT, is important so you can compare it to ILT and understand what participants like better about it as well as what they dislike. Surprisingly, some organizations are reporting higher or equal scores for vILT. And these results are from vILT that was probably rushed into production (i.e., presentations from ILT that were repurposed for vILT). Imagine how much better vILT could be if it were actually designed for virtual delivery. This is the data you and your leaders will need to decide whether you want to permanently change your mix to less ILT in favor of more vILT when the pandemic is over.

Capture Your Efficiency Savings

Second, be sure to capture the efficiency savings from switching to vILT and online courses. Typically, vILT and eLearning will be less expensive because there is no instructor or participant travel, and the course may be shortened. To be fair, you should only count the savings where an alternative to ILT was provided. Of course, there will be savings from simply cancelling ILT with no replacement, but that is not really a savings. It is just a reduction in offerings. You can estimate the cost of ILT and the cost of vILT so you can calculate the difference. Don’t be afraid of estimating—it’s a common business practice. If you have too many offerings to make a calculation for each course, come up with the average ILT and vILT cost, find the difference, and multiply it by the number of participants.
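At an aggregate level, the estimate is just the cost difference per participant multiplied by the participant count. Here is a minimal sketch of that calculation; the cost and participant figures are hypothetical placeholders, not benchmarks.

    # Aggregate efficiency-savings estimate described above:
    # (average ILT cost per participant - average vILT cost per participant) x participants.
    # All figures are hypothetical.
    avg_ilt_cost_per_participant = 900.0    # instructor, room, materials, travel
    avg_vilt_cost_per_participant = 350.0   # platform and (often shortened) delivery
    participants_switched = 1_200           # count only learners offered a vILT alternative

    savings = (avg_ilt_cost_per_participant - avg_vilt_cost_per_participant) * participants_switched
    print(f"Estimated efficiency savings: ${savings:,.0f}")  # $660,000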

Capturing the dollar savings from switching to vILT is important because there is real value there and it should factor into your decision about continuing with a mix of learning modalities after the pandemic ends. The savings will be especially large for organizations where participants or instructors incur travel and lodging costs.

Calculate Your Opportunity Costs

Third, calculate the reduction in opportunity costs by switching to vILT and e-learning. Opportunity cost is the value of participants’ time while in class and traveling to and from class. This adds up—especially for half-day or full-day courses and can easily exceed the accounting costs (i.e., room rental, supplies, instructor, etc.) for some courses.

Calculating opportunity cost is simple. Take the average hourly employee labor and related cost (including benefits, employer-paid taxes, etc.) from HR and multiply it by the reduction in hours from switching to vILT and e-learning. For example, suppose you replaced an 8-hour ILT with a 5-hour vILT, which also saved participants one hour of travel time. The reduction in hours would be 3 + 1 = 4 hours. Multiply the 4 hours by the number of participants and by the average hourly compensation rate. You can use the average hours for an ILT course and compare it to the average for vILT or e-learning to make the calculation at an aggregate level.
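Here is a minimal sketch of that worked example in code; the participant count and hourly rate are hypothetical, so substitute your own figures from HR.

    # Opportunity-cost example from above: an 8-hour ILT replaced by a 5-hour vILT,
    # which also saves each participant one hour of travel. Rate and headcount are hypothetical.
    ilt_hours = 8
    vilt_hours = 5
    travel_hours_saved = 1
    hours_saved_per_participant = (ilt_hours - vilt_hours) + travel_hours_saved  # 4 hours

    participants = 500
    avg_hourly_labor_cost = 55.0  # fully loaded: wages, benefits, employer-paid taxes

    opportunity_cost_savings = hours_saved_per_participant * participants * avg_hourly_labor_cost
    print(f"Opportunity cost savings: ${opportunity_cost_savings:,.0f}")  # $110,000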

It’s important to account for savings in travel time—because employees’ time is valuable. By eliminating travel, you are giving time back to employees when you move to vILT or e-learning. Share the opportunity cost savings with your senior leadership, CFO, and CEO along with the savings in course costs. The opportunity cost savings, in particular, is likely to be an impressive number. Think about this savings when you decide whether to go back to ILT or keep using vILT.

I hope you find these measures helpful. You really need them to make an informed decision about what to do when the pandemic ends. I think the profession has a great opportunity to use this experience to become much more efficient and effective in the long run.

L&D Leadership in a Time of Great Change

[FREE VIRTUAL CONFERENCE] July 14 & 15, 2020. 12:00 – 4:15 pm EDT. This two-day virtual conference details the specific tasks L&D organizations have to carry out to achieve significant and continuous gains in workforce productivity. Presentations will detail how to organize these tasks so that training executives can perform them systematically, purposefully, with understanding, and with a high probability of accomplishment.

Hear case studies, winning strategies, and evidence-based results from strategic management leaders and learning executives charged with the awesome responsibility of managing L&D in today’s new business climate. Register Today!

Talent Leaders Virtual Exchange Americas

Virtual Event | June 17, 2020
REGISTER NOW

The Talent Leaders Virtual Exchange will give talent executives the much-needed opportunity to hear how their peers are dealing with unprecedented disruption during this global pandemic. In a time when the situation changes almost daily, it’s important that talent executives make quick decisions in the moment while also planning for a post-pandemic world.

The Talent Leaders Virtual Exchange will offer insights from many perspectives on what is working now and what strategies are being put into place for the future. Topics covered include:

  • Communicating with a concerned, remote workforce
  • Lessons learned in the face of a global pandemic
  • Capturing the voice of the employee in a crisis
  • How talent leaders are delivering value during the time of COVID-19
  • Managing your employer brand in uncertain times
  • Preparing for a post-pandemic workforce

>> REGISTER

 

Forward to the Past—Reflections on How Our Profession Evolves (Or Not)

If you follow this blog, you know that I’ve been struggling this past year with the profession’s new focus on upskilling employees for future needs. I have been trying to understand two things:

  1. How future needs can be reliably identified now, and
  2. Assuming needs can be identified, why anyone would want to provide training now, when the skills and knowledge will not be needed or applied for years to come—resulting in scrap rates of 100%.

To me, this seems to go against everything we have learned over the last 20 years in terms of good performance consulting and designing with application and impact in mind.

I was excited to see multiple topics on upskilling and reskilling at a recent CLO Symposium. I looked forward to learning more about this topic, given its incredible popularity.

During the sessions, I had expected to learn that the identifiable future skills would be tangible skills—like programming or new manufacturing techniques (which would be difficult to predict with enough specificity to train for today). Instead, the future skills identified by the presenters were all soft skills like communication, teamwork, innovation, creative thinking, and problem solving. This reminded me of current research citing soft skills as very important to future success.

That same research also indicated that employers are looking for those same soft skills in their employees today. If these skills are needed today, we can use our performance consulting tools to identify the gaps and design learning to address them. This all means that these soft skills can be applied today, which means we are really not talking about future skills—we are talking about skills to meet current needs, which are considered important for the future.

The CLO Symposium presenters shared these skills in the context of being ‘new’—as in, organizations should start to train employees on them. This is where I felt I was fast-forwarding to the past. When Caterpillar U was founded in 2000, we had a brand new LMS containing hundreds of courses. In the ‘old days’ training organizations used to brag about how many courses they had in their ‘paper’ catalog. Guess what was in that catalog…soft skill courses like communications, team building, writing, and problem solving. The catalog also included a lot of job-function skills, but the point is, soft skill courses are not new. They have always been in demand and always will be. So, can we please apply a little historical context and be more careful about how we define ‘future skills’? What we are really talking about is ‘forever skills.’

As I reflect on the past, it’s interesting to note what happened to learning leaders bragging about how many courses they offered. Company executives began asking for results. They wanted to know how these hundreds of courses specifically aligned to business goals and needs. Executives demanded measures beyond how many courses were offered. This led to ‘strategic alignment,’ where the emphasis is on how courses proactively and specifically align to meet the goals of the enterprise.

For many years, strategic alignment has been a hot topic at industry conferences, and many books have been written on the topic. How many times did I hear strategic alignment mentioned at the CLO Symposium conference? Zero. I understand that focus areas change and evolve, but we don’t seem to be building on our past. Instead, we’re re-inventing it, as with identifying soft skills as future skills. We will advance more quickly and soundly as a profession if we have a sense of our past and can skip the re-learning phases.

Let’s remember that soft skills will always be important as will strategic alignment. There is no need to cycle back and forth and rediscover the value of each. A holistic future can include all that we have learned which will provide the strongest foundation upon which to build, and then we can focus more of our energy on inventing what is truly new and special.

Thoughts on the Coronavirus and What It Means for L&D

The situation surrounding COVID-19 is evolving on a daily basis. Uncertainty is high and it may be weeks or even several months before we know how all this plays out. Several things are clear, though.

First, the US and the world are likely headed into an economic “timeout” and quite possibly a recession. Hopefully, this will be brief, but even a brief downturn will cause significant harm. Many in the service industries are going to lose their jobs or have their hours severely reduced. Other companies will experience falling sales as people cut back or delay their spending. Most organizations have already restricted travel. Even if the worst of the virus is over by summer and travel restrictions are lifted, companies may implement cost reduction measures for the remainder of the year to partially offset a decline in revenue for the first half of the year.

So, if your organization is one which is negatively impacted, now is the time to implement the recession planning that we have talked about for years. Be prepared to start reducing variable expenses, including the use of vendors and consultants. Where appropriate, shift from instructor-led learning to virtual ILT, e-learning, and performance support. Prioritize your work and be prepared to focus on only the most important items. Get advice from your governing body or senior leaders on where they would like you to focus your efforts. If you have not already done so, clearly align your programs to the top goals and needs of your CEO.

Second, COVID-19 presents an excellent opportunity to make significant progress in moving away from instructor-led learning and towards virtual ILT and e-learning. Most organizations have been moving in this direction for years but say they are not yet at their desired mix of ILT to vILT and e-learning. Well, now is the perfect opportunity to make this transition. Travel restrictions will prevent employees from traveling to a class and will also prevent instructors from flying to class locations. And social distancing discourages bringing employees together for training which is why so many universities have announced that they are shifting entirely to remote learning for at least the next month or so. The private sector should be equally responsive. Use the existing platforms to conduct classes virtually and ramp up the marketing of your e-learning and portal content.

Third, the virus also presents a once-in-a-lifetime opportunity to highlight performance support. What can we in L&D do to help our colleagues adapt to this new world and still do their jobs? People will be working remotely from home, perhaps for the first time. What performance support do they need? Most will not need a 30-minute e-learning course on working remotely but they will need steps to take to get connected. They will need help setting up virtual meetings. What can you provide them to make their life easier and to prevent a flood of calls to IT or HR for support? And, building on this, moving forward what opportunities do you have to replace some of your existing ILT or e-learning with performance support? Take this opportunity to truly integrate performance support into all that you do.

On a personal note, I wish each of you the best and hope you stay healthy. The situation will stabilize and eventually return to normal, but in the meantime let’s see what we can do to help each other through these challenging times.

Top Five New Year’s Resolutions for the Learning Profession

January is always a good time to reflect on the past and ponder the future. It’s also a good time to make some resolutions to improve in the coming year and decade. Here are my top five suggestions for the profession in the areas of measurement, analytics, reporting, and running learning like a business.

1. Resolve to measure more at levels 3 (application) and 4 (impact).

In November, ATD released its latest survey on evaluation, which showed that 60% of respondents evaluated at least one program at level 3, up from 54% in the 2015 study. Likewise, 38% measured at least one program at level 4, a slight increase from 35% in 2015. The good news is that the numbers are moving in the right direction. The bad news is that they should be much higher. Over 80% measure levels 1 and 2. This should be the target for level 3 as well. And we should measure level 3 for more than just a few programs. It should be measured and managed for all important programs. Once we start measuring application at a higher rate, we can turn our attention to measuring more programs at level 4. Phillips has shown us that this measure is what CEOs most want to see from us, and we’re not giving it to them. We need to fix this.

2. Continue our shift from order takers to strategic business partners.

This will require us to have a discussion with the CEO before the year starts to discover the goals for the coming year and the goal owners. We need to talk with the goal owner to determine if learning has a role to play, and if it does, work with their staff to recommend an appropriate program. The goal owner then needs to approve the program, and we need to reach agreement on the planned impact, other key measures of success, timing, and roles and responsibilities. One of the most important responsibilities of the goal owner will be to ensure that supervisors reinforce the learning with their employees in order to achieve a high application rate. All of this should be completed before the new year starts.

3. Be more disciplined and honest about how well aligned courses are with your organization’s goals.

Most learning departments would say they are well aligned but really, they’re not. What do I mean by “alignment”? It means the courses were designed (or purchased) in response to an identified need in support of a goal. If you say you are “aligned” to your organization’s top five goals, you should have a strategic alignment table which lists the top five goals—and under each one, you show the programs designed (or purchased) specifically to help achieve that goal. Instead, most use backward mapping, which starts with the course catalog and then seeks to map each course to all relevant goals. Since most courses indirectly support most goals, many conclude they are well aligned. For example, you might find that courses to improve communication skills are “aligned” to each of the top five goals (like increasing sales). The two approaches are very different—which explains why, in surveys, such a high percentage report they are aligned. A simple sketch of the forward-aligned table follows below.
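To make the contrast concrete, here is a minimal, hypothetical sketch of a strategic alignment table: each top goal lists only the programs designed or purchased specifically to support it, rather than backward-mapping the whole catalog onto every goal. The goals and program names are illustrative, not drawn from any real organization.

    # Hypothetical strategic alignment table: goal -> programs built specifically for it.
    alignment_table = {
        "Increase sales by 10%": ["Consultative Selling Workshop", "New Product Launch Training"],
        "Reduce safety incidents by 20%": ["Plant Safety Refresher"],
        "Improve customer retention": ["Service Recovery Program"],
    }

    for goal, programs in alignment_table.items():
        print(goal)
        for program in programs:
            print(f"  - {program}")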

4. Be more sophisticated in your analysis of data.

Historically, data has been aggregated and reported as a total or average. Say, for example, we report that the average participant reaction (level 1) is 76% favorable or that the average application rate is 51%. These are excellent summary measures but often do not tell the entire story. Perhaps the overall 76% favorable rating is due to the majority of participants being very satisfied but a sizable minority being very dissatisfied. We would want to know this so we can explore the reasons (i.e., wrong target audience, bad instructor, etc.). Look at the distribution, not just the average or total, and then use the information at a “by-name” level to take action (i.e., letting the supervisor know they will have to work harder with an individual to get application). This is the potential of microdata, which is really just a fancy term for data at the individual employee level. Ken Phillips is doing interesting work in this area.
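As a minimal illustration of why the distribution matters, here is a short sketch on hypothetical 1-5 ratings: the same 76% favorable summary can mask a sizable very-dissatisfied minority.

    # Hypothetical level 1 ratings for 100 participants on a 1-5 scale.
    from collections import Counter

    scores = [5] * 56 + [4] * 20 + [1] * 24

    favorable = sum(1 for s in scores if s >= 4) / len(scores)
    print(f"Favorable (4 or 5): {favorable:.0%}")   # 76% favorable overall

    print(sorted(Counter(scores).items()))          # [(1, 24), (4, 20), (5, 56)] -> 24 very dissatisfied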

5. Create and use more management reports.

Today, the industry primarily generates scorecards and dashboards. Scorecards show historical data in tabular form (like the number of participants or courses for the last six months), while dashboards include some visual and/or interactive displays. Both, however, are designed to inform by sharing historical results. In other words, they are backward looking. Of course, it is important to know where we have been, so we need scorecards and dashboards—but we also need to look forward. For this, we have management reports that show the plan (or targets) for the year, a comparison of year-to-date results with plan, the forecast (how the year is likely to end), and the forecast compared to plan. They are designed to help leaders manage their initiatives to deliver promised results by answering two questions: “Are we on plan year to date?” and “Are we going to end the year on plan?” Management reports complement scorecards and dashboards by providing that forward look, which can be used to actively manage initiatives.
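Here is a minimal sketch of that forward-looking structure for two measures; the plan, year-to-date, and forecast figures are hypothetical.

    # Hypothetical management report: plan for the year, year-to-date actual,
    # forecast for year end, and forecast compared to plan.
    measures = {
        "Participants trained": (10_000, 6_500, 9_200),
        "Application rate (%)": (60, 51, 55),
    }

    for name, (plan, ytd, forecast) in measures.items():
        print(f"{name}: plan={plan}, YTD={ytd}, forecast={forecast}, "
              f"forecast vs plan={forecast - plan:+}")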

 

Corporate Learning Week 2020

Corporate Learning Week | March 23-26 | Austin TX | www.clweek.com

This year’s Corporate Learning Week (CLW) conference celebrates 25 years as an essential gathering for senior corporate learning leaders and innovators. If you’re a learning, training, and development professional who wants to increase performance, maximize profits, make more strategic and tactical decisions, and offer more effective training and development to future-proof your organization against workforce knowledge shortages—then this conference is for you.

This year’s event will focus on helping L&D leaders to utilize innovative technology to deliver measurable, quantifiable results—results that empower diverse talent and ultimately drive business outcomes. In short, CLW 2020 will help address the fundamental questions and challenges that remain in today’s environment:

  • How do you make your L&D more effective in the face of disruption?
  • How do you find and realize new training potential in a rapidly digitizing environment?
  • How do you align your training and development outcomes to corporate goals?

CLW 2020 will equip L&D practitioners with practical skills to go from strategy-setting to practical execution and to communicate the business outcomes and results back to senior executives.

Reserve your spot today!

Upskilling Revisited: What Strategy Makes the Most Sense?

In last month’s blog I shared my skepticism about some of the current upskilling initiatives. I have been thinking more about it since then. In fact, it is hard not to given the near daily references to the need for upskilling. Just this morning I saw a reference to a World Economic Forum study on the future of jobs which indicated that 54% of employees will require significant reskilling or upskilling by 2022.

Several things come to mind. First, I wonder about the terminology. How should we define these two terms? The term “upskilling” seems to suggest providing a higher level of skill while “reskilling” suggests a different skill, which may be higher, lower, or the same in terms of competency. Is that what we mean? And how does this differ from what we have been doing in the past? Haven’t you been reskilling and upskilling your employees for years? Isn’t that what the term training means? (Unless you have been “downskilling” your employees!) Is “reskilling” just the new term for “training”? Don’t get me wrong. If you can get a larger budget by retiring the term “training” and emphasizing “reskilling or upskilling”, then go for it. But let’s be careful about how we use the terms.

Second, I wonder about the methodology being used to conclude that upskilling is needed. As I mentioned last month, one of my worries is that we are not doing a needs analysis to confirm this need or identify precisely what the need is. I know some organizations are hiring management consultants to tell them what skills will be needed in the future but how do the consultants know what your needs will be? I know other L&D departments are asking their own organization leaders the same question, but how good is the knowledge of your own leaders? My concern is that this methodology will generate the type of common, somewhat vague, needs we now hear about all the time like digital literacy or fluency.

Third, what exactly do we do once we have identified these vague needs? A traditional needs analysis would identify the performance that needs to be improved and then, assuming learning does have a role to play in improving that performance, recommend training specifically designed to close the gap. If we were to provide training to improve digital fluency, exactly what will be improved? Are we back to increasing competency simply to increase competency? Training for training’s sake? So, my concern here is that even if we increase competency in these areas of “need”, nothing of measurable value will improve.

Fourth, what is the time frame? Put another way, how exactly is the question being asked of leaders about future skills? Is it about “future skills” with no time period specified, or is it about skills that will be needed in one year, five years, or ten years? When does the “future” start? It seems to me that the further out we go, the more likely we are to mis-identify the actual needs, especially given the rapid pace of change. Even if we could identify a need one or two years in the future, do you really want to develop a course today to address it, given that the need may evolve? Moreover, if you begin upskilling employees today, they are not going to have an opportunity to apply the new skill for one or two years, by which time they will have forgotten what they learned.

Here is my suggestion for discussion. Doesn’t it make more sense to focus on the skills needed today and in the very near-future and to deploy the learning at the time of need rather than months or years ahead of time? Ask leaders what skills their employees need today or will need in the very near-future, and then follow up with a proper needs analysis to confirm or reject their suggestions. If we can do a better job meeting the current and very near-future needs of the workforce, won’t our employees always be well positioned for the future? And isn’t a reskilling/upskilling initiative tailored to specific employees and delivered at the time of need likely to be far more impactful than an approach based on vague needs for a large population delivered months or years before the new skills will actually be used?