CTR Exchange and Blog

The Portfolio Approach to Managing Learning

I just listened to a great webinar by Cristina Hall, Director of Product Strategy for Metrics That Matter (MTM), an organization that specializes in helping companies gather, analyze and report data on learning. They have developed a very helpful approach for thinking about and managing the mix of courses an organization offers.

At CTR we spend most of our time focusing on the key programs in support of key organization goals as well as the key initiatives of the L&D department head to improve internal efficiency and effectiveness. Consequently, we advocate the use of program reports by the program manager and the goal owner which contain detailed data on the programs in support of key company goals like increasing sales. We also recommend summary reports for sharing with the CEO and other senior leaders to show alignment of learning to their goals and the expected impact from that learning. Last, we have operations reports for the L&D department head to use in managing improvement of a few select efficiency and effectiveness measures. So, the focus is primarily on targeted programs which are actively managed on a monthly basis to deliver planned results.

But what about all of the other courses an organization offers? Some companies have hundreds of courses in their learning management system (LMS). Most are not strategically aligned to the top goals of the company and are not actively managed, although they may be reviewed periodically. While not strategic, they can still be very important, and organizations want to be sure they offer the right mix of courses and that the offered courses add value. So, the question is, “How should all of these programs be managed?” This is where the MTM approach offers a very valuable contribution to the field.

MTM recommends assigning each course to one of four portfolios or categories. The four portfolios are: Drive Growth, Operational Efficiency, Mitigate Risk, and Foundational Skills. The portfolios reflect the reason for the learning and force the L&D group to answer the question of why the course is being offered. Is it to improve revenue? If so, it falls in the Drive Growth portfolio. Is it to reduce cost or improve productivity? If so, it falls in the Operational Efficiency portfolio. Is it to ensure compliance with regulations or standards or reduce the organization’s exposure to potential harm? If so, it falls in the Mitigate Risk portfolio. All other courses are assigned to the Foundational Skills category which would include all types of basic and advanced work skills, from very specific job training to communications skills and team building.

The beauty of this approach is in its simplicity. We could imagine more portfolios, and we could argue that some courses may fall in more than one portfolio, but assigning each course to just one of four portfolios forces the question of what the course is primarily designed to accomplish. Once all the courses have been assigned, imagine a grid with four quadrants, one for each portfolio. Now populate the box for each quadrant with measures that will help you manage your portfolios.

First, show the percentage of courses and hours, as well as the amount of L&D budget, dedicated to each portfolio to determine if your mix is right. For example, it may be that 5% of the L&D budget is being spent on driving growth, 35% on improving efficiencies, 10% on risk and 50% on foundational skills. This may be the right mix, especially for an L&D department with responsibility for a lot of basic skills training. On the other hand, a CEO might look at the mix and be surprised that more effort isn’t being allocated to driving growth and improving efficiency. Just sharing this data can lead to a great discussion about the purpose of L&D and its priorities.
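For readers who want to automate this, the mix calculation can be sketched in a few lines of Python. The course names, hours, and budget figures below are purely illustrative, not data from MTM or any real LMS:

```python
from collections import defaultdict

# Hypothetical course catalog: (name, portfolio, hours, budget in dollars)
courses = [
    ("Consultative Selling", "Drive Growth", 16, 120_000),
    ("Lean Basics", "Operational Efficiency", 8, 60_000),
    ("Safety Refresher", "Mitigate Risk", 4, 30_000),
    ("Business Writing", "Foundational Skills", 6, 90_000),
]

# Accumulate course count, hours, and budget per portfolio
totals = defaultdict(lambda: {"courses": 0, "hours": 0, "budget": 0})
for _, portfolio, hours, budget in courses:
    totals[portfolio]["courses"] += 1
    totals[portfolio]["hours"] += hours
    totals[portfolio]["budget"] += budget

# Express each portfolio's budget as a share of the total L&D budget
grand_budget = sum(t["budget"] for t in totals.values())
for portfolio, t in totals.items():
    share = 100 * t["budget"] / grand_budget
    print(f"{portfolio}: {t['courses']} courses, {t['hours']} hrs, {share:.0f}% of budget")
```

The same grouping could be run against an export from your LMS to populate the four-quadrant grid.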

Measures could be added for each portfolio to show the number of participants, level reaction, level 3 application, and some indicators of level 4 impact or outcome. The data could also be color coded to show comparison to last year or better yet to plan (or target) for this year. This portfolio approach can also be incorporated directly into TDRp’s operations report where the four portfolios serve as headings to organize the measures.

In conclusion, I strongly recommend an approach like MTM’s to better understand the existing mix of courses and to ensure alignment with the priorities of the CEO in terms of budget and staffing. I also believe the portfolio approach will be helpful in monitoring the courses throughout the year for replacement or revision.

Great Advice From the Kirkpatricks

By Dave Vance

In their July 25 newsletter the Kirkpatricks ask if you are a Training Fossil. I love the imagery and their message. Here is what they say:

“The role of training professionals has changed as technology has advanced. Fifteen years ago, it may have been enough to design or deliver a great training class. If that’s all you do these days, however, you are in danger of being replaced by a smartphone app or other technology that can do the same things quickly and for free, or nearly free.

In a similar way, training professionals need to design and build “roads and bridges” into the job environment before, during and after training, and then monitor them regularly. Instructional designers need to do more than design formal classroom or virtual instruction. Trainers need to do more than deliver it.”

Their concern is a good one and I think in the near future we will see aggregators who can provide generic content on any platform so efficiently that internally developed content will not be able to compete. So, if a training department is focused solely on providing generic content, often in an effort to boost employee engagement, it risks becoming extinct just like the dinosaurs.

Unlike the dinosaurs, however, we can avoid extinction by making sure we are adding true value to our organizations. As Jim and Wendy explain, this goes far beyond providing generic content. It starts with understanding your organization’s goals and priorities and includes establishing a very close, strategic relationship with senior business leaders. Put simply, start with the “business” end in mind, which the Phillipses and all leading authors in our field recommend. Next, design the entire learning experience and define the roles that the business goal owner and L&D must play in order for the learning to be appreciated, applied and impactful. This starts by engaging the business goal owner to agree on reasonable expectations for the learning if both of you fulfill your roles. Neither party can make the learning a success on its own. While the learning department will play the biggest role in designing and delivering the learning, the business goal owner and supervisors will need to play the biggest role in communicating the need for the learning and then reinforcing the learning to ensure application.

Roy Pollock describes what is necessary for successful learning in The Six Disciplines of Breakthrough Learning, which I highly recommend for understanding learning as a process and not an event. In a similar fashion, Jack and Patti Phillips are emphasizing the importance of design thinking in their latest books. All agree that learning cannot be just an “event”. For true impact it must be carefully planned and managed in close partnership with the business, and the follow-up after the “event” to ensure application and impact is just as important as the design and delivery of the “event” itself.

This process oriented approach coupled with a close strategic partnership with the business is what will allow the profession to add true value. It also will prevent us from becoming extinct since we add value at every stage of the process from consulting with the business upfront to helping the business reinforce the learning with their employees on the backend. While we may purchase generic content where it makes sense, our value add goes far beyond the design and delivery of content. We will not be in danger of becoming a training fossil as long as we focus on partnering with the business to meet their needs by delivering the entire learning experience required to make a difference in results.

Impact and ROI of Learning: Worth Pursuing or Not?

Several articles in the last month have suggested the profession should step back from trying to isolate the impact of learning and the resulting return on investment (ROI). The authors argue that it is difficult or impossible to isolate the impact of learning from other factors, and that no one will believe the results anyway, so why bother. Instead, they call for showing the alignment of learning to business goals, focusing on the easier-to-measure levels 1-3 (participant reaction, amount learned, and application), and finally focusing on employee engagement with learning (consumption of learning and completion rates). Aligning learning to business goals and measuring levels 1-3 are always good ideas, so no disagreement there. And, depending on your goals for learning and the type of learning, measuring average consumption and completion rates may also make sense. However, for certain types of learning there is still a need to measure impact and ROI (levels 4 and 5).

The primary reason is that senior corporate leaders (like the CEO and CFO) want to see it, and so should the department head and program director. Research by Jack Phillips in 2010 showed that CEOs most want to see impact and ROI but instead are often provided with only participation data (number of learners, number of courses, etc.), participant reaction (level 1) and cost. While these measures are helpful, they don’t answer the CEO’s question of what they are getting for their investment. CEOs and CFOs want to know what difference the training made and whether it was worth the time and effort. The program director and CLO should also be curious about this, not from a defensive point of view (like proving the value of training), but from a continuous improvement perspective where they are always asking what was learned from this project and how to improve next time.

It is true that level 4, the isolated impact of learning on a goal, is harder to determine than levels 1-3. Sometimes there will be a naturally occurring control group which did not receive the training. In this case, any difference in performance between the two groups must be due to the training. In other cases, statistical analysis like regression may be used to estimate the isolated impact of the training. The most common approach, however, is participant and leader estimation, which is generally good enough to roughly determine the impact of learning and definitely good enough to learn from the project and to identify opportunities for improvement. In a nutshell, the methodology calls for asking participants to estimate the impact from just the training and to also share their confidence in that estimate. The two are multiplied together to provide a confidence-adjusted estimate of the isolated impact of learning. For example, one participant may say that the training led to a 40% increase in performance (like higher sales) but is only 50% confident in the 40%. The confidence-adjusted impact would be 40% x 50% = 20% increase in performance. Repeat for others and average. Best practice would be to ask supervisors what they believe as well. Then share with the initiative’s sponsor and other stakeholders and modify as necessary. Once the level 4 isolated impact is determined, ROI is very straightforward.
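The estimation arithmetic is simple enough to sketch in code. The survey responses below are hypothetical, and the simple average is just one way to aggregate; practitioners may also trim outliers or weight responses:

```python
# Hypothetical responses from the participant-estimation method:
# (estimated performance improvement due to training, confidence in that estimate)
responses = [(0.40, 0.50), (0.30, 0.70), (0.20, 0.90)]

# Multiply each estimate by its confidence, then average across participants
adjusted = [impact * confidence for impact, confidence in responses]
avg_isolated_impact = sum(adjusted) / len(adjusted)

print(f"Confidence-adjusted isolated impact: {avg_isolated_impact:.1%}")  # 19.7%
```

The first response reproduces the example above: a 40% estimate at 50% confidence contributes a 20% confidence-adjusted impact.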

The Existential Question: Why Do We Measure?

There are several excellent answers to the question: Why Do We Measure? And, the answers will provide much needed direction to your measurement strategy. The most common answers are to:

  1. Answer questions
  2. Show results
  3. Demonstrate value
  4. Justify our budget (or existence)
  5. Identify opportunities for improvement
  6. Manage results

We will examine each in turn and comment on the implications of the answer for your strategy.

The most basic reason for measuring is to answer questions about programs and initiatives. For example, someone wants to know how many courses were offered in the year, how many participated in a particular course, or what the participants thought about the course. Assuming this information is already being collected and stored in a database or captured in an Excel spreadsheet, simply provide the answer to the person who asked for it. If it is a one-time-only (OTO) request, there is no need to create a scorecard or dashboard. If someone wants to see the same information every month, then it does make sense to create a scorecard to show the data by month.

The second most common reason is to show results. L&D departments produce a lot of learning each year and usually want to share their accomplishments with senior management. In this case, results generally translate to activity, with departments sharing their results in dashboards or scorecards which show measures by month (or quarter) and year-to-date. The scorecard might also show results for the previous year to let senior management know the department is producing more learning or improving.

The third reason is to demonstrate value. It is also very common and is just an extension of the second. Some believe that simply showing results demonstrates value while others believe that demonstrating value requires a comparison of activity or benefit to cost. For example, a department might demonstrate value by calculating cost per participant or cost per hour of development and showing that their ratios are lower than industry benchmarks or perhaps last year’s ratios.  Some adopt a higher standard for value and show the net benefit or ROI of a program. Net benefit is simply the dollar value of the impact less the total cost of the program. Any value above zero indicates that the program has more than paid for itself. Return on investment (ROI) is simply net benefit divided by total cost expressed as a percentage, and any positive percentage indicates the program more than paid for itself. Measures to demonstrate value are usually shared at the end of a program or annually rather than monthly.
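As a worked example of the two formulas above, with hypothetical dollar figures:

```python
benefit = 250_000      # hypothetical dollar value of the program's impact
total_cost = 200_000   # hypothetical fully loaded program cost

net_benefit = benefit - total_cost     # any value above zero means the program paid for itself
roi = net_benefit / total_cost * 100   # net benefit divided by total cost, as a percentage

print(net_benefit, f"{roi:.0f}%")  # 50000 25%
```

Here the program generated $50,000 above its cost, a 25% ROI, so it more than paid for itself.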

The fourth reason is to justify the L&D budget or the department’s existence. This is an extension of the third reason, where justification is the motive behind the demonstration of value. In my experience this is almost always a poor reason for measuring. Typically, a department measuring to justify its budget is a department which is not well aligned to the goals of the business, lacks strong partnerships with the business, and has poor or nonexistent governing bodies. Not only is it a poor reason for measuring, but the effort in most cases is doomed to fail even with high ROIs. In this situation, energy would be better spent addressing the underlying problems.

The fifth reason is to identify opportunities for improvement. This is a great reason for measuring and indicates a focus on continuous improvement. In this case scorecards may be generated to show measures across units, courses and instructors with the goal of discovering the best performers so that the lessons learned from them can be broadly shared. There may also be a comparison to best-in-class benchmarks, again with an eye toward identifying areas for internal improvement. Another approach would be to create a scorecard, graph or bar chart with monthly data to determine if a measure is improving or deteriorating through time.

The last reason to measure is to manage. This is perhaps the most powerful, and least appreciated, reason to measure. A well-run L&D department will have specific, measurable plans or targets for its key measures. These plans will be set at the start of the fiscal year, and the L&D leaders will be committed to delivering the planned results. By definition, this approach requires the use of measures, and time will be spent selecting the appropriate measures to manage and then setting realistic, achievable plans for each. Once the plans are done, reports need to be generated each month comparing year-to-date results to plan in order to answer two fundamental questions: 1) Are we on plan, and 2) Are we likely to end the year on plan? If the answer is “no” to either question, the leaders need to take appropriate action to end the year as close to plan as possible. In this case, reports must be generated each month showing plan, year-to-date results, and ideally a forecast of where each measure is likely to end the year.
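A minimal sketch of the two monthly questions, assuming a straight-line plan and a naive run-rate forecast (all figures hypothetical; real plans are often seasonal rather than straight-line):

```python
months_elapsed = 6
ytd_actual = 4_800     # hypothetical year-to-date result, e.g. participants trained
annual_plan = 10_000   # hypothetical plan for the full year

# Question 1: are we on plan year to date? (straight-line plan assumed)
ytd_plan = annual_plan * months_elapsed / 12
on_plan_ytd = ytd_actual >= ytd_plan

# Question 2: are we likely to end the year on plan? (naive run-rate forecast)
forecast = ytd_actual / months_elapsed * 12
on_plan_year_end = forecast >= annual_plan

print(on_plan_ytd, on_plan_year_end)  # False False -> management action needed
```

In this example the department is 200 participants behind plan at mid-year and forecast to finish 400 short, so leaders would take corrective action now rather than at year end.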

A good measurement and reporting strategy will address all the reasons above except number four (justify existence). Reports, dashboards or scorecards are required in some cases but not in others, just as monthly reporting is required in some cases but not in others. If there are limited resources, it is best to generate regular reports only for those measures to be actively managed (reason six) or those measures used to show results (reason two). Reports can be generated on an as-needed basis in other cases, and most measures can simply be left in the database until they are needed to answer a specific question.

What Finance and Accounting Should Expect from HR

Finance and accounting often do not hold HR to the same standards as other departments because it is believed that HR is different and people initiatives cannot be measured or managed like other business initiatives. However, having different standards for HR is a disservice to both HR and the company, resulting in lower expectations and less return from HR expenditures.  HR professionals are quite capable of playing by the same rules as their colleagues in other departments and can deliver tremendous business value, but finance and accounting professionals need to have a realistic sense of what may be expected from HR.

I believe that HR, including L&D, should be held to the same business standards as other departments. This means that finance and accounting should expect the same from HR as any other department – not more, not less – just the same. At a minimum, finance and accounting should have five expectations for HR.

First, since HR is a support function, it should align some of its initiatives to the organization’s top business goals. This means that HR, through its many departments like L&D, talent acquisition and total rewards, should have programs or initiatives that directly support business goals like increasing sales, reducing defects, improving the quality of patient care, etc. Of course, HR will also have programs in support of HR goals like increasing employee engagement, improving leadership and reducing regrettable turnover. While these programs are very important and will indirectly contribute to achieving all the business goals, the expectation should be for HR to directly contribute to the business goals as well, which it can easily do.

The second expectation is that HR will prepare a business case for major programs or initiatives, just like other departments are required to do. The business case will bring together the planned benefits and costs of the initiative, making explicit all the key assumptions and risks. The business case should include the HR ROI (return on investment) which is simply the net benefit divided by the total cost, which is commonly used in L&D. This will help finance and accounting make better decisions about funding and will allow comparisons among different HR requests, although it will not allow a direct comparison to the financial ROIs from other areas since the formulas are different for HR and financial return on investment.

The third expectation is that HR will create a business plan for the coming fiscal year, just like other departments do. The plan may be written or PowerPoint depending on the organization’s culture and should include at least an executive summary, a review of last year’s accomplishments, the business case for the coming year for at least the major initiatives, and the budget, staffing, and other resources required to deliver the plan. The plan will include specific, measurable goals for all key initiatives. The process of creating a plan is just as important as the finished product. A good business planning process will ensure the right questions have been raised and the right people have been involved to yield the organization’s best thinking about what can be accomplished in the coming year.

The fourth expectation is that HR will execute its approved business plan with discipline. Now that specific, measurable goals (targets, plans, KPIs – whatever you prefer to call them) have been set in the business plan, HR needs a process to ensure these planned results are delivered. Disciplined execution requires monthly reports comparing year-to-date (YTD) results to plan and ideally a forecast for how the year is likely to end compared to plan as well. The reports should include all the key measures identified in the business plan and should be used in a monthly meeting of senior leaders dedicated to actively managing results to come as close to plan as possible by the end of the year. Disciplined execution requires that two questions be answered every month: 1) Are we on plan year to date? and 2) Are we going to end the year on plan? Answers to these two questions will drive management efforts for the rest of the year.

The fifth expectation is that HR will be accountable for results. The approved business plan contained the specific, measurable goals for the year. Now HR needs to execute with discipline and be willing to be held accountable for achieving its plans, just the same as any other department.

These are the five most important expectations that finance and accounting should have for HR. Each is a realistic expectation that is being met by some HR departments today. Some in and outside HR have argued that HR is different and cannot meet these expectations. They claim that since HR works with people, specific and measurable goals cannot be set and HR initiatives cannot be managed the same way as in other departments. Consequently, they don’t believe in creating business cases or business plans or reports comparing progress to plan. As an HR insider, I believe they are wrong. HR can meet these expectations. HR is no more difficult to manage than sales, manufacturing, quality or other departments, and in some ways it may be easier since we often have more control over the outcome.

Meeting these expectations will strengthen HR’s seat at the table and vastly improve our credibility in the eyes of our colleagues. Let’s show them we can play by the same rules and realize our full potential to contribute to our organization’s success.

The Future of Measurement and Reporting for Learning and Development

2018 CTR Conference Summary

We had our best conference yet February 21-22 in Dallas. We had more than 90 participants despite widespread flight cancellations and delays due to terrible weather. The energy level, sharing, and participation more than made up for the weather.

Roy Pollock, co-author of The Six Disciplines of Breakthrough Learning, was our first keynoter and really set the tone for the conference by reminding people that learning must be connected to the business.  Our panel session before lunch gave Kimo Kippen, Adri Morales, Terrence Donahue, and Peggy Parskey an opportunity to share their thoughts on the greatest opportunities ahead of us. All agreed we live in a very exciting time for L&D and that advances in technology and learning platforms will allow us to reach more people in more impactful ways than ever before. Adri Morales, 2016 CLO of the Year, then stayed on the stage to share her personal journey and thoughts with the group.

The afternoon of Day One offered six more breakout sessions for a total of nine the first day. Predictive analytics, measuring informal learning, ROI Goal Setting, xAPI, spending levels, and the application of the 6D’s at Emerson were just a few of the topics that generated a lot of interest. Participants came back together late in the afternoon to learn about  a new maturity model for learning developed by Peggy Parskey, Cushing Anderson and Kevin Yates which was in turn used to help judge the First Annual TDRp Excellence Awards. The 2018 winner was USAA. We closed out Day One by having speakers meet with participants in the café for wine and lots of good discussion.

Day Two began with a keynote by Marianne Parry and Lee Webster on the brand new ISO standards for human capital reporting. The audience was very interested to hear about the recommended measures and next steps in the process to adopt these. More breakout sessions followed to engage participants on measurement strategy, isolating impact, level 3 design, alignment and the future of the corporate university. Leaders from Humana, Siemens and USAA also shared their measurement strategies and journeys. We wrapped up with a panel hosted by Patti Phillips to learn what Lorrie Lykins, Paul Leone and John Mattox saw for the future of measurement. Much like Wednesday’s panel, the group predicted a bright future for measurement enabled by technology, xAPI, and advances in the field, especially in predictive analytics.

Everyone left feeling recharged and excited after a day and a half of great speakers and provocative, informative sessions. And it just feels good to be around like-minded people who understand your challenges and who all want to improve.

Planning for next year’s conference is already underway. It will be February 20-21 in Dallas at the same venue. Patti Phillips, CEO of the ROI Institute, and Jeff Higgins, CEO of the Human Capital Management Institute (HCMI) will keynote and Lee Webster will return to give us an update on the ISO standards for human capital reporting. CTR is poised to help drive an ISO effort to establish TDRp as the framework for the types of measures and reports for our field so we will have an update on that as well.

Hope to see you in Dallas in 2019!

Alignment Revisited

Tim Harnett in a recent Human Capital Media Industry Insights article (undated) reminds us of the importance of aligning L&D initiatives to organizational goals. He shares some sobering research indicating that “only 8% of L&D professionals believe their mission is aligned to the company strategy” and “only 27% of HR professionals believe there is a connection between existing [learning] KPIs and business goals”. So, despite perennial intentions as a profession to do a better job of alignment, we still have a long way to go.

I am afraid, though, that he does not go far enough in recommending the actions we need to take. He suggests that “identifying and tracking KPIs related to L&D initiatives is the best way to align L&D to organizational goals and make the business case for development programs”. For KPIs (key performance indicators) he is thinking of measures like level 1 participant satisfaction, learning hours, level 3 application and employee satisfaction. While these are all important measures and can indeed help make the case for learning, they actually have nothing to do with alignment.

Here is why. Alignment is the proactive, strategic process of planning learning to directly support the important goals and needs of the organization. Alignment requires L&D leaders to discover the goals and needs of the organization and then go talk to the goal owners to determine if learning has a role to play. If it does, the two parties need to agree on the specifics of the learning initiative including target audience, timing, type of learning, objectives, cost, and measures of success (ideally the outcome or impact of the initiative on the goal or need). They must also agree on the mutual roles and responsibilities required from each party for success including communication before the program and reinforcement afterward.

Measures or KPIs will come out of this process, but the measures are NOT the process. It is entirely conceivable to have a learning program with great effectiveness and efficiency measures indicating many employees took it, liked it, learned it, and applied it, but the program was NOT aligned to the goals of the organization and should never have been funded. This is true even if it had a high ROI. Great numbers do not take the place of a great process, and you cannot conclude after the fact that alignment existed simply by looking back at measures or KPIs.

Conversely, you can easily imagine a program that is definitely aligned to one of the organization’s top goals but was executed so poorly that its effectiveness numbers came in very low. So, alignment is about the process of working with senior organizational leaders to plan learning initiatives which directly address their goals and needs. It must start with the organization’s goals and not with existing learning initiatives.

Last, there is much discussion these days about using employee engagement as an indicator of alignment. It is not, for all the reasons discussed above. It is simply another measure, not a process. For engagement to be an indicator of alignment you would have to assume that employees know the organization’s goals as well as the senior leaders do and that learning about those goals is the primary driver of engagement. Both of these assumptions are likely to be false. A focus on employee engagement would be appropriate only if engagement is the highest priority goal of the organization. In most organizations, business goals like higher revenue, lower costs, and greater productivity are more important, although higher engagement is always a good thing and will contribute indirectly to achieving the business goals.

In conclusion, I am happy to see a focus on this important topic of alignment, but success will require us to work the process of alignment with senior leaders. At the end of this process, every L&D department should have a one-pager listing the organizational goals in the CEO’s priority order and, whenever it is agreed that learning has a role to play, a list under each goal of the key learning programs that are planned. This is how you demonstrate alignment, not through KPIs or measures.

Personalized Learning: Means to an End or the End Itself?

The learning field is currently focused on personalized learning which might be defined as providing learners with individualized, custom content in the way each prefers to learn. Advances in digital learning and platforms combined with an explosion in learning content make this advancement not only possible but highly desirable. It has the potential to contribute significantly to better learning experiences and higher application rates leading to better outcomes. This said, there is a danger that some will consider personalized learning not just a strategy to improve learning but as the goal itself, the reason or mission for the learning department. This brings us to a discussion of means versus ends and the importance of keeping the two straight.

I would suggest that personalized learning is best considered a means to an end and that it will almost never be an end in itself. Over the past several years some in our profession have advocated that it is the end. They have redefined their mission as a learning department or corporate university to provide learners with whatever they want in whatever form they want it, which is an extension of our definition of personalized learning above. At its heart, this issue of means versus ends is far from a question of semantics; rather, it is a fundamental question about the reason for the existence of corporate training.

Imagine a discussion with your CFO or CEO. They ask what your strategy is for next year. You say it is personalized learning and that the majority of your resources will be dedicated to providing more and better personalized learning. They ask why. You tell them learners will be more engaged, will learn more, and will retain more. I guarantee you that in their mind you never really answered the question. Your answer is good as far as it goes, but it doesn’t get to the business reason for learning. You described a process improvement for them, one that will deliver learning more effectively and efficiently, and that is good but not enough. Basically, you are improving the means to an end by personalizing the learning, but they want to know what the end is. In their mind, the end may be higher sales, greater productivity or quality, fewer accidents, lower operating costs, or higher employee engagement, but you didn’t connect the dots for them. By not appreciating the difference between means and ends, you focused just on the means when you needed to also focus on the end. Better to tell them that you will improve learning in order to achieve higher sales, lower costs, or whatever their goals are. These are the ends they care about, and once they know you are working toward the same ends, they will be more receptive to your request for resources to improve the means (personalized learning).

As a profession, we must continue to make great strides in process improvement, and personalized learning is one such process improvement. But it is not and never will be an end in itself any more than e-learning, blended learning, or mobile learning are ends in themselves. We don't provide learning just to provide learning. The learning must serve a higher need. It must serve an end, and that end should be one of your organization's high-level goals or needs. With this understanding we also can see that personalized learning is not the opposite of company learning, which has been defined as learning directed by the company (not the employee) to meet company needs. Instead, personalized learning should support the company goals and needs even if it is directed or mandated by the company. If taken at the employee's discretion, it is most likely to improve employee engagement, which is a company goal in almost all organizations. If directed by the company, the personalized learning will support one of the other company goals like higher sales. So, personalized learning may be at the discretion of the employee or at the discretion of the company, but in either case, it is a means to an end.

Are You Ready for the Next Recession?

I know this seems like a silly question to ask, as the stock market continues to set new records and the unemployment rate is the lowest it has been in more than 10 years. However, speaking as a former economist, I know that this is precisely the time to ask the question because good times never last forever and the next downturn is likely to come without warning. Economists have a poor track record of accurately predicting recessions, so by the time a consensus believes that a recession is coming, we will probably already be in one. What we do know, though, is that we are well into the current expansion and soon we will be living on borrowed time. The last recession ended in June 2009, so we are now 8-plus years into the current expansion. The longest expansion in modern economic history was the 10-year run from 1991 to 2001. The next longest was the nearly 8-year run from 1982 to 1990. So, in another year or two we will hopefully be in record territory for the longest economic expansion in our history.

This means that a recession is due sooner rather than later. Are you ready for it? Have you done all that you can to demonstrate your value by carefully aligning your discretionary programs to the CEO's most important goals and then showing the impact of learning on those key goals? Do you have strong relationships with the goal owners and senior leaders so they will speak up on your behalf? Are you running your basic skills and compliance training as efficiently and effectively as possible, and do you demonstrate that to your senior leaders? Are you setting specific, measurable goals and plans for key measures and then using monthly reports to compare year-to-date progress against plan? Do your senior leaders see these reports and know you are running learning like a business?

If you're not doing these kinds of things now, why should your CEO continue your current level of funding and staffing when the next recession comes? Competition for funds and staff will be fierce, and every department may well see a reduction, but historically, L&D takes a disproportionately large hit, which reflects the CEO's lack of confidence in or understanding of the value L&D provides. So now is the time to put the business processes in place to demonstrate alignment, value, and rigor so that your senior leaders will have a better appreciation for the value added by learning and greater confidence in you as a learning leader. You cannot prevent the next recession, but you can position your department to weather the storm.

Now is the time to prepare.

The Future of Learning

At the May ATD International Conference in Atlanta one speaker said the following about learning and development functions: “We don’t own content and aren’t managing learning. That power long ago shifted from the learning department to individual employees.” The speaker was making the point that since learning has shifted to employees there is an opportunity for CLOs to assume broader responsibility for other talent processes such as talent acquisition and organizational development. While I agree that CLOs are well positioned by their experience to expand their scope and probably the best positioned to lead an integrated talent acquisition and development function, the broad assertion about the end of L&D as we know it still troubles me.

The statement may be true for an L&D department which has responsibility only for general learning which is not aligned to any company goals other than perhaps employee engagement. In this case the L&D department would offer a catalog of courses, either internally developed or externally purchased, and employees would select what they want to improve or are interested in. Learning aligned with company goals other than employee engagement and learning for onboarding, basic skills training and compliance would be managed by their respective departments. For example, the sales department would manage all sales-related learning, quality would manage all quality-related learning, and HR would manage all compliance-related learning. In this model I would say that the L&D department never managed learning to begin with. They simply offered a catalog and tracked what employees took. The department did own the content, but I agree with the speaker that advances in digital learning offered outside the company will soon make this model obsolete. In the future, employees will be able to find all their general learning outside the company, and the L&D department will no longer own that content.

The statement, however, is not true now, nor should it be true in the future, for L&D departments which offer learning aligned to company goals or to important business or HR needs. In many companies the L&D department is responsible for the content and management of learning to increase sales, improve quality, reduce injuries, improve leadership, achieve compliance targets, onboard employees, provide basic skills, and meet other company goals. Why would a company have its L&D department abandon this important role as a strategic business partner? Why would it be better to have employees try to figure out on their own what they need to be successful or in compliance, especially new hires or employees new to a position, when experienced leaders already know what employees will need to be successful in their job? Furthermore, some knowledge is proprietary and simply not available outside the company. And let's face it, real impact on company goals requires more than access to content. It requires a well-conceived and executed program to target the particular need, convey the required knowledge or skills, and then reinforce the desired behavior. It actually takes a lot of effort to manage learning for results, none of which occurs in the employee self-directed model.

In conclusion, I agree that in the future we will not need to own general content which is unaligned to company goals and that employees will be able to find general content of interest to them outside the company. However, I do believe we should continue to own critical content and that there will always be an important role for L&D departments to manage learning aligned to company goals and needs.

Do We Need New Measurements and Reporting for the Digital Learning Revolution?

Digital learning is revolutionizing the corporate learning environment by making a vast amount of content available to learners. Some of this content can be accessed through internal learning management systems, but the fastest growth is likely to come from content available outside the organization’s firewalls and systems. This is content directly available to anyone with internet access. In a sense we have had digital learning since the first computer-based training (CBT) became available in the 1980s and even more so with the advent of e-learning (WBT) in the late 1990s, but the current revolution dwarfs these past initiatives in terms of breadth, reach, and sheer volume.

Most of our current measures and reports/dashboards were designed for traditional classroom learning and have been applied to e-learning as well, but some have always asked whether we need special measures for e-learning. The answer has basically been no. Traditional efficiency measures such as number of participants, classes, and hours, combined with traditional effectiveness measures for participant reaction, test scores, application, impact, and ROI, can easily be applied to e-learning. The old post-event survey was delivered at the end of class; with e-learning it is generally sent to participants immediately following the course. So, no revolution was required in measurement or in the reporting of those measures for e-learning.

The question we face now at the start of this new digital revolution is the same as the one at the inception of e-learning: will new measures and reporting be required? My initial take is that the answer remains the same as well. The standard efficiency, effectiveness, and outcome measures will still serve us well, and the three standard reports recommended by Talent Development Reporting principles (TDRp) will still help us to better manage our programs and departments as well as demonstrate our alignment to and impact on key company goals. However, the measurement strategy itself may have to change dramatically because an increasing amount of content will be accessed outside the company and outside any learning management system (LMS). Think of employees taking free courses at the Khan Academy and universities like MIT, learning from YouTube or a Google search, or taking advantage of aggregators like Coursera, Udemy, Udacity, and others. Since this learning takes place outside the LMS, there is no way to automatically capture efficiency measures like number of participants, courses, or hours, and there is no way to automatically send a survey to gauge satisfaction, amount learned, application, or other effectiveness measures, let alone any outcome measure.

Instead, much more thought is going to have to be given to what measures are really important for this new digital learning. Do we really need to know how many courses employees take outside the LMS? Do we really need to know how satisfied they were with each course or how much they learned? I don't think so. So, let's go back to basics and determine what we want to know, why we want to know it, and what we would do with the data if we had it. Learning and development is increasingly important to millennials, so an employer certainly has an interest in finding out whether employees are satisfied with their learning, but not necessarily with each micro instance of learning (like a Khan Academy course). Why not send a quarterly or semi-annual learning survey to employees (or expand your current employee engagement survey) to ask about their satisfaction with learning in general? You could ask about how they have learned in the past three months (company offerings, MOOCs, free universities, Coursera-type providers, Khan, etc.) and perhaps how many times per week they access different sources. A summary question would be something like, "How satisfied are you with all the opportunities you have to learn?" This should provide the information to determine whether employees are sufficiently engaged with learning.

In conclusion, the digital revolution is not likely to require new measures or reports, but is likely to require rethinking your measurement strategy with regard to data acquisition.

3 Principles of Effective Learning Metrics & Measurement

Contributed by Caveo Learning

As more and more talent development leaders take a serious look at implementing meaningful metrics and measurement across their learning and development organizations, the business relationships and conversations between L&D professionals and stakeholders are changing for the better.

There are times when talent development leaders need to root themselves in foundational principles of talent development metrics. It’s easy to get caught up in new thinking, models, and frameworks, and lose focus on the fundamentals of how to run a learning organization.

No matter what model, framework, system, tool, methodology, or new approach we want to adapt, adopt, and deploy, there are at least three fundamentals that we should never lose sight of—principles that should be applied to all learning metrics.

  1. Differentiate between metrics you actively manage and those you monitor.

The principle is so simple, yet so rarely considered. Just like in medicine, some “vital statistics” are always monitored—the moment something changes in the wrong direction, an alarm sounds, allowing for the “metric” to be actively managed. Similarly, when selecting your metrics, determine which metrics you intend to monitor, and which you intend to actively manage. Further, know what the target thresholds are for the monitored metrics that will trigger the “alarm” for active management. For example, you may want to monitor L&D staff utilization, and only actively manage it should the metric fall out of the acceptable range.

  2. Align to industry formulas, where practical.

Another issue that often comes up is defining the specific formula for a metric. Frequently, the formula is not considered deeply enough to add the full value that it can. This can result in metrics with little credibility or validity, and which should not be informing any decisions. It’s true that each organization has its own characteristics, its own language, and specifics that need to be considered. However, using a standard formula defined by an existing industry benchmark can be very helpful when planning your metric strategy, goals, and even budget. Industry benchmarks are only valuable for planning and comparison if you are comparing apples with apples—to do that, the formulas need to match.

CTR does a fantastic job in its L&D metrics library of not only providing the recommended formula, but also noting which industry organization defined it, or even which other industry benchmark it is similar to. A good balance between very-company-specific and industry-aligned formulas will allow for at least some comparison, planning, and target setting against industry metrics. Whether it is CTR’s Talent Development Reporting Principles (TDRp), Bersin by Deloitte, Training magazine, ATD, SHRM, or any others, consider aligning at least some meaningful metrics to an industry definition and formula. With our above example of staff utilization, one TDRp formula we could use is “Learning staff, % of total hours used.”

  3. Measure to inform decisions and actions.

Whether you are monitoring or managing a particular metric, there must be an underlying business reason to do so. Having a metric that does not inform a decision or an action is of no value. Also, consider why you are measuring something and what that metric will influence. Learning metrics are there to help us improve what we do as L&D professionals. Whether it is improving the efficiency of our learning organization, the effectiveness of our learning solutions, alignment to our stakeholders, or the contribution our interventions have on the strategic business goals of the organization, metrics play a critical role in influencing the value we bring to our organizations and, almost more critically, the credibility of L&D in the eyes of external stakeholders and the C-suite.

It’s better to have a few good metrics that inform meaningful decision making and allow for agility in improving the value L&D brings to your organization, rather than hundreds of metrics that offer limited value. Sticking with our example, we could monitor the utilization of learning staff and start actively managing this metric should the percentage rise above, say, 90%, by engaging a learning solutions provider to assist with some of the workload until it returns to the acceptable range.
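The monitor-versus-manage distinction above can be sketched in a few lines of code. This is a minimal illustration, not anything prescribed by Caveo or TDRp; the 70–90% acceptable range and the monthly utilization figures are assumed values for the example.

```python
def needs_active_management(value, low=0.70, high=0.90):
    """Return True when a monitored metric falls outside its acceptable range."""
    return value < low or value > high

# Hypothetical monthly learning-staff utilization readings (fractions of capacity)
utilization_by_month = {"Jan": 0.82, "Feb": 0.88, "Mar": 0.93}

# Monitored months stay quiet; only out-of-range months trigger the "alarm"
flagged = {month: pct for month, pct in utilization_by_month.items()
           if needs_active_management(pct)}
print(flagged)  # only March exceeds the 90% ceiling
```

The point of the sketch is that most readings never demand attention: the metric is merely monitored until a threshold breach flags it for active management.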

Learning leaders are starting to take more notice of the deep value that metrics can bring toward the constant improvement of everything we do in L&D. No matter what the specific model, framework, and approach your organization chooses for learning metrics, there remain some fundamental principles that will help ensure that we ultimately have a metrics strategy that guides us, helps us improve, changes conversations with our stakeholders, and increases our credibility as business leaders.

Learning metrics are our friend, our source of feedback and intelligence, ensuring we are constantly focused on maximizing the value we bring to our organizations.

Caveo Learning is a learning consulting firm, providing learning strategies and solutions to Fortune 1000 and other leading organizations. Caveo’s mission is to transform the learning industry into one that consistently delivers targeted and recurring business value. Since 2004, Caveo has delivered ROI-focused strategic learning and performance solutions to organizations in a wide range of industries, including technology, healthcare, energy, financial services, telecommunications, manufacturing, foodservice, pharmaceuticals, and hospitality. Caveo was named one of the top content creation companies of 2017 by Training Industry Inc. For more information, visit www.caveolearning.com

Take a Business-Minded Approach to Sourcing Learning Partners

By: Gary Schafer
President, Caveo Learning

One of the most important tasks talent development leaders face is selecting outsourcing partners and product vendors. It also happens to be one of the most daunting.

The learning and development organization’s relationship with providers can be a major factor in the success of the business. Learning leaders must weigh many variables in the purchasing process, from the factual (pricing, experience) to the intangible (flexibility, dedication).

Navigating the procurement process can be tremendously difficult. How can a provider’s ability to flex with the challenging demands of the business be analyzed through a formal procurement process? How does one tell if an external learning partner is going to react to changing environments and truly be aligned with the business? How is the commitment of the supplier to the mission, vision, and values of the business measured? What about issues of scalability, global capability, and communication?

How to Develop a Learning Sourcing Strategy

Start by establishing a factual market intelligence base—understand the array of variables around learning services, from expertise to capability, and the rates associated with those services. Create your supplier portfolio, determine a list of qualification criteria, and then winnow your list of potential suppliers.

Identify the types of services your organization needs—learning strategy, audience analysis, curriculum design, instructor-led training, eLearning, change management, etc. Then, determine the demand across the organization, broken out by role.

Next, perform learning-spend and target-cost analyses. You’ll want to analyze supplier rates using both internal and external data sources.

  • Conduct some internal benchmarking with the same supplier—are rates equal across multiple projects? Does the firm charge premiums based on experience? Is pricing consistent across roles?
  • Do the same internal benchmarking across multiple suppliers. Find out if rates are similar for comparable roles and skill levels at different organizations.
  • Develop some external benchmarks using publicly available market data, comparing rates from market analyses and publicly available salary data. Factor for loaded cost (about 1.3 times base salary) to cover benefits, PTO, company-paid taxes, etc., and also allow some room for supplier margin.
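The external-benchmark step above can be worked through numerically. The sketch below is purely illustrative: the 1.3x loaded-cost factor comes from the bullet above, but the salary, billable hours, and supplier margin are assumed values, not figures from any supplier.

```python
def benchmark_hourly_rate(base_salary, loaded_factor=1.3,
                          billable_hours=1800, supplier_margin=0.35):
    """Estimate a defensible supplier hourly rate from public salary data."""
    loaded_cost = base_salary * loaded_factor      # benefits, PTO, company-paid taxes
    cost_per_hour = loaded_cost / billable_hours   # fully loaded hourly cost
    return cost_per_hour * (1 + supplier_margin)   # room for supplier margin

# e.g., a hypothetical $90,000 base salary for an instructional designer
rate = benchmark_hourly_rate(90_000)
print(round(rate, 2))  # → 87.75
```

A quoted rate far above a benchmark built this way invites a conversation about what extra value justifies the premium; a rate far below it may signal quality or sustainability risk.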

Investigate the Intangibles

Before moving forward with sourcing service partners and learning products providers, conduct a supplier quality assessment. Create quality profiles for each prospective partner by evaluating the following factors:

  • Strategic planning—Will they assist with communicating and messaging to senior stakeholders throughout the business? Do they provide intelligence around industry trends and best practices? What do they offer in the way of post-project analysis, such as lessons learned and next-steps recommendations?
  • Experience—Does their team have expertise in relevant areas? What is the screening process for potential employees and contractors? How cohesive is the team, and how closely do they adhere to defined processes?
  • Responsiveness—Do they show a commitment to your business needs through effective communication? Is there a proven track record of adherence to deadlines? Do they offer strategic learning support?
  • Quality of work—What processes are in place to ensure solid instructional design? What methodology is used for quality reviews (test plans, style guides, etc.)? Does the supplier set clear review expectations for training deliverables and correct issues in a timely manner?
  • Flexibility—Do the suppliers have the ability and willingness to react to new or changing business needs? What is the capacity to scale a team with appropriate roles and volume?
  • Support—Do they have ready access to tools and templates necessary to speed development? Are there documented methodologies for executing common tasks?

Pricing as a Determining Factor

Suppliers’ rates often (though certainly not always) correlate with their quality profiles. If budget were no object, the learning sourcing strategy would simply mean identifying the most qualified provider; of course, budget is oftentimes the overriding factor, and so we must balance the need for quality with fiscal realities. Thus, try to optimize supplier usage based on rates, hiring high-quality suppliers for critical projects and project management, and settling for more cost-effective options, such as staff augmentation, for lower-importance projects and commodity roles.

Likewise, negotiate for spending reductions based on the type of support required. Each of the three main services models comes with its own potential cost-savings.

  • A project-based agreement can reduce per-unit costs and may be further cost-efficient due to greater opportunity to leverage offshore assets.
  • Contract labor ensures compliance across the organization, and the consolidated labor structure means negotiated volume rates are an option.
  • A managed services model will likely have optimized service levels and adjustable staffing levels, along with efficiencies gleaned from custom redesigned processes.

Be cautious with regard to electronic procurement. There are several tools now available and in use by procurement professionals to streamline the proposal process, but they are not always ideal for learning organizations. These tools are great for collecting uniform responses efficiently, but for learning services, not everything is uniform. E-procurement tools often create barriers that keep providers from telling their full story, essentially reducing the proposal process to 100-word bites of information that make it difficult to recognize a provider's true value. A good procurement professional can get around some of these limitations by still offering face-to-face meetings with the top candidates under consideration.

Finally, take care to negotiate a favorable agreement with the external learning partner. Have a negotiating strategy before initiating contract talks, and be ready and willing to walk away if a better partnership can be found elsewhere.

Creating and implementing a sourcing strategy for learning services will greatly reduce the stress of the procurement process, helping identify the ideal partners for your learning organization's initiatives while optimizing budget. Far from being an intimidating task, sourcing service partners can, with a well-prepared strategy, become a cornerstone of your organization's success.

Gary Schafer is president of Caveo Learning, recently named a top custom content company by Training Industry Inc. Gary was formerly a management consultant at McKinsey & Co., and he holds an MBA from Northwestern University’s Kellogg School of Management.


Will Corporate Universities Become Extinct?

Michael Fernandes, a friend of mine who is presently conducting an international survey on corporate universities, recently posed this question. He asked whether corporate universities would go the way of the dinosaur. Just as an asteroid brought an end to the age of dinosaurs, is it possible that the digital revolution in learning currently underway may bring an end to corporate universities as we know them today? (By “digital revolution” I am referring to the exploding amount of learning content available to employees through the web outside the organization’s learning management system.)

It is a great question and obviously a very important one. I think the answer depends on the mission of the corporate university. If the mission is simply to provide learning opportunities for employees then I think the answer may be yes. A significant number of corporate universities today have just such a mission. Some provide learning as a mechanism to increase employee engagement or improve attraction and retention of the best employees. Others simply want to provide learning opportunities for their employees without regard to higher motives. In either case, the digital learning revolution over the next five years is likely to produce learning aggregators who can provide more learning opportunities at lower cost and with greater ease than any corporate university. Presently, corporate universities are busy looking for ways to make it easier for their employees to find and access this content, and the new term capturing this role is “curator” which is meant to convey the shift away from learning creation. It seems to me that this role is not sustainable in the long run for each corporate university. Industry aggregators, working at scale globally, will be able to provide this service much less expensively and at the same time offer a much greater breadth of learning opportunities. Why, then, would an organization continue to fund a corporate university to provide learning opportunities?

However, if the mission of the corporate university is to help the organization achieve its business goals, then there will always be a place for it. First, organizations often want customized programs which will require management from the corporate university even if an outside provider is selected to help. Second, some of the content is likely to be proprietary and will never be available from outside providers or aggregators. Third, even if the desired content were available outside the organization, there will always be an important role for the corporate university to work with the stakeholder to determine whether learning is the solution, identify the right target audience, set the appropriate learning objectives and content, and, most importantly, work hand-in-hand with the stakeholder to ensure the learning is transferred in the workplace. The issue of learning transfer or application, so vital to results, cannot be accomplished by an aggregator, so this will remain the realm of the corporate university.

So the corporate university need not suffer the fate of the dinosaurs. A good corporate university with a mission to help the organization achieve its business goals by partnering effectively with stakeholders and senior leaders should live long and prosper. On the other hand, a corporate university whose only mission is to provide general, unaligned learning for its employees faces a much less certain future. In fact, it is hard to imagine how it will survive. This should serve as a warning for all those who are currently rushing toward becoming a first-class curator of digital content. If this is your only mission, you are likely rushing towards extinction.

Reskilling Employees: Is This Our Next Priority?

I saw two articles this last week on the urgent need to reskill our employees. The articles talked about the rapid and profound changes underway in the world which are driving the need to equip our workers with the new skills necessary to succeed. I am sure you have seen similar articles and I think we’re going to hear more about this urgent need to reskill our employees.

Now if you know me at all, you know that as soon as I hear a brand new phrase or as soon as everyone appears to be jumping on the bandwagon for the next new thing, I like to ask some basic questions. I suppose some would just attribute this to my being conservative, resistant to change, or just plain “old fashioned”. Naturally, I don’t think of myself that way, and in fact I like taking risks and I wholeheartedly embrace the future. That said, I think there is value sometimes in just slowing down for a minute to do a high-level “needs analysis”.

So exactly what need is driving this proposed solution? Yes, the world is changing and the pace of change continues to accelerate. And I totally endorse the notion that for the United States as a whole there is need to reskill workers whose skills have become obsolete. These are workers who had employable skills but now, through no fault of their own, there is no longer any demand for these particular skills. These workers do indeed need to learn new skills to re-enter the workforce and once again contribute to the economy. There are, of course, other workers who never were skilled to begin with. This group needs skills as well but we would not say they need to be “reskilled”. Rather they need to acquire skills for the first time. So I would agree that at the macro level there is certainly a need to both “skill” and “reskill” workers in our country. This is the province and the mission of local and regional workforce development boards throughout the country.

But how about at the company level? Do most of our organizations have this same need to reskill workers? I don’t think so. If a company is acting rationally, it is paying its employees because those employees are using their existing skills to deliver value that is at least equal to their compensation. So, existing employees have skills of value to their employer. Now a good learning and development function can find opportunities to “upskill” employees so they deliver even greater value to the organization, and the best organizations are doing this all the time through leadership training, sales training, lean processes, and a host of other great programs. The point is that upskilling is not the same as reskilling, and we have been upskilling for the last 60 years.

Might there be some employees whose skills are about to become obsolete who would be good candidates for reskilling? Absolutely. Think of those in manufacturing whose manual skills are being replaced by robotics. The question for us, then, is how many of our employees fall in this category? I don’t think the number is going to be large and certainly not large enough to make reskilling our next priority. First, some of these employees will leave voluntarily, perhaps hoping they can still apply their skill elsewhere. Second, some whose skills are becoming obsolete may not have the talent or desire to be reskilled in the skills that your organization will need moving forward. Third, some of these employees may simply not have the other attributes (aside from talent) the organization now looks for in new employees and it will be best to part ways. So, how many does this leave in a typical organization for actual reskilling? Probably not too many.

To conclude, I don’t believe reskilling is the next priority for L&D in the corporate sector. I do believe it is critically important for our country and programs need to be funded to provide these new skills. But what is true for the country is not necessarily true for individual companies which for the most part today have workers with viable skills. There are critical shortages for certain positions and there may be opportunities to reskill some existing employees for these positions, but generally the required skills are so specific that it would be unrealistic to think that many of these shortages are going to be filled by reskilled internal candidates. So, is reskilling important? Yes, but more so for the economy as a whole than for most individual organizations.

Corporate L&D’s Share of Learning Declining: Should We Be Concerned?

By Dave Vance, Executive Director, Center for Talent Reporting

There is concern in the industry that a decline in formal learning is leading to a smaller share of the “learning market” for corporate L&D departments. The analogy is a marketing department’s concern over a declining share of the consumer’s wallet as other products or services displace their own. In our profession the worry is driven by a move toward employee-directed learning where employees are bypassing formal corporate training programs in favor of online learning opportunities offered by outside organizations as well as self-directed learning on the internet and informal learning through their own networks. Several studies in the past two years have confirmed the shift and it is certainly a growing topic for discussion in the profession.

So, should we be concerned about this shift? In general, I don’t think so. First, for the sake of those new to our profession, let’s be clear that it was never the goal of any corporate L&D department to control 100% of the learning for the organization’s employees. Nor was it ever a reality. There has always been informal learning going on outside the reach of a training department. Although I am not a fan of the 70:20:10 principle, it certainly reminds us that formal training plays an important but usually limited role. With regard to control, most organizations have always offered discretionary or elective learning opportunities, including tuition assistance, where employees were free to choose their own learning.

Now, back to the question. I think the answer really depends on the mission of the learning department. If the mission is narrowly defined to focus on providing consistent, uniform basic skills to employees, and if those employees are now getting inconsistent and perhaps incorrect information on their own which detracts from their performance, then, yes, we should be concerned. Think of basic skills training in industries like manufacturing, banking, or food services where the goal is for employees to perform tasks in certain defined ways. Similarly, if your focus is primarily on compliance, you don’t want employees getting a lot of incorrect information on the internet. In these cases the amount of formal training provided has probably not decreased but the informal has increased so the share of formal has declined. More to the point, you may have to work harder or redesign your formal learning to take into account what employees are picking up through their own channels.

If the mission is to provide learning to improve performance through additional training beyond the basics, then I think there is less reason for concern. Think of formal learning to improve the performance of marketing employees or most leadership development programs. As employees learn more about marketing or leadership on their own, the share of the knowledge they have gained on these topics from your formal training will decrease but this is not necessarily a bad thing. In fact, if they are actively looking to learn from outside resources that shows a high level of engagement. Like in the cases above, you may need to take these new sources of learning into account when you design your programs. You could incorporate the good sources that are consistent with your approach and refer to the issues or problems with the approaches not consistent with your formal teaching. In either case, the employees are probably going to learn more and be better prepared for the challenges they face on the job.

Last, consider a mission which is primarily to increase employee engagement. Here there is little reason for concern if employees are migrating to more learning opportunities outside your formal learning framework. Yes, your share of their total learning is decreasing but the good news is that they (hopefully) are finding what they want. In fact, in this case you would want to do all you can to help them find and access great learning – whatever the source. The easier you can make it for them, the more engaged they will be. Mission accomplished.

I think this shift toward a decreased share of learning for corporate L&D departments is indeed a permanent shift and the trend is likely to accelerate as more outside resources become available and as organizations make it easier for employees to access them. Some caution is required depending on the department’s mission and type of offerings, and the shift is likely to force changes in how formal learning is designed, which is fine. I think the future remains bright.

Where Does Reporting Fall on the L&D Maturity Spectrum?

By Dave Vance, Executive Director, Center for Talent Reporting

Maturity models show a journey from just beginning to collect data all the way through the use of advanced and sophisticated practices and processes. The models can be very useful in conveying the journey that must be followed from immature to mature and in helping organizations assess both where they are today and where they may wish to go in the future.

Unfortunately, since most practitioners don’t really understand what is required to run learning like a business, most of these models show reporting as the second or third step on what is often a five- or six-rung maturity ladder. Properly understood, reporting should be the next-to-highest rung. Let’s see why.

First, we need to define terms, which is where the problem begins. For most, reporting means the creation of monthly dashboards or scorecards which show ONLY actual results. And typically the measures are low-level efficiency and effectiveness measures like number of courses, hours and participants, and Level 1 results. You have all seen these. They may show monthly or quarterly data and may contain bar charts or line graphs. Dashboards may show a speedometer. If this is how we define reporting, then I agree it belongs on the second rung since it is merely capturing the most elementary data from the first rung and using it only to discern how the measure is trending. Authors of these maturity models often go to great lengths to contrast this low-level “reporting” with higher-level predictive analytics, which these days is almost the highest rung. So, the model encourages you to move beyond elementary reporting (which is good!) to higher-level measures (like Levels 3-5, which is also very good!) to big data and predictive analytics (which may or may not be so good), all the while never mentioning the use of reports for management of the department or program.

If we instead define reporting in line with Talent Development Reporting principles (TDRp), the whole model changes. In TDRp executive reports are NEVER simply a compilation of actual results (history). A report must include the plan (target or goal) for that measure, year-to-date (YTD) results, a comparison of YTD results to plan, and ideally a forecast of how the year is likely to end if no special (unplanned) action is taken. The sole purpose of the report is to answer the two basic questions every manager should ask for every important measure: (1) Are we on plan year to date? and (2) Are we going to end the year on plan? Trend for the year and comparison to previous years are interesting but largely irrelevant after the plan has been set for the year. Trend and history would have been used to set an achievable, reasonable plan, but after the plan is set, all that matters is whether you can achieve it. So, these reports are absolutely indispensable to managing key programs and the department as a whole. A good manager simply cannot manage without them.
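The report structure described above can be sketched in a few lines of code. This is only an illustration with hypothetical measure names and figures; TDRp prescribes the columns (plan, year-to-date results, comparison to plan, forecast), not these particular names, numbers, or thresholds.

```python
# Minimal sketch of a TDRp-style executive report row. All measures and
# values below are hypothetical, chosen only to show the report's shape.

def report_row(name, plan, ytd_plan, ytd_actual, forecast):
    """Build one report line answering the two questions every manager
    should ask: (1) Are we on plan year to date?
    (2) Are we going to end the year on plan?"""
    return {
        "measure": name,
        "plan": plan,                  # full-year target, agreed up front
        "ytd_actual": ytd_actual,      # year-to-date results
        "ytd_vs_plan_%": round(100 * ytd_actual / ytd_plan),
        "forecast": forecast,          # likely year-end if no special action
        "forecast_vs_plan_%": round(100 * forecast / plan),
    }

rows = [
    report_row("Sales increase attributable to training (%)",
               plan=10, ytd_plan=5, ytd_actual=4, forecast=9),
    report_row("Participants trained",
               plan=2000, ytd_plan=1000, ytd_actual=1100, forecast=2150),
]

for r in rows:
    on_plan_ytd = r["ytd_vs_plan_%"] >= 100
    on_plan_eoy = r["forecast_vs_plan_%"] >= 100
    print(r["measure"], "| YTD on plan:", on_plan_ytd,
          "| Year-end on plan:", on_plan_eoy)
```

Note that trend and history play no part in the row itself: they were used to set an achievable plan, and after that the only comparisons that matter are against the plan.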

In this light, good reporting should be near the top of the maturity model since it supports the active, disciplined management of the function which, in my opinion, is the top rung. I believe that this active management supported by good reporting will deliver FAR greater results and impact than big data and predictive analytics. In fact, it is not even close. Predictive analytics is typically used to discover relationships among measures which is great but usually impacts a small number of measures or projects. In contrast, the active management of all your key programs, by definition, will affect everything important you do for the year. In other words, if a CLO is trying to decide between investing in predictive analytics and disciplined management of the department using good reporting, my strong advice is to get your disciplined management in place first.  This will provide far more bang for the buck and has the potential to advance your career in management (versus in analytics).

Please take a second look the next time you see a maturity model to understand how the authors have defined the components. I bet they will have a “dumbed-down” definition for reporting and consequently have allocated it to a low rung. If you are constructing your own maturity model or journey chart, I challenge you to aspire to great management of the entire department as your highest rung.

Is Learning a Real Profession?

By Dave Vance, Executive Director, Center for Talent Reporting

One of the hallmarks of a profession is a standard language, an agreed-upon framework for measures, and common processes. Think of accounting. The profession has a common language and agreed-upon measures, statements, and processes. For example, accountants have four basic types or categories to organize their hundreds of measures. You will recognize these as revenue (or income), expense (or cost), assets, and liabilities. Most accounting measures can be grouped into one of these categories. Accountants place these measures into three standard statements, each with a different purpose. The three are the income statement (or profit & loss, P&L for short), balance sheet, and cash flow statement. The three taken together provide a holistic view of an organization’s financial position and can be used to manage the organization throughout the year. Last, their profession has a host of practices taught at university, such as how to define and use the measures, how to create and use the statements, and how to use measures to analyze issues and solve problems (for example, ratio analysis, the DuPont model, the audit process).

What does learning have that is comparable? Unfortunately, very little. Ed Trolley and David van Adelsberg suggested two categories of measures (effectiveness and efficiency) in their groundbreaking 1999 book Running Training like a Business, but the profession did not fully embrace them. We started with their work in 2010 and broke out outcome measures from effectiveness measures for Talent Development Reporting principles (TDRp) to highlight their importance. While more are adopting the three TDRp categories, you can pick up any professional publication and find the terms used inconsistently. This does not happen in accounting. Accountants have standard names and definitions for all their measures. We in learning do not. People define and calculate the same measures differently. There is no single source of truth, no profession-wide measures library if you will. There is in accounting.

Unlike accounting and prior to TDRp, we have had no standard reports whatsoever. Period. So there has been no agreement on what measures go into what reports, or how many to include, or what to do with the measures and reports once they are created. Yes, we have a number of dashboards that typically show just actual results for measures like number of participants, hours, courses developed and delivered, and utilization rates, but these really don’t rise to the level of standard reports and there is no agreement in any case on what should be included or how it should be displayed. Last, we have few standard processes. We have some for instructional design and performance consulting like ADDIE, SAM, SCORM, and xAPI, but not much agreement as a profession on most other processes. For example, although it is recommended by all authors and leading practitioners, we don’t always start with the end in mind, we don’t always meet proactively with our CEOs to discover the coming year’s goals, we don’t always meet proactively with the goal owners to reach upfront agreement on the impact of learning and mutual roles and responsibilities, we don’t always align our learning to the organization’s most important goals, we don’t always create SMART goals for our most important measures, and we don’t always create reports with plan and year-to-date results to let us manage our programs to deliver the agreed-upon goals. Compare these with the audit process, where all accountants know how to do an audit and where there are standard expectations for how an audit is to be conducted.

So, if we want to be a true profession, we need standards just like the accountants have. TDRp is designed to help us get there but we are just at the beginning of this long, but very important, journey. To succeed we all need to work together. Please start using the three categories of measures in your workplace. For your key programs and measures, please start using formats like those in the three TDRp reports. Let us know where you believe we need to change, refine or improve this framework. Tell us if you think there is another category which should become part of the standard or if we need a different type of report. Recently, several have asked for some example dashboards that would be consistent with the TDRp framework so we will work on this. Bottom line, learning is still a very immature field so this is a work in progress. No one has all the answers, but it is time for us to grow up and take the next important step to become a true profession.


What CEOs Know (and You Should Too)

By Dave Vance

Some in our profession have not had the opportunity to work closely with senior leaders like a CEO or CFO. With that in mind I would like to share some observations about senior leaders from my experiences at Caterpillar (first as the Chief Economist and later as the CLO) and at other companies since then. Of course it goes without saying that my sample size is small and thus may not be representative of all CEOs, but I believe most will share the traits discussed below.

First, despite their very lofty position, they are still just people. Now I know that some CEOs believe they have transcended to a whole new level of existence, but in my experience most were down to earth and you could just talk with them. This doesn’t mean you can waste their time, because it is incredibly valuable, so you need to be prepared and be clear about what you want from them. At the same time, though, you don’t need to memorize a script or be scared to death about the meeting. It is okay to ask questions, and with most it is okay not to have all the answers. Because they are people, don’t be surprised if they ask questions about your family or interests in addition to work. They really do want to know more about you, and some personal connection time breaks up their day, which is otherwise filled with 10-12 hours of work-related issues. So, relax a little bit and enjoy the time you get with them. And be prepared to be surprised at what you might learn about them and the organization.

Second, they have a very high tolerance for some uncertainty and ambiguity. Their tolerance will almost certainly be much greater than yours because they have to deal with it every single day. Having said that, though, we need to provide some context. They will not have a high tolerance for your being unprepared or acting unprofessionally or making excuses. They will, however, understand that a plan for next year is just that: a plan. They know from experience that plans are usually wrong because something unexpected typically happens. So, risk cannot be eliminated because of fundamental uncertainties about the future but it can be managed and that starts with ensuring you have a good process for planning and that you have talked with the right people. So, expect questions about how you came up with the plan and who you talked to. If you talked to the right people (like the Sales SVP to agree on an outcome measure) and made reasonable assumptions, then they are going to feel good about the plan. Put differently, there is nothing they can do to guarantee a perfect plan so instead they will focus on what they can influence (process and people). So, just be humble in setting and communicating plans. Don’t worry that they are not perfect.

Third, they know that you are going to make some mistakes along the way. They themselves are still making mistakes and other senior leaders are still making mistakes, so it is only natural that you will, too. (Unless you believe you are far superior to them!) Many in our profession are afraid that the CEO is going to discover they are not perfect, and thus are afraid to share TDRp-type reports which may show that every measure is not on track to meet plan. News flash: Your CEO already knows you are not perfect, and they don’t expect you to achieve or exceed plan on every single measure. In fact, any good leader would be very skeptical of someone who says they always achieve or exceed plan because that is not how the world works. Unless you set incredibly easy targets to meet, you are going to achieve some, exceed some, and fall short on the rest. This is what your CEO is expecting. I met with our Board of Governors (CEO and other senior leaders) each quarter, and I never reported that we were on plan to achieve or exceed all measures. Likewise, as an economist it was impossible to get all the forecasts right. Since your CEO already knows this (even if you do not), they will be looking to see if you are honest and transparent, willing to share the bad news as well as the good news. They will then want to see if you can address those areas where you are behind without being defensive or blaming anyone else. They want to know that you are on top of the issues and are doing everything that makes sense to deliver the planned results. So, don’t worry about letting the CEO see you are not perfect. Instead show them that you can manage L&D like a business.

If you have not already discovered the above for yourself, I hope this makes you a little more confident and at ease about meeting and interacting with your CEO and other senior leaders. At the end of the day they are people just like you, planning in the face of uncertainty, making mistakes, and moving forward despite it all.

Running Learning Like a Business: What Does It Mean and Is It Still Relevant?

By Dave Vance

Running learning like a business simply means applying business discipline to the learning field. More precisely, it means creating a plan with specific, measurable goals and then executing that plan with discipline throughout the year. That’s it. Most organizations already do this at a high level. They create a business plan for the year and then create and use reports on a monthly basis to manage their activities to come as close to plan as possible. At a lower level many other departments already do this as well. Think about a typical sales, quality, or manufacturing department. They establish goals at the start of the year and then they compare their progress each month against those goals, taking appropriate actions as necessary to stay on plan or get back on plan if they are behind. The point is they have a plan and they manage to it.

It is my experience that fewer than 10% of learning departments manage this way. Most would say they work very hard and accomplish a great deal. And they do work hard and they do accomplish a lot. (And most produce activity reports, listing the number of courses, participants, etc., to show just how busy they are.) But they could have an even greater impact on their organization if they worked smarter. They might not deliver more courses or reach more employees, but they will make more of a difference in their organization’s performance. So, what does it mean to work smarter? Well, it means that you plan your actions more carefully and then you execute your plan with discipline. The upfront planning means that you think carefully and critically about what you want to accomplish, why you want to accomplish it, what you will need in terms of resources to accomplish it, and what measures you will use. Invariably this will involve tradeoffs and prioritization since resources are always limited. So, what is more important for you to work on this coming year? Where can you make the greatest difference or add the most value? Once you have some ideas (and they may be different from what you have been working on!) the next task is to settle on a reasonable and achievable goal, which means you must think carefully about what resources will be required. It simply may not be possible to accomplish the goal with your resources, in which case you need to set a more realistic goal. And this will force you to think about measures to use in setting the plan and lining up your work effort.

While this does require work, I would argue that a disciplined planning process culminating in specific, measurable goals will help ensure you are focused on the right goals for the right reasons with the right resources. In essence this is a learning process, and if you don’t plan this way you deprive yourself of this very important learning opportunity. I think many leaders would say they do think about goals and resources but they stop short of setting specific, measurable goals. In this case they have not thought critically about the level of resources required (without a specific, measurable plan, how do you know what resources will be required?), nor have they created reports with a column for plan to use in managing the learning each month. Theoretically, it would be possible for these leaders to accomplish the same as those with plans, but practically speaking it is far less likely. After all, why do you think your colleagues in other departments go through all this work? They wouldn’t if it didn’t lead to better results. Simply put, running learning like a business will ensure you are focused on the right learning with the right goals, measures and resources to deliver the greatest value to the organization.

While I believe more learning leaders are running learning like a business, most are not. One objection I hear is that this focus on planning and executing is no longer relevant. Some tell me that their organization’s goals change every two or three months and thus it makes no sense to plan. This is ridiculous. I don’t think the company’s goals change every few months and, if they do, I would question senior leadership’s ability to manage the company. It may be that priorities shift during the year, but goals like increasing sales, quality, and employee engagement are not going to change every few months. (They may mean that their own projects or work assignments change frequently, which may actually be a result of not having a good business plan for learning.) Others say they have stable goals but their company’s products (like cell phones) change once per quarter and thus there is no way to plan. Of course there is. Focus first on the goal of increasing sales and let this be your guide in planning programs. True, new models will come out and you will have to design training programs around them, but you don’t need all the specifics at the start of the year. You have an idea of what is required for each product launch, so use that to plan. Your specific, measurable plans for reaction, learning and application are probably the same for all your product-related learning, and you have ideas about the size of the target audience. So, plan based on what you know and refine those plans as you get better information through the year.

Bottom line, running learning like a business is simply applying time-tested basic business skills to learning. Outside of learning, it is nothing new. And, outside of learning it is still very much relevant. So, let’s move that percentage up from 10% and see what a difference it makes to manage learning this way. Give it a try and see what results you get. I think you will be pleasantly surprised.

Four Approaches to Break the “No One Is Asking” Cycle

By Peggy Parskey

In the last 18 months, I have noticed a concerning shift in the sentiment of learning professionals about ramping up the quality of their measurement and, in particular, their reporting. This sentiment had been quite prevalent 10-12 years ago, but abated for a while. Now it’s back with a vengeance.

The essence of the sentiment is “Business leaders aren’t asking for this information, so there is no need for us to provide it.” Most L&D practitioners don’t state it quite like that of course. They say, “Our business leaders only want business impact data.” Sometimes the stated reason is that business leaders consider L&D self-report data to be useless. Regardless of how they say it, the message is clear: “Business leaders aren’t asking for it (or worse, don’t want it), so I’m not going to bother.”

I’m not imagining this shift. Brandon Hall Group recently published a study entitled, “Learning Measurement 2016: Little Linkage to Performance.” In their study, they found that the pressure to measure learning is coming mostly from within the learning function itself. In addition, of the 367 companies studied, 13% said there was no pressure at all to measure learning.


What’s concerning about this situation?

L&D has been saying for years that it wants to be viewed as a strategic partner and get a seat at the proverbial table. Yet, the sentiment that “no one is asking for measurement data” is equivalent to saying, “Hey, I’m just an order taker. No orders, no product or service.” If we are going to break free of an order-taking mentality, then we need to think strategically about measurement and reporting, not just the solutions L&D creates.


The sad reality is that often business leaders do not ask for L&D data because they don’t know what data is available or they don’t understand its value.  Many business leaders assume L&D is only capable of reporting activity or reaction (L1) data. Many still pejoratively refer to L&D evaluations as Happy Sheets or Smile Sheets.  If they believe that’s all L&D can do, then of course they won’t ask or exert any pressure to do more.


Equally importantly, innovation would come to a standstill if organizations only produced products and services that resulted from a client request. Henry Ford famously said, “If I had asked people what they wanted, they would have said faster horses.”  Steve Jobs talked about the dangers of group-think as a product development strategy when he said, “It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.”  Simply because your internal client cannot envision what you can provide does not mean you shouldn’t provide it and then iterate to get it right.


The big problem with a “no one is asking” mentality is that it is a self-fulfilling prophecy and a downward spiral. At some point, someone needs to break the cycle. It might as well be L&D.  (See the Infographic)


What can you do differently right now?

Fortunately, as an individual learning practitioner you can break this cycle. Here are four approaches (singly or together) you should consider:

  1. Reframe the problem: If truly “no one is asking,” stop and ask yourself, “Why aren’t leaders asking?” Is it because this information has no value or because they don’t understand the value? Is it because they don’t know what L&D can provide? Based on your answers to these questions, develop a game plan to demonstrate the value of the data you can provide (and then provide it to them).
  2. Find a friendly: If you want to break the ‘negative reinforcing loop’, find a business leader who is willing to work with you, a ‘friendly.’  You can spot a friendly fairly easily. She is interested in how L&D can add value to her business. He views you as a trusted advisor, brainstorming how to build new skills and capabilities within his team. She is innovative and willing to try new approaches even if they might not succeed the first time out. If you have one or two such business leaders with whom you work, seek them out and discuss what data you could provide to answer their questions.
  3. Report on data that matters to the business leader: From a measurement and reporting standpoint, L&D still puts too much of its energy into gathering and reporting data that matters to L&D but is not compelling to the business.  The number of employees who attended courses or the results of your Level 1 evaluation are simply not important to the business. Look beyond your current measures and educate yourself on best practices in L&D measurement.  Integrate questions about learning application and support into your evaluation instruments. This is data business leaders will care about if you show them how it affects them and their organization.
  4. Tell a compelling story: Do you remember the Magic Eye picture-within-a-picture phenomenon? If you held the picture up to your nose, you might see the constellation Orion buried inside the picture. (I never saw anything.)  If you believe your data is meaningful and can help the business, don’t use the Magic Eye approach. Don’t expect your business partner to find the meaning in the data. Rather, tell the story behind the charts and graphs through dialogue. Help your business partner connect the dots; help her understand the consequences of not acting on the data and the benefits if she does.

A real life example

The ability of employees to apply training in the workplace depends on several conditions, many of them outside the control of the L&D department. Factors such as the quality of the training, the motivation of the employee, the opportunity to apply the training to real work and the reinforcement of the direct manager all affect the extent to which training is applied.


A few years ago, I worked with a company that sold complex financial software. They re-engineered their implementation process to simplify the client experience, reduce implementation time and accelerate revenue recognition. The business leaders identified project management (PM) skills as critical to the success of this new approach.


The Process Transformation Team identified an initial group of employees to attend the PM training and pilot the new approach with several clients. When they reviewed the pilot results they were disappointed. Implementation time had not declined appreciably and the client felt that the process was more complex than expected. The Team Leaders investigated and found that the employees’ managers were not reinforcing the training or directing them to support resources when they struggled to apply the PM methodology in a real life setting. They also found that the job aids L&D had created were too cumbersome and not designed to be used in a dynamic client setting.


Imagine you are the L&D partner of the Implementation Transformation Business Leader. What data could you have provided to demonstrate that his people were not building and honing skills in this critical discipline?


This leader needed a regular stream of L&D data on actual application on the job and barriers to application.  He needed data on employees’ perceptions of how this training would affect the business outcomes of simplification, shorter implementation time and faster revenue recognition. He needed to understand the barriers to application and whether the business or L&D was accountable for addressing them. What he did not yet need was incontrovertible proof that the training was improving business outcomes.


Data from L&D was essential for this leader to take action and address issues that affected his ability to successfully transform a key client process.  Unfortunately, the business leader didn’t realize that L&D could help him get ahead of this issue and didn’t think to ask. After L&D and the business leader started talking, sharing data and insights, the Leader not only acted, but worked with L&D to develop regular business-oriented reports.


Final thoughts

As an L&D practitioner, you can break the negative reinforcing cycle. Why not regularly provide this type of data and use it to create a dialogue about what other insights you can provide?  Why not take the first step to dispel the belief that L&D has no useful data or insights to offer?  It’s in your hands.


Have you had a “No One is Asking” moment? I would love to hear from you. Follow me on Twitter @peggyparskey or connect with me on LinkedIn at https://www.linkedin.com/in/peggy-parskey-11634


It’s Time to Solve “The Last Mile Problem” in L&D


The telecommunications industry has had to confront a challenge referred to as “The Last Mile Problem.” This phrase describes the disproportionate effort and cost to connect the broader infrastructure and communications network to the customer who resides in the ‘last mile.’

Learning Measurement also has a “Last Mile” problem. We have a wealth of data, automated reporting and data analysts who attempt to make sense of the data. Yet, as I observe and work with L&D organizations, I continually witness “Last Mile Problems” where the data doesn’t create new insights, enable wise decisions or drive meaningful action. If we want our investments in measurement to pay off, we need first to recognize that we have a problem and then second, to fix it.

Do you have a “Last Mile” problem?  Here are six indicators:

  1. One-size-fits-all reporting
  2. Mismatch between reporting and decision-making cadence
  3. Lack of context for assessing performance
  4. Poor data visualizations
  5. Little attention to the variability of the data
  6. Insufficient insight that answers the “so what?” question

Let’s explore each indicator with a brief discussion of the problem and the consequences for learning measurement.

1.    One-size-fits-all reporting

While L&D organizations generate a plethora of reports, not enough of them tailor their reports to the needs of their various audiences. Each audience, from senior leaders to program managers to instructors, has a unique perspective and requires different data to inform decisions.

In an age of personally targeted ads on Facebook (creepy as they may be), we each want a customized, “made for me” experience. A single report that attempts to address the needs of all users will prove frustrating to everyone and meet the needs of no one. Ultimately, the one-size-fits-all approach will lead users to request ad hoc reports, create their own shadow process or simply resort to gut feel since the data provided wasn’t very useful.

2.    Mismatch between Reporting and Decision Making Cadence

Every organization has a cadence for making decisions, and the cadence will vary based on the stakeholder and the decisions he/she will make. Instructors and course owners need data as soon as a course has ended. Course owners will also need monthly data to compare performance across the courses in their portfolio. The CLO will need monthly data but may also want a report when a particular measure is above or below a specific threshold.

In a world of high velocity decision making, decision makers need the right information at the right time for them. When the reporting cycle is mismatched with the timing of decision-making, the reports become ineffective as decision-making tools. The result? See #1: shadow processes or ad hoc reporting to address decision needs.

3.    Lack of context to assess performance

Many L&D reports present data without any context for what ‘good’ looks like.  The reports display the data, perhaps comparing across business units or showing historical performance. However, too many reports do not include a goal, a performance threshold or a benchmark.

Leaders don’t manage to trends, nor do they manage to comparative data. Without specific performance goals or thresholds, L&D leaders lose the ability to motivate performance on the front end and have no basis for corrective action on the back end. These reports may be interesting, but ultimately produce little action.

4.    Poor data visualization

Data visualization is a hot topic and there is no shortage of books, blogs and training courses available. Despite the abundance of information, L&D’s charts and graphs appear not to have received the memo on the importance of following data display best practices.

Look at the reports you receive (or create). Do your visuals include unnecessary flourishes (also known as chart junk)? Do they contain 3-D charts? Do they present pie charts with more than five slices? Do your annotations simply describe the data on the chart or graph? If you answered “yes” to any of these questions, then you are making the last mile rockier and more challenging for your clients than it needs to be.

No one wants to work hard to figure out what your chart means. They don’t want to struggle to discern if that data point on the 3D chart is hovering near 3.2 (which is how it looks) or if the value is really 3.0.  They won’t pull out a magnifying glass to see which slice of the pie represents their Business Unit.

Poor visualizations fail to illuminate the story.  You have made it more difficult for your users to gain new insights or make decisions.  Over time, your readers shut down and opt to get their information elsewhere.

5.    Little attention to the variability of the data

As the L&D community increasingly embraces measurement and evaluation, it has also ratcheted up its reporting. Most L&D organizations publish reports with activity data, effectiveness scores, test results, cost data and often business results.

What we rarely consider, however, is the underlying variability of the data.  Without quantifying the variance, we may over-react to changes that are noise or under-react to changes that look like noise but in fact are signals.  We are doing our users a disservice by not revealing the normal variability of the data that helps to guide their decisions.

6.    No answers to the “So-what” question

This last mile problem is perhaps the most frustrating. Assume you have addressed the other five issues. You have role-relevant reports. The reports are synchronized with decision cycles. You have included goals and exception thresholds based on data variability. Your visualizations are top-notch.

Unfortunately, we all too often present data “as is” without any insights, context, or meaningful recommendations. In an attempt to add something that looks intelligent, we may add text that describes the graph or chart (e.g. “This chart shows that our learning performance declined in Q2.”). Perhaps we think our audience knows more than we do about what is driving the result. Often, we are rushed to publish the report and don’t have time to investigate underlying causes. We rationalize that it’s better to get the data out there as is than publish it late or not at all.

Amanda Cox, the New York Times graphics editor, once said: “Nothing really important is headlined, ‘Here is some data. Hope you find something interesting.’”

If we are going to shorten the last mile, then we have an obligation to highlight the insights for our users. Otherwise, we have left them standing on a lonely road hoping to hitch a ride home.

Recommendations to solve the “Last Mile” problems

There is a lot you can do to address the “Last Mile Problem” in L&D. Here are six suggestions that can help:

  • Create a reporting strategy. Consider your audiences, the decisions they need to make and how to present the data to speed time to insight. Include in your reporting strategy the frequency of reports and data to match the decision-making cadence of each role.
  • Identify a performance threshold for every measure on your report or dashboard that you intend to actively manage.  In the beginning, use historical data to set a goal or use benchmarks if they are available. Over time, set stretch goals to incent your team to improve its performance.
  • Learn about data visualization best practices and apply them.  Start with Stephen Few’s books. They are fun to read and even if you only pick up a few tips, you will see substantive improvements in your graphs and charts.
  • Periodically review the variability of your data to see if its behavior has changed. Use control charts (they are easy to create in Excel) for highly variable data to discern when your results are above or below your control limits.
  • Add meaningful annotations to your graphs and charts. Do your homework before you present your data. If a result looks odd, follow up and try to uncover the cause of the anomaly.
  • For senior leaders, in particular, discuss the data in a meeting (virtual or face-to-face). Use the meeting to explore and understand the findings and agree on actions and accountability for follow up. Employ these meetings to create a deeper understanding of how you can continually improve the quality of your reporting.
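The control-chart recommendation above can be illustrated with a minimal sketch in Python (rather than Excel). It computes mean and ±3-sigma control limits from historical data and flags new results that fall outside the limits, i.e., likely signals rather than noise. The satisfaction scores and month labels below are hypothetical, invented for illustration.

```python
import statistics

def control_limits(history, sigmas=3):
    """Return (mean, lower, upper) control limits from historical data."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history)  # population std dev of the history window
    return mean, mean - sigmas * sd, mean + sigmas * sd

def flag_signals(history, new_points, sigmas=3):
    """Return the new (label, score) results outside the control limits."""
    _, lower, upper = control_limits(history, sigmas)
    return [(label, score) for label, score in new_points
            if score < lower or score > upper]

# Hypothetical monthly Level 1 satisfaction scores (5-point scale)
history = [4.1, 4.3, 4.2, 4.0, 4.2, 4.1, 4.3, 4.2]
new_points = [("Sep", 4.25), ("Oct", 3.4)]
print(flag_signals(history, new_points))  # → [('Oct', 3.4)]
```

With these numbers the limits work out to roughly 3.88–4.47, so September’s 4.25 is ordinary variation while October’s 3.4 is flagged as worth investigating.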

If you start taking these recommendations to heart, your last mile could very well shrink to a very small and manageable distance. Have you experienced a “Last Mile” problem? I’d love to hear from you.

Follow me on Twitter @peggyparskey or subscribe to my blog at  parskeyconsulting.blogspot.com/

CTR Is Excited to Announce the Release of a Significantly Updated TDRp Measures Library

CTR is excited to announce the release of a significantly updated version of the TDRp Measures Library. This update focuses on Learning and Development measures and has expanded from 111 measures to 183 Learning-related measures.  The additions include measures related to informal and social learning as well as a more detailed breakdown of the cost measures.  This version of the library also includes measures defined by the CEB Corporate Leadership Council, as well as updated references to the HCMI Human Capital Metrics Handbook and to measures defined by ATD, Kirkpatrick and Phillips.

If you are a CTR member, you have access to the updated version at no additional charge. https://www.centerfortalentreporting.org/download/talent-acquisition/ .

If you are not a member, you can join CTR for $299/year.

The Missing Component of L&D Management By Dave Vance

The L&D field is more exciting than ever, particularly with the advent of smart systems, mobile learning, big data, and greater access to content than ever before. Even predictive analytics is beginning to move beyond workforce planning, retention and recruiting to some applications in the learning arena. What is missing, however, is an improvement in how we manage our programs and departments. Our management practices have not kept pace and often have not improved at all over the last twenty years. Sadly, most master’s and Ph.D. programs in organizational or human capital development do not include even a single course in management, and there is often little opportunity to learn good management practices on the job.

Now, I know what some of you are thinking. You would tell me that you do manage. You hire people into the department, set goals for them, and have performance reviews. You create an annual budget. You implement new systems, you roll out new courses and update existing content, and you might even track where staff spend their time. You address issues when they arise and you may have a long-term strategy. All of these activities are important and, yes, are a part of the overall management of the department. But there is a component of management that is missing in most organizations, and it is a critical component – one that has the potential to drive significant value creation. In fact, in most organizations it would drive much greater improvement than adopting a new LMS, implementing a mobile strategy, or doubling the amount of e-learning available.

What is this magical, powerful management component? It is the disciplined process of creating a plan and then executing it. Sounds simple and obvious, right? It is not, which is why very few L&D departments manage this way today. Let’s put this concept in concrete terms. It means the department head and senior department leaders need to decide what they want to do for the coming year and then create SMART goals for each item. Do you want to improve participant satisfaction or the application rate? Great, set a specific, measurable goal like a 5 percentage point increase in the application rate. Do you want to shift your mix of learning from instructor-led to online? You need to set a specific, measurable goal like increasing the percentage of online learning from 25% to 45%. Of course, you won’t accomplish these goals simply because you wrote them down on paper. You need to create action plans for each one, detailing all that will be necessary to achieve the desired improvement, including any additional staff or budget. You will need to assign someone responsibility for it and everyone will need a clear understanding of their roles and responsibilities.

And that is just the start. Once the plan is complete containing all these SMART goals and the year is underway, you will need to actively manage these initiatives to ensure their success. You will need to use reports every month to see if you are on plan or not. If you are not on plan, or if the forecast shows you falling short of plan by year-end, then you need to take management action to get back on plan. So, the department head should have a dedicated (meaning the only agenda item is management of the department) one or two-hour meeting each month with her direct reports to review progress against goals and decide on actions to get back on plan. (At Caterpillar, this meeting was the second Tuesday of every month, and we always used the full two hours.)

This, then, is the essence of good management. You must spend the time upfront to craft a good plan with specific, measurable goals, and then you must execute that plan with discipline. Very few L&D departments do this today. Nearly all can tell you how many courses were delivered last year to how many participants, and they can tell you what the participant satisfaction was with those courses. But very few created a business plan before the year started with SMART goals, and fewer yet produced the reports necessary to manage throughout the year. Almost none have a monthly meeting dedicated to using these reports to manage the key programs and initiatives of the department. So, conceptually, this all sounds pretty straightforward, and colleagues in other departments would assume L&D already operates this way. After all, many of them do and have done so for a long time. (Can you imagine a sales department with no SMART goals and no monthly meetings to compare progress to those goals?) We need to adopt the same disciplined approach to management. It will deliver significantly more than the alternative management approach, which is basically that everyone works hard and we achieve whatever we achieve. Given the current state of management in L&D, I am convinced that the discipline of creating a good plan and then executing it will deliver far greater value in most organizations than any other single change, program or initiative.