Are We Overusing Dashboards?

Dashboards have become increasingly popular, especially those with well-designed visual elements. For many applications, they represent a great advance over the more mundane scorecards filled with rows of data. That said, the question now is whether we have gone too far and are relying too heavily on dashboards when other types of reports would serve better. My answer is yes.

Many practitioners today appear to believe that dashboards are the best, if not the only, way to share data, and this is a problem. It would be better for us as a profession to use many different types of reports and to tailor each report to the specific needs of its users, which in turn requires us to think more carefully about the reasons for measuring in the first place. We describe four broad reasons to measure in our new book, Measurement Demystified, and each of these four reasons is linked to the type of report best suited to meet the user’s needs. The dashboard is suited to only two of the four.

The first reason to measure is to inform. This means the measures will be used to answer questions from users and discern whether trends exist. The question may be about the number of participants or courses, or perhaps about the participant reaction or application rate. In any case, the user just wants to know the answer and see the data. If they want to see it by month, and especially if they want to see subcategories (like courses by type or region), a traditional scorecard will be best, with measures as rows and months as columns. If the user is interested in year-to-date summaries and more aggregate data, along with some visual representations, a dashboard will be best. So, even for this one reason (inform), the best report depends on what the user wants to see. These may be one-off reports or reports that are regularly updated.

A second reason to measure is to monitor. This occurs when a user is happy with how a measure is performing and wants to ensure the value remains in an acceptable range. For example, participant reaction scores may average 80% favorable and the CLO wants to ensure they stay above 80%. In this case, a dashboard with thresholds and color coding is a perfect way to share the measures. This may be the only element in the dashboard or it may be combined with some other elements. This type of dashboard should be generated monthly.

The third reason to measure is to evaluate a program and share the results. In this case, a program (like sales training) has been completed and the users want to evaluate it and share the results with others. This is a one-off report designed to be used at the end of a program or perhaps at the completion of a pilot. In this case, a dashboard should not be used. Instead, a program evaluation report would be best, one that takes the audience through the need for the training, the planned results, the activities completed, the actual results, and lessons learned. The report will probably be a PowerPoint but could be a written document.

The fourth broad reason to measure is to manage. In contrast to monitoring, managing means that a goal has been set to improve the value of a measure, perhaps increasing the application rate from 40% to 60% or reaching an additional 2,000 employees with learning. If monitoring is about maintaining the status quo, managing is about moving the needle and making progress. In this case, a dashboard should definitely not be used because it would not convey the key information or the detail needed to make management decisions.

Every month a manager needs to know whether their efforts are on plan and whether they are likely to end the year on plan. For this, they need a management report that includes the plan or target for the year, year-to-date results, and a forecast of how the year is likely to end if no additional actions are taken. This type of information is very difficult to share in a dashboard format, which is why special-purpose management reports have been designed for L&D. These are generated monthly and focus on both specific programs and aggregated department results. In contrast to dashboards and program evaluation reports, these management reports are not meant to be “presented” but to be used in working sessions to identify where action is required.

In conclusion, dashboards have their place but should not be the only type of report generated by L&D. Dashboards are recommended in two cases: 1) when the reason to measure is to inform and the user wants summary data along with visual elements, or 2) when the reason to measure is to monitor, in which case thresholds will need to be included. Dashboards are not recommended when the reason to measure is to inform and the user wants detailed, monthly data; in this case, a scorecard is preferred. Nor is a dashboard recommended to share program evaluation results or to manage programs or the department. In each of these cases, better report types exist and should be employed (the program evaluation report and the management report, respectively). Bottom line: it is important to use the right type of report, and that choice should match the reason for measuring.
