Employee feedback & engagement measurement - the future

In the last post I gave some background on how we got to the current state of measuring engagement and employee perceptions. In this one I’m going to give an overview of where we see the market going. It’s a personal view, but one from someone at the coal face.

There’s an expression in English that ‘if all you have is a hammer, everything looks like a nail’. I think a large part of what has driven us to the current position is that businesses felt the only tool they had was the long, annual engagement survey ‘hammer’.

In the last article I cited Andrew Graham’s 9 issues with traditional employee research:

  • Not frequent enough
  • Single scoring leads to issue distortion
  • Aggregation reduces meaning
  • Does not capture the specifics (context seldom captured)
  • Lengthy or poor response planning
  • Managers are busy & have no incentive to implement any actions
  • Lot of resources & monitoring
  • Surveys get old
  • Causality not clear

There are some other drivers which seem to be contributing to change, or at least a desire to change:

  • Most other business metrics are produced more frequently than they were previously, and business executives question why engagement should be different
  • Businesses have become more customer-centric and as such have embarked on customer listening programmes, many of which are tightly integrated across the business (always-on listening rather than annual surveys). Their experience of capturing customer perception data has changed
  • The rise in sustainable investment and accounting is encouraging firms to take a multi-stakeholder approach rather than focussing on investors alone. Employees are seen as key stakeholders.
  • Technology is changing the economics of employee measurement. I believe that it is possible to be better, quicker and cheaper now. This is not a move along a frontier curve, it’s shifting the curve and shifting it dramatically
  • Enterprises are rapidly building People Analytics functions. These functions always want better data which can be integrated with other enterprise data
  • Digitalisation is unbundling the consulting offering which the major providers have used. New start-ups are using technology to capture value in a much more scalable manner, cherry-picking the profitable parts of the offer.

The first response: Pulse surveys.

Let me start by saying that I think the gold standard is doing both an annual survey and other, more immediate ways of capturing feedback, either on a schedule and/or aligned to key employee events. However, in the real world businesses need to allocate scarce resources to where they believe they will get the most impact. I think the annual survey will become rare over time.

The biggest trend which we’ve seen over the last 5 years is the emergence of the pulse survey. These are regular, shorter surveys which typically use technology and automation to address many of the issues above.

All pulse surveys try to address several of these issues, most notably the issue of frequency (and the linked issue that survey data gets old). Reporting is usually automated and several tools automate more advanced analysis showing linkages between variables and engagement.

However there are tradeoffs. There is a burden on the survey taker each time they take the survey and the only way of achieving decent response rates whilst asking more frequently is to shorten the survey, potentially reducing richness. As I noted in the last article, this might not be such an issue if you randomise some questions and use missing-data inference techniques but few providers are doing this (and many HR departments remain unconvinced).
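
As an illustrative sketch of that randomisation idea (the question names and the simple pooled-mean estimator are my own, not any provider’s implementation), each respondent could be shown a couple of core items plus a random subset from a larger pool, with item-level statistics estimated from whoever happened to see each item:

```python
import random

# Hypothetical question bank: a few core items asked every time,
# plus a larger pool from which each respondent sees a random subset.
CORE = ["engagement", "recommend_employer"]
POOL = ["workload", "recognition", "growth", "autonomy", "tools", "pay"]

def build_survey(subset_size=2, seed=None):
    """Return the list of questions one respondent is shown:
    all core items plus a random sample from the pool."""
    rng = random.Random(seed)
    return CORE + rng.sample(POOL, subset_size)

def estimate_item_means(responses):
    """Naive missing-data inference: estimate each item's mean from
    whichever respondents happened to be shown that item."""
    totals, counts = {}, {}
    for answer_set in responses:
        for item, score in answer_set.items():
            totals[item] = totals.get(item, 0) + score
            counts[item] = counts.get(item, 0) + 1
    return {item: totals[item] / counts[item] for item in totals}
```

Each respondent answers only a handful of questions, yet the organisation still tracks every item over time. In practice a provider would use model-based imputation rather than this simple pooled mean, but the burden-reduction logic is the same.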

There is also an issue in terms of how frequently you can ask employees to take a pulse survey. There is no right and wrong answer but there are some things that are worth considering.

We’re seeing that engagement at the individual level is remarkably stable. Most people answer the same way month in, month out, and those who do change typically change only by a small amount (which we have to assume could be measurement error). You only need to measure a stable thing frequently if you’ve got a large measurement error, and in that instance the data should be smoothed before reporting, as your issue is how to deal with noise.
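
A minimal example of that kind of smoothing - an exponentially weighted moving average, which is just one of several reasonable choices:

```python
def smooth(scores, alpha=0.3):
    """Exponentially weighted moving average over a series of scores.
    Damps month-to-month measurement noise while still tracking
    genuine shifts in the underlying level. Lower alpha = heavier
    smoothing."""
    if not scores:
        return []
    smoothed = [scores[0]]
    for score in scores[1:]:
        smoothed.append(alpha * score + (1 - alpha) * smoothed[-1])
    return smoothed
```

A one-off dip in a stable series barely moves the smoothed line, whereas a sustained change pulls it steadily towards the new level - which is exactly the behaviour you want when reporting a stable metric measured with noise.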

Furthermore, employees get frustrated if they’re asked the same question on a regular basis and don’t see action - which is hard to demonstrate if the survey cycle is too short. I think you can get away with this for one or two ‘trend’ questions, but there needs to be some change to make it feel worthwhile.

In our experience, a happy compromise is gathering engagement-type feedback on a quarterly basis. The best firms use the non-engagement months to run a series of topics where employee feedback is valuable, and capture experience-type feedback around key events.

Better UI isn’t a sustainable competitive advantage

All pulse survey providers have embraced modern web technology to provide a more consumer-like experience. This is seen both in the survey-taking interface, where a good, mobile-friendly experience is probably now a hygiene factor, and to a lesser degree on the reporting side.

One of the early incentives that encouraged us to build Workometry was the experience of how much data cleaning & preparation was needed with the existing solutions. Capturing the data in a way that you need it for analysis and automating the boring data preparation process brings speed, quality and cost improvements.

From a reporting side it’s now relatively simple to build a manager- or even employee-level interface that reports the data in a clean and understandable manner.

Many of the current entrants into the market aim to do just this. They bring a reduced overall cost to what had been done manually. However, my view is that competition in this part of the market will end up being on price. This is great if you’re a small to mid-sized client, as historically the consultancy-led approach was too expensive without scale. It probably means we’ll see consolidation in this part of the market.

Thinking about the data

Here is where I think it gets more interesting. As I noted before, the advent of more sophisticated analysis of employee data is creating a demand for better, more integrated data. At the same time there continues to be a need to ensure confidentiality, especially preventing reporting down to small groups.

This is a challenge which can be addressed by technology. APIs can provide access to the data whilst maintaining confidentiality. Newer forms of data store, like the schema-less one that underpins our Workometry product, provide great flexibility. We’ve got to a stage where, realistically, the size of the data is no longer a constraint. If we want to compare how people who answered a particular question this month answered another question last quarter, that’s possible. If we want to capture and analyse many millions of data points a year, neither the technology nor its cost is a constraint.
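
As a toy sketch of what that flexibility looks like (the record layout is purely illustrative, not Workometry’s actual schema), schema-less response records can be joined per employee across questions and waves:

```python
# Each response is a schema-less record: any question can appear in any wave.
responses = [
    {"employee": "e1", "wave": "2024-Q1", "question": "engagement", "score": 4},
    {"employee": "e1", "wave": "2024-Q2", "question": "workload", "score": 2},
    {"employee": "e2", "wave": "2024-Q1", "question": "engagement", "score": 5},
    {"employee": "e2", "wave": "2024-Q2", "question": "workload", "score": 4},
]

def pair_answers(records, q1, wave1, q2, wave2):
    """Pair each employee's answer to q1 in wave1 with their answer to
    q2 in wave2 -- e.g. last quarter's engagement score against this
    month's workload rating."""
    first = {r["employee"]: r["score"] for r in records
             if r["question"] == q1 and r["wave"] == wave1}
    second = {r["employee"]: r["score"] for r in records
              if r["question"] == q2 and r["wave"] == wave2}
    return {e: (first[e], second[e]) for e in first if e in second}
```

Because no fixed column layout is imposed up front, new questions can be added in any wave without a schema migration, and cross-wave comparisons remain a simple join on the employee identifier.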

The second factor when thinking about data is always to question where the best data is to be found. We shouldn’t be asking employees about things in surveys which we could understand from another source. In particular I think of questions about process compliance - surely we should be able to gather this data from a system? Only ask what you need to ask.

A key factor with employee listening is that it’s iterative and conversational. That means you can’t know with certainty what you’ll need to ask in the future, or what data you’ll need to integrate. Enterprises need to select these new technologies with this in mind. How can the data be integrated into models? How can it be queried in real-time for models in production?

Think about the analysis, but start at the action.

It’s easy to report data, but this typically doesn’t bring much insight. We believe that with all analysis we need to work back from the outcome - what you’ll do when you have the insight. Understanding what you can, and want to, change should be an input into what data you capture.

Dealing with survey data, and especially mixed survey-behaviour or survey-demographic data, is difficult. Once we recognise that many questions yield ordinal data, that guides us to a certain subset of possible analyses (and not the typical ones which are easy in Excel). As I’ve written in the past, we’re big fans of using graphical models, and much of our survey work relies on understanding the relationships between variables.
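
For example, with ordinal rating scales a rank-based measure such as Spearman correlation is a safer choice than Pearson correlation, which assumes interval-level data. A plain-Python sketch:

```python
def ranks(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    rank = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            rank[order[k]] = avg
        i = j + 1
    return rank

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation computed on the
    ranks, so only the ordering of the ordinal responses matters."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because only the ordering matters, the result is unchanged if a 1-5 scale is relabelled in any order-preserving way - exactly the invariance you want when the gaps between scale points aren’t meaningful.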

As data size and complexity increase, machine learning is increasingly being used to identify insight in the data. Where we used to cut the data by a demographic variable, we’re increasingly looking for non-linear relationships, identifying small and micro segments, and attempting to link demographics, behaviours and perceptions.
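
A toy illustration of the segment-finding idea - a minimal k-means over numeric feature vectors, with naive initialisation, purely to show the mechanics (real work would use a robust library implementation and much richer features):

```python
def kmeans(points, k, iters=20):
    """Minimal k-means clustering. Repeatedly assigns each point to its
    nearest centroid, then moves each centroid to the mean of its
    assigned points. Naively initialised from the first k points."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
            clusters[nearest].append(p)
        centroids = [
            [sum(dim) / len(cluster) for dim in zip(*cluster)]
            if cluster else centroids[c]
            for c, cluster in enumerate(clusters)
        ]
    return centroids, clusters
```

Fed with vectors combining demographics, behaviours and survey scores, this kind of clustering surfaces the small segments that a single demographic cut would never reveal.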

Text

I have written about this before. In our view, capturing and understanding employees’ own words is the key breakthrough that will happen in employee feedback in the next few years. This mirrors what we’re seeing in customer research.

Traditionally there was an almost binary split between quantitative and qualitative research. In reality there was always a continuum. It was always technically possible to do qualitative work at scale, however it was prohibitively expensive and time consuming.

What we’re seeing with modern text analytics is that that supply curve has shifted. It’s now possible to run a survey with mostly open text questions and put it to hundreds of thousands of individuals. This opens up great opportunities: richer data, a more attractive experience for the person providing it, and flexibility and agility in design.

From our experience (and we think we’re probably in a leading position in this space) text analytics is highly domain-specific. It is unwise to think you can parse text through a general tool, or one designed for another domain. 

Bots

I mentioned this in the last article and received some questions. I think we’re heading for a position where technology-enabled employee research will be conversational. The employee will be asked an open question about a particular topic and depending on the response they give a relevant follow-on open question will be asked.
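
A deliberately simple sketch of that branching logic, using keyword matching where a real system would use a text-classification model (the questions and keywords here are invented for illustration):

```python
# Hypothetical mapping from detected topic to a follow-on open question.
# A production system would classify the free text with a trained model
# rather than matching keywords.
FOLLOW_UPS = {
    "manager": "What could your manager do differently?",
    "workload": "What one change would make your workload more manageable?",
    "pay": "How does your pay compare with what you could get elsewhere?",
}
DEFAULT = "Can you tell us a little more about that?"

def next_question(answer):
    """Pick a relevant follow-on open question based on the topics
    detected in the employee's free-text answer."""
    lowered = answer.lower()
    for keyword, follow_up in FOLLOW_UPS.items():
        if keyword in lowered:
            return follow_up
    return DEFAULT
```

The point is the shape of the interaction, not the matching technique: each answer determines the next open question, so the survey behaves like a conversation that drills towards the drivers that matter for that individual.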

In the short term these technologies will be text-based, but it’s likely they will move to voice. I see a future where a system identifies who it needs to speak to, telephones them and asks a series of questions to drill down to the key drivers - and it will be able to do this at scale.

To some degree the issue isn’t the analysis; it’s the ability to do this in a way which isn’t intrusive for the user and where there are strong incentives for them to participate. This is what is exciting about this research: analysis is closely linked to, and enabled by, design-driven changes. Better interfaces = better data = better analysis = better insight.

The future

Where is all of this taking us?

I see the engagement-app market diverging. On one side are employers who want to understand engagement but do so in a cost-effective manner. These firms will gravitate to the mass of engagement apps offering real-time capture. Buyers should be aware of the value these tools provide - it’s mainly focussed on delivering metrics in a speedy and cost-effective manner. These providers will make most headway in the part of the market where traditional providers couldn’t be cost-effective, i.e. SMEs. Those who can add sophistication and scale up to enterprise clients will be disruptive.

The other side are firms and products like our own which see machine learning technology as a way of automating the higher-skilled parts of the historic research bundle. In the first instance firms like ours are using ML to automate much of the skilled work of the experienced engagement consultants.

I think the ultimate role will be disrupting traditional management consultancy. Tools which can ask and understand an almost open set of questions can disrupt much of the information-gathering parts of the traditional consulting offering.

None of this completely removes the need for great consultants, but it means that deep expertise and the creative parts of the process are where the opportunity arises. Consultants will increasingly focus on developing strategic plans and advising on change management. There might be fewer employed by employee research firms, but I think this shift will increase the demand for those at the top of their game.




ABOUT THE AUTHOR

Andrew is one of the pioneers of the European People Analytics scene. He is the founder of OrganizationView, creator of the open-question employee feedback tool Workometry and the co-founder of the People Analytics Switzerland community.

Andrew chaired the first European People Analytics conference - HR Tech World’s 2013 ‘Big Data’ event - and has been co-chair of Tucana’s ‘People Analytics’ conference in 2014, 2015 & 2016. He teaches HR Analytics and Data-Driven HR in Europe and Asia and is a member of the CIPD’s Human Capital Analytics Advisory Group, setting standards and content strategy for HR Analytics content.

To keep informed of all Andrew’s writing, here and elsewhere, please subscribe to OrganizationView’s Newsletter or follow him on Twitter.


Rohit Dhankar

Associate Manager ML at Accenture

First and foremost, for either creation of the test set or comparison with the training set, we need an Org/Function specific text corpus - not the generic ones available over the net. Then we are reaching closer to correct and effective classification of sentiment.

Raja Sengupta

Strategic People Analytics Consultant @ Korn Ferry EMEA

"Better UI isn’t a sustainable competitive advantage". Andrew, I see that as a good take based on your extensive experience in this domain. This has been a grey area for us too; while doable, there are significant challenges (far beyond APIs) in effective synchronization. Of course there are some clever experimental roundabouts for this! Text analytics (my domain) per se is an expansive term. While generic broad-range sentiment analysis can be executed by standard text analysis, it is significantly complex to establish an acute hypothesis. The issue (as you would well know) lies at the heart of text analysis: statistical inferences on POS tags. Stemming, generic lemmatization or even the publicly available tree-banks (Penn being the prime example) have limited ability in vectorizing human intent and thought. This is particularly true in a domain-specific or poverty-of-learner-base environment. Also, linear n-grams can only get you so far no matter how deep the machine learning! The only probable solution is to painstakingly develop a domain-specific dictionary/tree-bank (in your words, prohibitively expensive and time consuming). This includes symbolic POS tagging based on a combination of iterative statistical and domain-expertise-based inferences. Access to authentic internal HR survey big data across corporates (in India at least) is yet another challenge, and the biggest for us!

Larry Levine

Program Manager - retired

Andrew - I agree that text is where the future is if you want to get actionable insights. However, the text analysis of employee responses I've been involved with suggests that even in large companies, function-specific and company-specific jargon will make it hard to have training sets large enough to get at the particular issue(s) without a lot of human intervention. Otherwise you have ambiguous clusters of topics that then require focus groups to clarify. Do you see it differently?

Dale Hintz MBA

Create High Performance: Leaders\Teams\Cultures\Organizations

What I'd like to hear about is how the analytics are driving improvements. Are goals and milestones set - and then what are the results?
