Academic research, periodic news headlines and our own experience can all attest to the importance of organisational culture to success (and failure) in business. For senior teams and boards, identifying and communicating the culture they want to see is a central component of leadership. What is the firm here for? How does it go about doing this? What is expected of people, recognised and rewarded? What is seen as praiseworthy, tolerable or shameful?
Agreeing what an organisation’s culture should look like is one thing; establishing what it actually is, and measuring progress in closing any gap between the two, is quite another, and considerably more difficult. This is not for lack of material; every organisation will have a culture, and most will have multiple sub-cultures. The challenge is rather that there is so much that may be relevant.
Every organisation’s culture will be the unique and continually evolving product of its leadership, history and people, and of the external environment. Complicating things even further, how that culture is perceived may vary with the person observing it. Senior executives, for example, may have a different perspective from those who are not in leadership roles; people with different demographic characteristics may not share the same view of their team’s cultural norms.
Measurement implies going beyond description and narrative to generate quantitative and comparable data. If every organisation’s culture reflects a myriad of evolving factors, however, measuring it is going to be a challenge; and if each organisational culture is unique, even trying to measure it may appear questionable. If, in the ubiquitous phrase, what gets measured gets managed (and leaving aside the inconvenient thought that not everything that is measurable needs measuring, let alone managing), this poses a conundrum for boards and senior teams. How to ensure that something that is important, but does not lend itself naturally to measurement, gets managed?
One answer is to step back from focusing on culture per se and ask instead what we can sensibly measure to help us assess and judge culture. This is the approach we have taken at the FSCB, shaped by a number of questions pertinent to any information-gathering exercise.
- What information are we looking for? Is it about outcomes? Employee engagement? Concerns? Values? Wellbeing? Something else entirely? Is it about changes over time? Differences between groups? Do we need statistically valid data? Are we looking to explain something we know but don’t understand, or to catch what may not be currently on our radar?
- Is the information we want best found through asking or observing? Do we want to know people’s views? To see how they behave? Both?
- Who has the information we are looking for? Who do we need to ask or engage with? Everyone? Specific groups or individuals? Statistically valid numbers of people? Comparable groups of people?
- If asked directly, will people feel safe to answer? Are survey responses, focus group output and interview comments anonymous, confidential or neither? Are demographic questions optional? Is all of this clear and credible?
- Are we mitigating or accentuating bias? Longer questions and surveys tend to result in higher average question scores and a smaller range of responses; the tendency to agree with a question rises during the course of a survey. To maximise the chances of receiving good survey scores, run a lengthy set of long, positively phrased questions with the most sensitive questions left towards the end; to obtain information that will actually be useful, don’t. Are the survey and questions longer than they need to be? Is there a mix of positively and negatively phrased questions?
- Are our questions easy to understand and answer? A question that makes sense to the person drafting it may not make sense to someone reading it. The wording may be confusing, the premise unclear or incorrect, or the question may contain two concepts but allow for only one answer. Is cognitive testing factored into the design process?
- Are incentives getting in the way of useful information? Are some respondents or participants incentivised more than others to answer positively? Do survey results in any one year affect reward and recognition for managers? Are scores a source of firm-wide learning, or intra-firm competition?
- What are we giving in return? Continuous improvement is a shared endeavour; open and ongoing participation demands reciprocity. After the questions have been asked, how are the findings shared? Even better, can we commit to sharing the findings before the questions are asked?
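One practical consequence of mixing positively and negatively phrased questions, as discussed above, is that negatively phrased items have to be reverse-coded before their scores can be compared or aggregated with the rest. A minimal sketch, assuming a hypothetical 1–5 Likert scale (1 = strongly disagree, 5 = strongly agree); the variable names and example data are illustrative, not drawn from any actual survey:

```python
# Reverse-coding negatively phrased Likert items so that a higher
# score always means a more favourable view of the organisation.

SCALE_MAX = 5  # top of the assumed 1-5 Likert scale

def reverse_code(score):
    """Map a negatively phrased item's score onto the positive scale."""
    return SCALE_MAX + 1 - score

# Illustrative responses: (raw score, is the item negatively phrased?)
responses = [(4, False), (2, True), (5, False), (1, True)]

# Flip only the negatively phrased items; leave the rest untouched.
adjusted = [reverse_code(score) if negative else score
            for score, negative in responses]
print(adjusted)  # [4, 4, 5, 5]
```

The point of the flip is that a "strongly disagree" with a negative statement ("people here hide mistakes") carries the same signal as a "strongly agree" with a positive one, and the data should reflect that before any averaging.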
At the FSCB we provide both support and challenge to our member firms in managing their organisational cultures, and engage with firms outside financial services as FSCB Associates. We use a variety of direct and indirect information-gathering approaches in our work, including surveys, interviews, questions to boards, focus groups, roundtables, ethnographic techniques, behavioural trials and natural language processing. Our employee survey, now in its seventh year, provides a unique and valuable data set that enables both intra- and inter-firm comparisons over time. How, taking the above question set as a guide, does this approach work?
- What information are we looking for? We define a good business culture not as a particular type of culture, but as one that produces good outcomes for the firm’s customers, clients and society as a whole. Both good and bad cultures come in a range of shapes and sizes; there is no single template for either. Given also the challenges in measuring culture described earlier, we set out to measure not culture as such, but rather the prevalence of characteristics that we can reasonably expect to be associated with the outcomes of any good culture: honesty, reliability, competence, resilience, respect, responsiveness, openness, accountability and shared purpose. An organisation that demonstrates these characteristics to a high degree is likely to be better equipped and motivated to serve its customers and clients well than one in which this is not the case. (We are, incidentally, currently testing the validity of this assumption, of which more to follow.)
- Is this information best found through asking or observing? We are primarily interested in perceptions of the organisation, so need to ask for views directly.
- Who has the information we are looking for? The people who know the firm best: its employees. To be confident of having an accurate picture we also want to hear from a statistically valid number of employees in each business area, and to collect the demographic data that will help us analyse it and inform subsequent actions.
- Will people feel safe to answer? The responses are anonymous. The FSCB does not know who has responded, and no identifiable data is passed back to the firm. This is made clear to participants. Demographic questions are optional.
- Are we mitigating bias? The core survey consists of only 36 questions on a Likert scale (strongly agree to strongly disagree), with one free text question asking for three words to describe the firm. Recognising the tendency among each of us to see ourselves as cleverer, more rational and more ethical than average, our questions ask primarily about what people see around them rather than how they perceive themselves. The core questions remain constant and comprise a mix of negatively and positively phrased statements in a random order. A small number of additional questions may be added in any one year to explore a particular issue (e.g. speaking up) in more detail.
- Are our questions easy to understand and answer? All questions undergo thorough cognitive testing across employees in different business areas and firms.
- Are incentives getting in the way? This is a question primarily for firms rather than for the FSCB, but a couple of observations are worth making from a contextual and design perspective. First, both FSCB membership and running the survey are voluntary; firms taking part choose to do so because they value the information they receive. Second, in reporting the results we do not combine the scores of the different characteristics; there is therefore no single score or ranking for a firm, but rather a range of results that may vary considerably. This reflects a principled argument against treating the different characteristics as comparable and of equal weight, but also helps encourage a focus on understanding rather than ‘winning’.
- What are we giving in return? How the survey results are shared with employees is again a decision for firms. The FSCB provides each firm’s results to its board; these include statistically valid results by business area, by demographic characteristic, over time and relative to similar business areas at other participating firms, all of which can be shared by the firm as it considers appropriate. We do not comment publicly on individual firm results, but do publish and (in our wider work with members) draw on the aggregate results from across all firms. This all-firm data set is particularly valuable to firms on issues such as intersectionality, where single-firm analysis can quickly run into the small numbers problem (i.e. that it potentially becomes possible to identify respondents). While the overall picture will not necessarily reflect the precise situation in any one firm, it provides a starting point where evidence would otherwise be unavailable.
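The small numbers problem described above is commonly handled by suppressing any breakdown whose respondent count falls below a minimum cell size. A minimal sketch of that idea; the threshold, group names and counts here are purely illustrative and not FSCB figures or methods:

```python
# Suppress small cells in a demographic breakdown so that
# individual respondents cannot be identified from the counts.

MIN_CELL_SIZE = 10  # illustrative threshold, not an FSCB figure

def publishable(breakdown):
    """Return the breakdown with small cells replaced by None."""
    return {group: (count if count >= MIN_CELL_SIZE else None)
            for group, count in breakdown.items()}

counts = {"Team A": 42, "Team B": 7, "Team C": 15}
print(publishable(counts))
# {'Team A': 42, 'Team B': None, 'Team C': 15}
```

Pooling data across firms, as the all-firm data set does, raises the count in each cell and so lets more breakdowns clear whatever threshold is applied, which is one reason aggregate analysis can answer questions that single-firm analysis cannot.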
Measurement is a means not an end, and measuring the right things badly or the wrong things well is likely to have unfortunate consequences in any context. Bringing a rigorous approach to how we assess organisational culture by measuring and interpreting carefully what is both relevant and quantifiable enables us, however, to understand and manage our organisations better and keep learning and improving.
If you would like to know more about the FSCB survey or any other aspects of our work with firms in financial services and elsewhere, please visit our website or get in touch; all questions, comments and ideas are always more than welcome!
This blog follows Alison Cottrell’s contribution to the IBE’s recent seminar on measuring ethical behaviour.