The Problem Of Bias: Watch Your Thinking!
You’ve just rolled out a brand new candidate experience for hiring that automates many tedious tasks, like scanning resumes for matching skills and experience. For those candidates who make it through the AI filter and the initial recruiting screening call, the system even sends out a survey automatically!
How Much Do You Think About Your Own Thinking?
The results are in: while the survey return rate is lower than expected, candidates are absolutely satisfied with the experience. What could be wrong with this? You’ve just been BAMMed!
BAMM Is Not A Thing
BAMM is not an official thing, but in my data literacy workshops, I refer to the collective elements of biases, assumptions, myths, and misconceptions as “BAMM.” At the end of the day, it’s not the labeling that matters, but rather how much you can mitigate them in your decision-making process.
What’s Common In BAMMs?
They are invisible and often undetected while making a significant impact on how you think:
- Biases
These are cognitive shortcuts that skew perception and judgment. They arise unconsciously and lead us to favor certain ideas or groups over others. For example, in L&D, confirmation bias might lead us to only use metrics that support the perceived success of a training initiative.
- Assumptions
These are beliefs we take for granted without evidence. Assumptions often simplify complex scenarios but can lead to blind spots. For instance, assuming all employees prefer self-paced eLearning might result in underutilized resources.
- Myths
These are widely held but false beliefs. Myths persist due to repeated exposure and cultural norms. The “learning styles” myth—the belief that tailoring training to visual, auditory, or kinesthetic preferences improves learning—is a classic example in L&D.
- Misconceptions
These are inaccurate understandings or interpretations of concepts. They’re often rooted in incomplete, outdated, or simply half-true information. A misconception in L&D might be equating high course completion rates with learning effectiveness.
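To make that last misconception concrete, here is a minimal Python sketch showing how completion and effectiveness can diverge. The learner records and the "applied on the job" field are hypothetical, invented purely for illustration:

```python
# Hypothetical learner records: completion says little about application.
learners = [
    {"completed": True,  "applied_on_job": False},
    {"completed": True,  "applied_on_job": True},
    {"completed": True,  "applied_on_job": False},
    {"completed": False, "applied_on_job": False},
]

completion_rate = sum(l["completed"] for l in learners) / len(learners)
application_rate = sum(l["applied_on_job"] for l in learners) / len(learners)

print(f"Completion rate:  {completion_rate:.0%}")   # share who finished the course
print(f"Application rate: {application_rate:.0%}")  # share who used it on the job
```

A dashboard reporting only the first number would call this program a success; the second number tells a different story.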
A Lurking Bias: How To Identify And Address It
BAMMs influence decisions at all levels, from program design to data interpretation, which makes identifying and addressing them systematically crucial. Let’s return to the original story: what BAMMs might be lurking that we need to be aware of?
Assumptions And Confirmation Bias
First, without user testing, you may be relying on assumptions about the new software. From my experience looking for a new role, I can tell you that the most painful and frustrating part of the process was the application phase, including the dreaded applicant tracking systems (ATS) and their selection bias [1].
Survivorship Bias
Next, your system asked for the opinion of only those who made it through the ATS and the human screening. This might just be an example of survivorship bias. Wouldn’t you want to know about the experience of those who did not make it? Or, even worse, of those who would have been great candidates but decided not to apply based on their experience?
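To see how surveying only the "survivors" skews the numbers, here is a small, purely illustrative Python simulation. The satisfaction scores and the pass model are invented assumptions, not real hiring data; the only point is that when passing the filter correlates with a smoother experience, the surveyed group reads as happier than the full applicant pool:

```python
import random

random.seed(42)

# Hypothetical: each applicant has a true satisfaction score (roughly 0-10).
applicants = [random.gauss(5, 2) for _ in range(10_000)]

# Assumed pass model: applicants with a smoother experience (higher score)
# are more likely to survive the ATS and screening call.
survivors = [s for s in applicants if random.random() < s / 10]

avg_all = sum(applicants) / len(applicants)
avg_survivors = sum(survivors) / len(survivors)

print(f"Average satisfaction, all applicants:     {avg_all:.2f}")
print(f"Average satisfaction, surveyed survivors: {avg_survivors:.2f}")
```

Under these assumptions the surveyed average always lands above the true average, which is exactly the trap in the opening story: the happy survey results describe the survivors, not the candidate experience.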
Courtesy Bias (Response Bias)
And what about the results? One form of response bias is telling not the truth but what is expected or socially more acceptable. Think about it: these candidates want the job. Would they really tell HR how bad the experience was? And finally, there’s your own confirmation bias: you just invested a large sum of money and resources to implement this system, and you really want to hear good things about this effort. Confirmation bias can influence what questions you ask and how you phrase them. It also impacts how readily you accept results you like and reject results you do not.
Can We Completely Eliminate BAMMs?
No. But just by being aware of their existence and taking practical steps to mitigate them, you can lower their influence on your decision-making. Here are examples of each BAMM, with strategies to mitigate them:
1. Bias: Confirmation Bias
- Definition
The tendency to search for, interpret, and recall information that confirms preexisting beliefs. Confirmation bias often results in echo chambers (where everyone believes the same thing) and bandwagons (doing something just because everyone else is doing it).
- L&D example
Only collecting feedback that aligns with your belief that a new training program is effective. When you set out to “prove the value” of the program, you may limit data collection to those factors that you believe will back up your theory.
When I run data literacy workshops, L&D teams often state at the beginning that data analytics is important to them so they can prove the value of L&D. By the end of the workshop, they rephrase this statement, because analytics is about understanding what works and what doesn’t, and about predicting what will work and what won’t. It is about making data-driven decisions, not about setting out to “prove” value.
- Mitigation
Use diverse feedback channels and actively seek contradictory evidence to challenge assumptions. For example, internally, at Intel, I shared my AI assistant called “Holey Poke” with other L&D folks. “Holey Poke” pokes holes in your idea, argument, or plan. I used it to challenge myself before I would socialize something with others.
2. Assumption: Engagement And Effectiveness
- Definition
This is actually a two-in-one. First, believing that specific metrics, like User Interface interactions in an eLearning course, equate to full engagement. Second, believing that if something is engaging, then it is effective learning.
- L&D example
The stakeholder says the content is pretty dry, so you need to “bring it live” with interactions. This approach may result in lots of clicking, dragging-and-dropping, clicking-and-revealing, etc.
Engagement is not only physical action. It also has an emotional (affective) component and a cognitive component, so measuring engagement means measuring all three. Additionally, over-indexing on the affective domain can lead to pure entertainment. Finally, effectiveness in the workplace means employees can apply what they learned on the job to get things done, and do it well. Effectiveness needs to be defined and measured up front.
- Mitigation
Measure application and real-world outcomes, not just engagement with content. Always design for and measure all three components of engagement. Remember, people don’t come to work to be entertained.
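As an illustration of scoring all three components rather than clicks alone, here is a minimal Python sketch. The component names, the 0-1 scales, and the equal weights are all assumptions made up for this example; in practice you would weight whatever actually predicts on-the-job application:

```python
def engagement_score(behavioral: float, affective: float, cognitive: float) -> float:
    """Combine three normalized (0-1) engagement components into one score.

    behavioral: interaction signals (clicks, completion)
    affective:  self-reported interest and enjoyment
    cognitive:  effort and transfer signals (reasoning tasks, application)
    """
    # Equal weights here, purely for illustration.
    return round((behavioral + affective + cognitive) / 3, 2)

# A course can score high on clicking and enjoyment but low on thinking:
flashy = engagement_score(behavioral=0.9, affective=0.8, cognitive=0.2)
plain = engagement_score(behavioral=0.5, affective=0.6, cognitive=0.9)
print(flashy, plain)
```

The "flashy" course wins on clicks and smiles yet scores lower overall than the plainer course that makes learners think, which is the point of measuring all three components.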
3. Myth: Digital Native
- Definition
The belief that younger generations inherently excel at technology. But, but, but… Yes, younger generations may be much faster at sending messages, but try exchanging emails with them.
- L&D example
Assuming everyone knows how to use Excel to create a pivot table, or assuming everyone knows when and how to use email or set up a meeting, just because they grew up using technology.
- Mitigation
Regardless of age (“generations” are often a myth in themselves, btw), assess skill gaps and set clear expectations. Show what “good” looks like and explain the decision-making, not only the steps to take. Communication skills are often intertwined with technology barriers. Teach the two together. Don’t teach “communication skills” or “empathy” out of context. Show them how to do the task while applying those soft skills. Speaking of skills…
4. Misconception: Skills-Based Learning And Skills-Based Organization
- Definition
Well, that’s where the misconception lies: it seems every organization has its own definition. Generally, a skills-based organization (SBO) is a business model that prioritizes the identification, development, and deployment of employees’ skills over traditional job roles or titles. Skills-based learning enables that.
- L&D example
Donald Clark’s (as usual) thought-provoking blog and LinkedIn post about skills-based organizations triggered some emotions:
In workplace learning we need to stop distracting ourselves with abstractions. The “Skills-Based Organisation” has long been an empty trope, because we have been seduced into thinking that abstract nouns like leadership (lots of spend but so little of it), culture, diversity, equality, values, inclusion, resilience, etc., are “skills” or some mysterious miasma that will encourage and produce skills [2].
I always read Donald’s posts, not because I always agree with everything he says, but that’s actually the point: if you keep reading what you totally agree with, you’re never going to be challenged, and you’re never going to evolve.
Nick Shackleton-Jones had a comment that is extremely important and relevant to this article on BAMMs:
(rephrasing) We often implement not what something is supposed to be, but rather the convenient version we actually can.
- What does a “convenient version” look like in practice?
You cherry-pick a label like mobile-first and implement videos because that’s the only tool you have for mobile (whether you need videos or not). You go with game-based learning for engagement, but you have neither the resources, time, expertise, nor tools, so you end up with Jeopardy. You start gamification without any deep expertise in motivational theories and behavioral science, and you end up with points, badges, and leaderboards. You go with microlearning, but basically, it’s just shorter content. You get the picture.
- Mitigation
Don’t spend years building out “skills libraries” with abstract definitions and then ask L&D to build “communication training.” Start with what needs to be done. Start with what “good” looks like where it matters. Not every skill is equally important. Skills also exist on a scale: I can dabble in programming but would not build your next enterprise application. Skills decay if you don’t use them. And skills must be measured not only by assessments about skills but also by the output you’re supposed to create by applying them.
So, next time someone asks you to build communication or empathy training, ask them what needs to be done and how. Then, show how to apply good communication and empathy skills to that particular task.
Now you know about BAMMs. Don’t get BAMMed!
References:
[1] ATS’ are awful: here’s what you should know
[2] Reclaiming Productivity: Aligning Work, Learning, and Societal Needs
Originally published on December 28, 2024