
On this episode of Eat Sleep Code guest Arthur Doler discusses how our brain interprets cause and effect, the ways in which it wants to think of things as narratives, and all the tricks it does to save itself from having to think. Arthur shares his perspective on cognitive bias and how it affects the software development process.

Arthur Doler

Arthur (or Art, take your pick) has been a software engineer for 13 years and has worked on things as exciting as analysis software for casinos and things as boring as banking websites. He is an advocate for talking openly about mental health and psychology in the technical world, and he spends a lot of time thinking about how we program and why we program, and about the tools, structures, cultures, and mental processes that help and hinder us from our ultimate goal of writing amazing things. His hair is brown and his thorax is a shiny blue color.


00:00 Ed Charbeneau: This podcast is part of the Telerik Developer Network. Telerik, by Progress.


00:16 EC: Hello, and welcome to Eat Sleep Code, the official Telerik podcast. I'm your host, Ed Charbeneau, and with me today is Arthur Doler.

00:24 Arthur Doler: Hi, how are you doing?

00:25 EC: Good, how are you doing?

00:27 AD: Doin' all right.

00:28 EC: We are at Stir Trek 2017, I just got finished talking about ASP.NET Core, and you're here talking as well. What is your talk about, Arthur?

00:39 AD: My talk today, the title is The Saboteur in Your Retrospective, which is a clickbaity title, but it's about cognitive bias in the retrospective and how to help combat it.

00:49 EC: Okay. So that's a very interesting subject, can't say that I've seen that one at a conference before myself. Before we get into that, let's talk a little bit about you, where do you work and what do you do?

01:02 AD: I work for a company called Aviture, A-V-I-T-U-R-E, not Aperture, unfortunately... well, depending on which way you say it, it might be fortunate. But I am a software developer, have been for quite a while, and I like to talk about things that are psychological in nature. Psychology is kind of a hobby of mine, and I think about how we implement those things, how we take things that we've learned in psychology and apply them to things like development processes in software.

01:32 EC: Yeah, one of the things I like about our industry is there's people like yourself that have these interesting hobbies besides what we do, and we see a lot of people with different backgrounds, different hobbies, that bring those things into software development and come up with some interesting points of view. So I'm looking forward to talking to you about what your talk is about here at Stir Trek today.

01:57 AD: Okay. So the talk starts with talking about what... Well let's start with this, there is a great book out there by a guy named Daniel Kahneman, who's a psychologist, and he's also won the Nobel Prize in Economics. You don't get to do that unless you're a really good psychologist. So he's the guy that started and helped found behavioral economics, which you might have heard of. And behavioral economics is the same sort of principle where you take the... Traditional economics says that humans act in rational forms, like they always make the rational decision. And that's not true, like self-evidently. So behavioral economics is a lot more about how humans actually make decisions. So Kahneman wrote this book called Thinking, Fast and Slow that's all about cognitive biases.

02:45 AD: And he starts talking about this in terms of these things he calls System 1 and System 2. They're not really regions of the brain, not like your left hemisphere, right hemisphere, but they're notional agents, things that work inside of your brain. And System 1 is this fast thinking brain that operates and is designed to give you quick answers and easy answers, not just easy but ones that save you time and energy, and more importantly glucose, which you know is the fuel that runs your body. System 2, on the other hand, is the stuff that gets us culture and logic and all the rest of the stuff. It's the slow thinking brain, it's the thing that helps us go through and do mathematics and logically think out the pros and cons of something.

03:30 EC: So these are evolutionary traits, right?

03:33 AD: Yeah.

03:33 EC: So this is like back in the day this might have been life or death decision making versus, "Where am I gonna find food later?" [chuckle]

03:41 AD: Yeah, exactly. I could eat this food now, or am I gonna find something better later? So the way I like to think about it, System 1 is essentially this group of heuristics that your brain uses to save itself time and energy, because the brain takes a lot of energy. I know that I say in my talk, when I started working as a developer I came home the first couple of days and I was just exhausted, which made no sense 'cause I'd spent the entire day in a desk chair, right? But thinking does use glucose, it uses energy that your body takes in.

04:16 EC: Sorry, I'm snickering a little bit under my breath here 'cause my wife is a fitness trainer and she wonders why I'm exhausted from sitting all day.

04:25 AD: Exactly.

04:25 EC: So I'm gonna have to use that next time.

04:27 AD: Just tell her, "It's all the sugar, I need more sugar."

04:30 EC: Sorry, go ahead.

04:31 AD: That's the reason why everybody loves the candy bowl at work, it gets that glucose burst. But the point is that between these two systems, a bunch of these heuristics evolved and they worked really well, let's say on the savannas of Africa, but don't work very well when you pick them up and drop them in the middle of the New York Stock Exchange or a development environment. So basically, culture has developed way faster than the brain has, so we just haven't evolutionarily adapted yet to the culture that we tend to find ourselves in, and given the pace of change, we never will.

05:05 EC: Evolution is a very slow process when your technology is... These days it's just going gangbusters, it's exponential.

05:15 AD: Yeah, it's a new JavaScript framework every time you sneeze. I like to think about it in terms of the heuristics that we have, these things that we're working with. They are great for everyday life. They're the things that get us through, they're the things that help us work. But System 2, this logical stuff, has this critical flaw in that it doesn't ever look at those. It tends to just accept what System 1 comes up with and just says, "Oh yeah, okay, rubber stamp, great," and that's this concept called cognitive laziness, basically, that System 2 is cognitively lazy. And so starting to flesh these things out, to realize that they're happening, is kind of the first battle. Unfortunately, you can't get rid of them because these cognitive biases just kinda sit around. So a really good example, one of the ones I like to start with, is this thing called "narrative bias". And narrative bias is this way that humans tend to think about things in terms of stories and the way that we tend to look at causality. We tend to look at two events and say, "Okay, well this thing happened after this thing so it was caused by... " Not in a ridiculous sense, but if two things happen that were close to each other we tend to assume that they're related.

06:30 AD: So above and beyond that, we tend to think of people around us as characters in a story. In our story, more importantly, we're the hero. We tend to cast ourselves as the hero. And that has some interesting side effects. For instance, we tend to view people who stop us, or who block us, or who don't agree with us as enemies and that casting, that mentality that we view them as prevents us from looking at their points clearly. It prevents us from trying to find common ground with them and it prevents us from actually saying, "Hey," resolving those differences because we start off with that assumption, "Hey, this person's an enemy." And casting them like that is negative.

07:13 EC: And we stick with those first impressions a lot sometimes.

07:15 AD: Yeah. Especially because there's a whole host of cognitive biases around memory, but yeah, one of them is the primacy effect, which is the first time you encounter something, it tends to stick with you because, typically, in an environment where things don't change... The first time you see a tree, most of the rest of the trees are gonna be like that tree. But as a kind of side effect from that narrative bias, we have things like what they call the "fundamental attribution error", which sounds phenomenally fantastic but it's really kind of simple. Basically, we think that other people act the way they do because of inherent traits. "This person acts this way because they're lazy." "This person is smart." "This person is creative." On the other hand, we tend to view our own actions through the lens of the circumstances surrounding us. "I did this because, well, I had to because this other thing was forcing me." "I didn't finish my footnotes for my presentation until last night because I just had a lot of work to do." But if somebody else did that I would look at them and go, "Really, you weren't prepared for this? Come on guy." Right?

08:24 EC: "You're lazy."

08:24 AD: Exactly. And so that mentality is really... It's because we don't see the circumstances surrounding other people. We just see them and their actions. That's the actor-observer bias, another cognitive bias. So, that's a huge one, and there are actually a couple other attribution errors that I talk about, but those kinds of things... You can start to see where that stuff creeps into a retrospective almost immediately. This narrative of, "Oh well, the BA for this team just hates me. She always has it out for me." That's not productive. Self-evidently so in that case, but that kind of mentality is also not... Even if you're not voicing it like that internally, you will have that mentality of, "Well, she's gonna be looking out for something." Even if we both know, "Yeah, we're kinda on the same team and she cares about the product and the way this goes," you think, "she's just gonna look for a reason to kinda dump on me." And it's the fundamental attribution error especially, because then, "Well why is she doing that?" We don't think about, "What are her circumstances?" Maybe she has problems with her boss, or the circumstances surrounding her.

09:32 EC: Yeah. I don't know if this sits in the same bucket or not, but I've seen an experiment done where they had somebody go for a job interview and they had that person interview with several people and the first... The actual responses they gave to the interviewer were the same but they ordered them differently. And some of them were negative responses and when they gave those responses first, they were turned down for the job. If they gave the positive responses first, they got the job but the whole conversation was exactly the same.

10:07 AD: Yeah. That kind of stuff is really interesting. There's another really fantastic study that they did with interviewing, where they took some professional interviewers and had them interview candidates, and then recorded those and took the recordings and showed them to college students. The psychology undergrad is the great unsung hero of psychology 'cause they're the subject in 95% of the experiments, 'cause you have them in hand. But they showed this to these undergrads and said, "Okay, can you predict whether or not they were offered the job, or whether they would have been?" And they were 95% right, and so the psychologists were like, "Okay, we're gonna make this harder." So they cut it in half and then showed them just the first half, and they were still 95% right. And they kept cutting it down to see what it was, and the break-off point, where the students actually started to fall off worse than the professionals, was 10 seconds. Which means that within the first 10 seconds of an interview, you have made an impression and you have probably determined whether or not you're gonna get hired, which is crazy. And this is why... If you read about Google's policies, they have people interview you, and then those people take a whole bunch of notes, and then those notes go anonymized to the people who are actually making the decision, so that there is a level of indirection between those two.

11:34 EC: Yeah. That's interesting. And are these things that we can kind of turn off, or overcome, or is it ingrained?

11:42 AD: Most of this stuff is completely ingrained. Awareness is kind of the first cornerstone in that battle. And I shouldn't really couch it as a battle, because if you do think of it as a battle, like a, "I have to fight this," you're going to lose, and not only will you lose, but you'll be exhausted by the end of it, and that's even worse, because when you're exhausted you tend to just accept things from System 1 even more. So I tend to think of it as, you just have to accept these things and practice, to get all zen or mindful, to have that culture of acceptance of yourself and go, "Okay, these are just things that my brain is gonna come at me and provide, because that's what it was designed to do. And I just have to be aware that they're gonna come. And when they do, I can evaluate them in the context of everything else, just not necessarily accept that first thing."

12:33 EC: Are there any tools for doing that like any practices that...

12:37 AD: Agile helps a ton. If you've ever used the Lean Coffee technique, you're familiar with that one?

12:43 EC: Not myself, no.

12:44 AD: Okay, so Lean Coffee is a technique where you write down topics to talk about on Post-it notes or on note cards. Each member of the team individually writes them down, and then you vote. Everybody gets three votes, just kinda the traditional way, and then you sort the topics by number of votes. Start at the top, and then after every five or 10 minutes, depending on how much time you have, you take a vote of everybody in the room, and they do a thumbs up, thumbs sideways, or thumbs down: "I wanna keep talking," "I could keep talking but I don't care," and "we're done talking about this topic."
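As a rough illustration of the vote-and-sort mechanics Arthur describes (not part of the conversation, and all names here are made up for the example), the topic-ordering step of Lean Coffee could be sketched in a few lines of Python:

```python
from collections import Counter

def lean_coffee_order(ballots):
    """Sort discussion topics by total votes, highest first.

    ballots: one list of chosen topics per person, at most three
    votes each, per the traditional Lean Coffee rule.
    """
    tally = Counter()
    for votes in ballots:
        if len(votes) > 3:
            raise ValueError("each person gets at most three votes")
        tally.update(votes)
    # most_common() yields topics from highest vote count to lowest.
    return [topic for topic, _ in tally.most_common()]

# Example: three team members each vote on proposed retro topics.
ballots = [
    ["flaky CI", "sprint scope", "on-call load"],
    ["flaky CI", "on-call load"],
    ["flaky CI", "sprint scope"],
]
print(lean_coffee_order(ballots))  # "flaky CI" (3 votes) is discussed first
```

The timed thumbs-up/sideways/down check-in then decides when to move down the sorted list; that part is a human protocol rather than anything worth coding.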

13:18 AD: And doing it that way helps mitigate some of the things I talk about in my talk like framing or anchoring on particular topics. It also helps people not hide topics that they want to talk about. Because if somebody comes out and they say, "Well, I thought the sprint went great and it was amazing and everything is fantastic." somebody who thought that the sprint didn't actually go very well is not gonna come out and say, "Okay, well I actually didn't agree." unless you have a very open and trusting environment which is another way to do this.

13:49 EC: This gets you past like the whole bikeshedding thing right?

13:52 AD: It helps with that as well, yeah. I've done a lightning talk on bikeshedding, that's a great one. But yeah, there's some techniques we can use. Agile itself is a way to help us avoid some of that stuff. It does bring a lot more decisions into play, though. Typically a waterfall environment or something like it is very top down, right? You have these decisions being pushed at you and you don't necessarily get to make a lot of the calls.

14:19 AD: An Agile development team, if you're doing Agile right, is right up in the business of making the calls on these business oriented decisions. So in some ways it helps and in some ways it hurts. But there are other things you can do. Mindfulness meditation is actually a really good one, and it's something I haven't yet done at work. I've always been tempted to just start any meeting that I'm running with five minutes of meditation, just to get everybody focused, 'cause you get so many meetings where people are just in other places. And when people are distracted, when people are tired, when people are stressed, that's when they're more likely to let things from System 1, from this intuitive, unconscious portion of yourself, slide through.

15:07 AD: Because they're too busy paying... Their System 2 is too busy paying attention to something else to notice that the thing from System 1 isn't right. That response is not right for this situation. So, there's a lot of this stuff. Daniel Kahneman, the book is 400 pages but it's the fastest 400 pages I've ever read, if that helps. But it's really fascinating. He goes into a lot of stuff about... He spends a lot of the book talking about, "Well, when can we actually trust this intuition?" And that's a fantastic subject, a fascinating one that I like to think about. Just when you can actually trust the gut reaction that you have. Short answer, it's when you know that it's been trained. When you know that you've experienced the feedback from that decision and you've emotionally internalized it. The longer the decision point is from that feedback, the longer that space, the harder it is for you to connect those two and less likely it is that you've trained it properly.

16:06 AD: So, things like stockbroking... That's not the term... Being a stockbroker is one of those things where the moment you realize a gain or loss on a stock is so vastly different from when you actually decide to make that purchase that it doesn't train right, which is why you see companies going to these automated trading services more and more, these machine learning style things. So that's another fun topic, to go, "Okay, well where are the places where humans aren't gonna succeed? Where we just have these biases, these heuristics. How can we use things like machine learning?" First off, how can we use machine learning to help? And then second, which is almost more fascinating to me, where are the places where our biases are affecting how we set up our machine learning and how we set up these automated systems? Because at that level, right, somebody's doing the coding. Somebody's setting up the algorithm. And are their biases affecting how they set that up?

17:08 EC: Or even the results, you can imagine maybe machine learning comes back with something you don't really wanna see or accept?

17:15 AD: Yeah, exactly.

17:16 EC: Maybe your personal beliefs or stance on the subject just conflict with the results?

17:23 AD: Right, and so it's kind of like, "Yeah, did you come back with something? Well, are you gonna rerun it again with different data?" There's a whole bunch of fun stuff with that... As software gets into a lot more of this big data stuff, it starts to pull in a lot of things that science has had to develop, almost a peer review process of not looking for correlation, just looking to see if something is there, and if it is there, then great. If it's not, then that's also a result. You don't just try again. So...

17:54 EC: Try to strong arm what you feel is there.

18:00 AD: Exactly. Confirmation bias is a huge one and I could probably do an entire hour talk just on confirmation bias. That's the kind of stuff that I think about. I do develop... I almost said I develop on the side. It feels like it sometimes, but no, I develop in my day job so I try to bring some of this stuff too, when we do retros, when we do things, when we talk about stuff and to help the environment that we code in, the culture that we code in, be a place that is a lot more open that people can, they don't necessarily feel some of the pressures that lead to some of these biases.

18:34 EC: Now when you try to work some of the stuff in at work, do you do it where other people don't know you're doing it, or do you talk a little bit about what you're doing and why first?

18:46 AD: I try to give them a little bit of the why, 'cause I know I'm always pissed off if somebody's like, "Let's do this." The first time somebody tried to do an improv game in a retro, I was like, "What are you... Why? Why are you doing this? I don't... " And they didn't explain, because they hadn't quite done the research to figure it out. But just for reference, that does make a lot of difference in terms of your team trusting each other, because the improv community spent a long time working on ways to get your team to gel, 'cause with improv you have to be with each other constantly. Sorry, slight aside, but people can tell when you're trying to manipulate them. As humans we're smart, we're social animals, and we understand that. People will understand if you were trying to do something like that, and they will just straight up rebel, so it will work worse than you thought it would.

19:31 EC: Okay. So you don't go straight in with the experiment, you kinda say, "This is why we're trying to do it this way", rather than kinda see what the results are just by tossing it in there.

19:43 AD: Yeah. Like I said if you work for... I work with a bunch of smart people. They're gonna figure out I'm doing something. They may not know what, but they know something's going on.

19:52 EC: They're waiting for the phishing attack or something.


19:56 AD: Yeah. Pretty much.

20:00 EC: So, you get to do your talk this afternoon. It's been a great event so far. We've seen a really good turnout.

20:06 AD: Yeah.

20:07 EC: Is there anything in particular you're looking forward to seeing at the events?

20:13 AD: I just saw one... Just the talk previously, was about why your Agile isn't. So that one was a really good talk.

20:22 EC: I think I've seen this session before. You get into a lot of events, and you see a lot of people very often. That sounds very familiar. I bet I know the person that's doing it.

20:34 AD: Yeah, I've met Travis before at a... Oh Lord, where was it? Somewhere last year, all the conference rooms... Of course when I want to get...

20:43 EC: Probably CodeMash, that's where everybody meets.

20:45 AD: I actually have not been to CodeMash.

20:46 EC: Never been to CodeMash?

20:47 AD: No.

20:47 EC: It's a good one. A lot of great events up here in the Columbus, Ohio area.

20:52 AD: Yeah, I was kinda surprised. Between Nashville and Columbus there's a bunch of interesting stuff over in this side of the country from where I come from, which is Omaha.

21:01 EC: That's right, you're from Omaha. Where can we find you online?

21:06 AD: So you can find me, my Twitter handle is @arthurdoler, that's pretty much where I tweet irregularly, but most of my... Talking at conferences, stuff like that, it will be there.

21:18 EC: Blogging anywhere?

21:21 AD: No. Not through lack of desire, mostly through lazy...


21:27 AD: Too busy reading books, I can't blog... No. I really should...

21:30 EC: Our biases telling us you're lazy.


21:31 AD: I know, that is true. I really should be, but I have not done that yet.

21:37 EC: It's very difficult to find time. I write as part of what I do so my personal blog really gets the short end of the deal.

21:47 AD: Okay. I guess, technically, I do have a post coming up on our work blog, on Aviture's blog, but even then it's hard to find time for it.

22:02 EC: We'll also get in touch and get some links to the books and stuff that you've mentioned and put those on the show notes for people. Appreciate you giving me some time today at the Stir Trek event.

22:14 AD: Absolutely.

22:14 EC: It's been a lot of fun and good luck with your talk today.

22:17 AD: Thank you very much.

About the Author

Ed Charbeneau

Ed Charbeneau is a web enthusiast, speaker, writer, design admirer, and Developer Advocate for Telerik. He has designed and developed web based applications for business, manufacturing, systems integration as well as customer facing websites. Ed enjoys geeking out to cool new tech, brainstorming about future technology, and admiring great design. Ed's latest projects can be found on GitHub.

