TEI Talks 10 of 18 - What Is Business Research
6:26PM Feb 20, 2021
Welcome to TEI Talk number 10 of 18. It's a good time, I think, to start with a little bit of context. There have been nine of these talks, at roughly an hour each, give or take. That is nine hours of me talking about the first challenge in The Expertise Incubator: the challenge of publishing something daily, every day you work, on the internet. That's a lot, and I really hope it was sufficiently dense and useful. One of the pieces of feedback I've gotten about that first set of nine talks is that maybe one of the best insights in there is this idea of cow-pathing: wandering through the field, the pasture, defined by the overlap between what you're interested in and what might be important to your clients, with almost no guiding structure. You're just following your interest through that pasture, and as you do so, you come across things that are really interesting and really worth exploring more. And that's happened here with this set of talks, where, if you've been paying attention along the way, you've seen some ideas develop more. I think that's one of the unique capabilities, or powers, of forcing yourself to package up your thinking into a live talk: it's a kind of forcing function for structure that sometimes helps you develop the ideas beyond what you could develop in just written form. That's been interesting to see, and I hope it has been valuable. We're moving on. I've said about everything that I can think to say that I think is valuable about publishing daily, and there's so much there; it's a really rich practice. We're moving on, though, to talk about the next thing, which is small scale research. This is something I would categorize as an emerging field, and you'll feel that as we talk through it: you'll notice that I use different terminology to talk about the same thing, and I come at it from different angles.
And that's really because it's an emerging field. What I want to do today is establish context. I want to ground you in the context that surrounds this idea of small scale research, and we're going to go further afield and talk about research writ large in order to build up that context, and then a sort of map of where small scale research fits. With that knowledge, I think you'll be more productive and more effective when you start to deploy this in your own business. Or, if you're going through the set of three challenges in The Expertise Incubator framework, when you get to that second challenge, I want to make sure you don't get derailed. So today, and for the next few of these TEI Talks, we're going to talk about the second challenge of The Expertise Incubator framework, which is small scale research.
Let's talk about why small scale research, and then in a little bit I'm going to zoom out and talk about research writ large and build that context. Let me try to establish a case for why you would consider doing this, because let me get this out of the way up front: it's an investment, it's speculative, and it may not directly translate to massive amounts of value. There's a good chance that it will translate to value, but it is speculative. This is an overview of five reasons why I think it's worth doing. I'm going to explore each one of these in more detail very soon, but briefly: it will force a literature review, which is actually a productive practice and something most of us have never had cause or reason to do. It will more effectively connect you to context, and that's a good thing. It can lead to the creation of intellectual property. It will enhance your point of view. And it will help your clients make better decisions. We're going to explore each one of those in some detail. A literature review is when you say: well, who else has done research about this? Who else has investigated this particular question that I am thinking I will conduct research to investigate? So it's a review of the prior art, the work that others may or may not have done to answer the same question. A literature review is the first step in small scale research, and it's something that is easy to never do without the forcing function of saying, I want to invest some time and energy into some research. What you would naturally do to de-risk that research project is begin with a literature review. There's a participant in The Expertise Incubator named Jim, who's wonderful, and I'm going to give some examples from some research that he's currently conducting, because it really illustrates some of these points well.
Jim came to the research challenge of The Expertise Incubator with the sort of question that almost any of us would have when we reach this point: the question is not fully baked yet. It's a little fuzzy around the edges, it's probably a little bigger and broader in scope than it should be, and it's not crisply defined. Jim's question at this point, when he was first entering the second challenge of The Expertise Incubator, had to do with his work, which at the time he was thinking of in terms of strategic narrative or strategic storytelling work. That's how he would have described his work; that's already started to shift as a result of the research he's starting to do. Anyway, that was the context for him. So he had this question: is what I do actually valuable? The drinking-at-a-bar version of his question is: is what I do just kind of bullshit that people pay for based on assumptions, when nobody really knows if this works and we can't prove it? Is it just something that I've been fortunate enough to stumble into that people will pay for? That really was the question, and it led to a research question (again, not super specifically formulated at this point, but that's normal). The research question was: does investing in strategic storytelling or strategic narrative for an organization have measurable benefit? So, as he should, he started with a literature review. Who else has looked into this? Answer: no one. None of his competitors, nobody in the space, nobody in the business space. But he did find an academic who was applying this idea to a different domain, having to do with criminal defense for accused criminals in the state of Texas.
I won't go into any detail on that, but that was the only thing his literature review uncovered. There was really no prior art, nobody else doing any work similar to what he was doing; he would have been the first in his domain to ask that question. We do the literature review for two reasons. The first-order effect is that we want to find out if anybody else has already pursued this question, so we avoid duplicating their effort, or we build upon what they've done rather than duplicate it. The second-order effects of a literature review, though, are numerous. It can lead to connections, new ideas, reformulating your question, thinking more deeply about your question. All of this makes you smarter, makes you a better self-made expert. Even if the literature review itself doesn't uncover very much of value, the second-order effects can be very valuable.
The literature review tends to expand the scope of your thinking outward, and so you start thinking about context in a deeper or more serious way. Maybe you already do this all day long, maybe you've already done this, but a lot of us can easily not do it. We can very easily just move along in a straight line in our career and never really have to think about the larger context we operate in. Here's how that worked for Jim. Again, he was pursuing this question: does strategic storytelling or strategic narrative have measurable benefit for businesses? Or, does investing in this have measurable benefit? That question is already massive. Because how do you measure it? How do you measure ten or twenty different things? How do you make sure those things are consistent across all the businesses you're measuring? It's a massive question. But what Jim actually found, when he moved past the literature review stage and started thinking about posing this question to his market in the context of a research project, was that he kept running into a sort of internal resistance. The resistance is a feeling of: I'm not sure people are even going to understand what strategic narrative is. Because I, as an expert, get it. But I notice when I have conversations with people about it that everybody has a different definition, or there's a bit of a head-scratching, glassy-eyed moment when I bring the topic up. So what Jim found was that the context around this idea of strategic storytelling or strategic narrative was really underdeveloped, and it just didn't plug in very well to the context his clients operate within, the world they live in. There was a contextual touch point that was relevant, though, and that is the idea of organizational purpose.
And very often there is a link between the usage of storytelling or narrative within the leadership of an organization and this idea of organizational purpose. Jim found that by shifting the question, all of a sudden things opened up, and he was able to integrate with the context, the mental and cognitive landscape, that his clients inhabit. They have in that landscape an idea, almost an infrastructure, around this idea of organizational purpose. People understand what it is. There are people like Simon Sinek who have been out there laying the groundwork, I suppose, for people in Jim's world to be thinking about this idea. So Jim doesn't have to build that infrastructure; he can just connect with it. A research project can be a forcing function that forces you to grapple with important mental modeling work that is very easy to leave undone. Again, it's very easy to just do your work, get well paid for it, and not really consider the larger context. But when you start doing research, it forces you to think about and grapple with that context in a more serious, more comprehensive way. And that's a good thing. Research can be used to create intellectual property. Quick definition: intellectual property is your expertise, packaged and made usable without your presence, involvement, or physical/emotional labor. Again, IP in our world is your expertise packaged and made usable without you there, without your involvement. That can range all over the place, anywhere from a book, an instructional manual, some kind of framework, some sort of decision-making matrix, a way of modeling the world, a simple calculator on a website, a data set. And that's not the end of it; you can bake intellectual property into software.
It can be a framework like the mapping framework Simon Wardley advocates for. Any and all of those things. So it's usually not the kind of thing lawyers are suing each other over. Anyway, the wonderful thing about IP is that it decouples value creation from your direct involvement, and research (no guarantees here) can lead to the creation of IP. Research can also lead to a valuable upgrade in your point of view. Point of view is something we talk about a lot; it's kind of an assumed good thing to have a point of view. And it is a good thing, but it's also something we all have. There's an interesting way to look at point of view that takes the phrase literally and thinks about it as a literal phenomenon: you're standing somewhere in the world, and as a result of where you stand, you see things in a certain way. If you were to stand somewhere different, you might see things differently. Your point of view is the outcome of where you stand, what you see, and how you use that to benefit your clients. This map that I've got up on the screen is a tool I use in a workshop I run on point of view, to help people think through: where do you stand, and, downstream of that, what do you see as a result of where you stand? There are four axes you can use to locate yourself on this map and define your point of view. Is your goal to create some kind of significant, perhaps radical, transformation for your clients, or some sort of milder, progressive optimization? Are you arguing for that transformation or optimization from your own experience, or from data? I should point out that these are all spectrums rather than binary, either/or things. You want to create some change, either optimizing in nature or transformative in nature. What is your style? Is it a really disruptive, let's-just-change-overnight kind of style, or a more gradual style of evolution?
And finally, is your status, as perceived by your clients, that of the expert outsider or the pedigreed insider?
I've run a workshop where we explore and cultivate point of view enough times to see a really interesting pattern, and that pattern is this. Almost everybody who participates in this workshop, which is usually some kind of self-made expert, indie consultant, advanced freelancer type of person, is coming from the left side of this map, where they are arguing generally from their own experience. And when I say their own experience, I don't mean to diminish that location; in fact, none of these locations you could occupy, in terms of a point of view, are inherently better or worse than any other. It's a question of how you use it. I have noticed, however, that most people are arguing from some form of experience: this is what I've seen, I've worked with tens, hundreds, whatever, of clients, and based on that, this is what I believe, this is how things should be done, etc. It's an argument from experience. And almost everybody wants to move towards the data end of things. Everybody wants to be more data-driven in their point of view. There are some exceptions; I think some people recognize that there are things you can do when you argue from experience that you can't do when you argue from data. So again, neither one of these is explicitly better. But everybody who is on the experience end seems to have this kind of hunger to move towards data, because data is powerful. Data is also not just one thing. Data is often a story that we tell ourselves to enhance our feeling of well-being while making a decision. Data can also be a way to make better decisions. Data is powerful; it's not just one thing.
What if data is really a story we tell ourselves to enhance our feeling of well-being while making a decision? If that's true, then it's entirely possible that, because of that story (we used data to make this decision, we were as data-informed, as data-driven, as possible), the implementation of the decision is better. And that leads to a good outcome, maybe even if the decision wasn't really all that great. Or maybe, if the decision was informed by bad data, we still get a good outcome, because we have the story: we used data, we really believe in this decision, and so the implementation is better. Alternately, maybe the data really is good, and it really does produce a better decision that survives a mediocre implementation. I think you can see that no matter what data really is in our imaginary situation here, it's powerful, but it's not magic; it doesn't automatically make things better. Sometimes it makes things better because we feel comforted by the presence of the data, even if the data is garbage. And maybe the data is good, and it really makes for a better decision, and our sloppy, mediocre implementation doesn't destroy the result, because it was actually a good decision. So what I'm trying to do here is argue that we get more fluent at using data, but we remain in a place of humility around the whole idea of data. There's this double album by Nick Cave: Abattoir Blues and The Lyre of Orpheus are the names of the two sides. And it's a wonderful experience to listen to that whole thing. One side is really heavy and rocking; the other side makes use of interesting elements like a gospel choir. It's a great experience to have those two polarities pulling at you. And there's a book version of that experience that I would recommend here: two books that I recommend you take a look at, if you haven't already. One is called Alchemy, by Rory Sutherland.
And that's a sort of argument for the futility of using data, or maybe not the futility, but the many ways in which it can lead us astray. It's a book about marketing, but I think it's bigger than that. There's another book, which I'll mention repeatedly today, by Douglas Hubbard, called How to Measure Anything. It's a much less fun read. But if you were to read these two back to back, you would have this wonderful experience of being suspended between two ideas about what data can do, and I think that's actually the right place to be, because it's not one or the other of them. You would be in a sort of intellectual hammock, suspended between these two points, which I think is a wonderful place to view and regard and think about the role of data. It's not magic, but it does have value. And research can lead to a shift in point of view that comes more from the data side of things. Finally, and not to be overlooked, data can actually lead to better decision making. This is really the bottom line; this is why I think we should invest in this: it can lead to better decisions. So many decisions are made based on gut feel, or all the heads in the room swivel to look at the senior person: what do you think? That's maybe not a terrible heuristic, but it can sometimes lead to really bad decisions. So data is empowering for you as the consultant, and also empowering for the client if they can trust the data, because in a lot of cases it really can lead to better decision making. This is the bottom-line reason why we might consider doing this research; this would be the first-order desirable consequence. As I've mentioned, there are a ton of desirable second-order consequences. One thing I'll elaborate on later but want to mention now: doing this whole thing in a spirit of service is critical, meaning your real motive has got to be: I just want to make things better for my clients.
If you don't have that, it can go astray in so many ways. So again, deciding to do a little small scale research project forces a literature review, which is so healthy; it leads you to better understand the context that, you could argue, you should have already understood anyway; it can lead to the creation of intellectual property; it can lead to valuable shifts in your point of view; and ultimately it can lead to better decision making for your clients. So I think it's worth asking: why don't we do more small scale research? In fact, why aren't we all doing this all the time, if it's so great? There actually are some reasons why we don't do it. Again, a preview of the five reasons: it's misunderstood, and there's really no training around it. It's intimidating. There are few examples we can just look at and imitate or be inspired by. And there's actually little incentive to do it. I know, I'm really selling this pretty well. So first, I think we misapprehend research generally, and business research specifically, and I'm going to spend some time in the second half of this trying to better contextualize those things. We sort of misunderstand what it is; we apply the wrong kind of ideas and mental models to what research is and could be for us. We'll see if I'm right about that. There is very little training in this kind of research that I advocate, this small scale research; I just have not come across any. So we may simply have no formal training in doing this kind of research. I think back to college: I did one little small scale research project there. I ran a survey, and the professor who was helping me with it helped me use SPSS
to do some analysis on it. So when I think about the experiences we have had, some of us have had experiences like that, but most of us, I think, are going to find no real useful formal training out there about this. I'm doing what I can to change that, not from a training perspective per se, but from an experiential learning perspective. I think we're intimidated by the idea of research, and I don't think we move beyond that initial ooh-this-is-intimidating point to where we can actually see what's required for small scale research, because we apply the wrong ideas; we misunderstand it. And I think we don't know that there's this fertile sort of third way of doing research, which I'll overview today and then unfold in much more detail in upcoming talks. We have a sort of availability heuristic problem when it comes to research. If I say the word research, everyone has some mental, cognitive shelving, and they're going to reach, on that shelving, for the examples they've heard of that fit the word research. The examples they're going to reach for are distributed unevenly along that shelving. The kinds of examples that are readily available, that respond to the availability heuristic and come to mind very easily, are either too big, or too small and trivial, and they don't seem to have the kind of value I seem to be talking about here. The availability heuristic makes it easier to call to mind things like big clinical trials (currently, for new vaccines) or just big, expensive projects that we could never, as soloists, replicate on our own. We just don't have the resources to do it. So the availability heuristic causes us to think about big clinical trials conducted with high levels of rigor, lots of investment, and lots of know-how when we think about research, and we say, well, that's not something I can do.
And then on the low end, the availability heuristic calls to mind all the fluffy state-of-the-industry sort of research products we've seen. They barely deserve to be called research products, because they are usually done not to aid decision making but as a sort of marketing thing. So there's this hollowed-out middle: we think about research, and we think about big, expensive, difficult things, and we think about flimsy, throwaway things that don't have much value. We don't think about what's in the middle, because there are not a lot of examples from that middle sweet spot. We don't have a lot of examples from our peers. I mean, they're out there; there just are not a lot of them, and they're not easy to think of, because not a lot of people do this kind of small scale research. Again, we've got tons of stuff from companies like MailChimp and so forth, where they want to tell you the best time to send an email to get a good open rate, that kind of thing. But those aren't great examples of decision-making aids. I think the reason there are not a lot of examples from this middle part, if I'm honest, is that there's very little incentive to do it. We practice an unlicensed profession, and a lot of us have great opportunity over the short term to just stay busy fulfilling that opportunity. Especially if you're doing something like software development or software engineering, there's just so much opportunity to focus on the short term,
be very billable, high utilization, very busy. And if some kind of crisis comes down the road in terms of demand for the thing you're busy doing now, it's not that hard to take the corpus of skills you've built up over the previous years and do some retooling: pick up a new skill set, a new language, and basically continue on without being overly interrupted or impeded by that crisis in demand. So this creates a situation where there's not a lot of incentive for us to level up our game beyond what improvisation, gut feel, past experience, and best practices suggest. If we just focus on the software world, it is incredibly tolerant of waste, so there's just not a lot of incentive to figure out how to eliminate waste, for example. And if a research project could help with that, there's just not a lot of incentive to do that research project. If we do become incentivized to try to make things better, a lot of times what we're going to do is short-acting marketing stuff: outreach, better business development. There's not a ton of incentive for the slow-acting, long-term, real-investment-required levers for making your business better. And what I'm going to talk about today is a slow-acting, real-investment-required lever for making your business better. So the context most of us operate in doesn't have a lot of built-in incentives for us to invest in research. The flip side is that this sclerotic context we're operating in has a lot of low-hanging fruit in it. So if you are inclined to invest in research, it's a wonderful tool for collecting a lot of that low-hanging fruit. It's not that hard to become a sort of thought leader, and research is one of the tools that can take you there. So again, I think the reasons we don't do this: it's misunderstood.
There just are not many formative early training experiences that would prime us to see the value of it or teach us how to do it. It's intimidating. There are not a lot of examples from that fertile middle area; there's this hollowed-out middle. And there's not much incentive to do it. But if you're willing to do it, I think it's a tremendous growing experience. One of the reasons this is baked into a challenge in The Expertise Incubator framework is that it has value, and I've discussed some of the forms of value: the first-order effect of enabling better decision making, and those second-order effects. One of the second-order effects is that it's a real growing experience. Seriously tackling a really simple, specific research question creates as much growth, actually more growth, than tackling a big, expansive question, because the simple, specific question is actually doable. So it contributes to your growth as a professional, as a self-made expert, as a business owner. It's a great experience. So I'm going to talk about small scale research; maybe I've convinced you it's worth considering. When I think about research writ large, I think of it as an attempt to understand causation. That's a definition that speaks to the motive rather than the process, and we'll get a little bit into the process in the upcoming part of this talk today. But generally, I think research writ large is an attempt to understand causation: we want to understand what causes this, what the links are, what we can do to get the outcomes we want. Research is a tool towards that end. There are these two high-level contexts where research operates, and then within the business context, which is the only one I'm interested in today anyway,
there are these three subsidiary contexts. So I want to overview the landscape of all these contexts, except for the academic one, which is interesting in a general sense but not really relevant here. Academic research is a sort of trickle-down source, a place from which methodologies and ideas about doing research trickle down, but the way research is done in an academic context is just really not relevant to what I'm talking about today. So I want to talk about the business context, and I want to talk about the larger business context to make sure you know what you're not doing, and to know where you can draw specific approaches from, because the research we do in The Expertise Incubator is small scale and really focused. If we forget what we're doing and start doing something different, we're going to get derailed. So knowing what we're not doing is pretty much as important as knowing what we are doing. Within the business context, there are these three subsidiary contexts. The first is what I call risk management research. To be clear, that's a made-up term, so don't try to Google it, because it may not yield very much useful stuff. So the first context is a risk management context. Then there's a hybrid in the middle, between risk management research on the one hand and what I think of as innovation research on the other end. And again, all these terms are made up, because to me this field is so immature (not a bad thing, but so immature) that there's not really great established terminology for any of this. We'll explore each one of these, and as we do, you'll see in the top left of the screen here a reminder of which context we're focused on right now. We're going to start with risk management. Again, this is my made-up name for it.
Risk management research is measuring something that is under-measured in order to reduce uncertainty, or to replace something like a gut feel with a probability range. So instead of a decision maker in a business context saying, well, I don't know, based on past experience I just have this feeling like we should do this and not that, you want to replace that with quantifiable probabilities. There's a guy named Douglas Hubbard, I've mentioned him already; he has a book called How to Measure Anything. He's contributed a lot to this domain, and I'm going to pull from the talks he's given and the book he's written as I paint a picture for you of what this approach to research looks like. There are, I believe, criticisms of his methods that would come from more serious statisticians, so the ideas here are not uncontroversial, but I think they're valuable. Here are some examples, pulled straight from a talk Douglas Hubbard gave at some point, of the kinds of things you might measure with his approach. These are examples, specifically, of a question that some business had, and a measurement inversion, meaning they measured the wrong thing; I'll explore that a bit more in a minute. So the kind of stuff people are measuring using Douglas Hubbard's methodology might be things like a government procurement system (how much is it going to cost to implement this thing?), the risk of flooding in some mining operation, the impacts of pesticide regulation, or some cost related to IT security. One of the things Douglas Hubbard wants you to know is that there's a cost of measuring things, and then there's a value of what you would learn by measuring that thing, and you want to make sure you're measuring the right things, meaning there's a healthy relationship between the cost of measuring something and the value of what you would learn from measuring it.
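To make that cost-versus-value relationship concrete, here's a minimal sketch. This is not Hubbard's actual formula (his approach involves calibrated probability estimates and the expected value of information); it's a hypothetical back-of-the-envelope version of the same idea, and all the numbers are invented.

```python
# A rough sketch of: measure something only when what you'd learn is
# worth more than the cost of learning it. Numbers are hypothetical.

def expected_value_of_measurement(prob_wrong_decision: float,
                                  cost_of_wrong_decision: float) -> float:
    """Rough expected loss that a measurement could help you avoid."""
    return prob_wrong_decision * cost_of_wrong_decision

def worth_measuring(prob_wrong_decision: float,
                    cost_of_wrong_decision: float,
                    cost_of_measurement: float) -> bool:
    """True when the measurement costs less than the loss it might prevent."""
    return cost_of_measurement < expected_value_of_measurement(
        prob_wrong_decision, cost_of_wrong_decision)

# Hypothetical: a 30% chance of a $500k mistake vs. a $20k study.
print(worth_measuring(0.30, 500_000, 20_000))   # True: $150k at stake > $20k
# Hypothetical: a 5% chance of a $10k mistake vs. a $20k study.
print(worth_measuring(0.05, 10_000, 20_000))    # False: $500 at stake < $20k
```

The point of the sketch is just the comparison itself: a measurement inversion, in these terms, is spending your measurement budget on a quantity whose expected value of information is low while ignoring one where it is high.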
So in the case of the government procurement system, he gives an interesting example of what he calls measurement inversion, where what the people making this decision were tempted to do is detailed time-and-motion studies to understand what it would cost to implement this system, and he says what they should have measured is the price savings that reverse auctions would have yielded. For the question of the risk of flooding in the mining operation, they were drilling test holes; they should have been measuring pumping capacity. For the question about the impact of pesticide regulation, they were measuring the value of saving endangered species; they should have been measuring whether the regulation would save any endangered species at all. And finally, with IT security, they were measuring the cost or risk of external threats; they should have been measuring the cost or risk of internal theft. Again, these are examples pulled from a talk Douglas Hubbard gave somewhere; you can find his talks online. He's a pretty interesting speaker, and I think it's well worth consuming one or two of his talks to get the idea. The environment in which this approach to research operates is a closed system. Now, closed systems don't actually exist in reality; they're a theoretical construct. But when we think about something like a laboratory, we tend to be thinking about a closed system, where there's an effort to control what happens inside that system and to understand everything that could happen inside it. Out in the real world, you might imagine something like a factory floor, where they've got Internet of Things sensors on everything, environmental sensors; they're measuring, or collecting data on, a lot of what's happening in that environment. That's a more real-world example of a quote-unquote closed system.
Again, closed systems are never completely closed; that's a theoretical construct. But it's a useful idea, because this sort of thinking, that we're operating within a closed system, is prevalent out there in the world. It's not always true; things people think of as closed systems are not really closed systems, but the idea is useful. And this is the setting within which this kind of risk management measurement is most effective. So what you're trying to do when you apply this research approach is create something like a laboratory in the context of a business. You're doing things like focus groups or employee satisfaction surveys, direct or indirect ways of measuring that assume you can know and understand everything that is happening within that system if you just apply effort at measuring it. That's the environment where this approach to research works best: where something really is kind of like a closed system. The method that's going to be used is a deductive method. Briefly, that is: you begin with an idea about how things work, a theory; you formulate a testable version of that idea, your hypothesis; you do some measurement or testing; and your theory is either confirmed or denied. So again, with this risk management approach, we're working with a relatively simple hypothesis that benefits from the relatively closed nature of the system where we're doing this research. For example, here's our theory: we think employees at this company will be more productive if the company has a simple purpose statement that they are all reminded of monthly. That's our idea about how things work: they'll be more productive if they're reminded of the company's purpose, and it's a simple purpose statement.
For the measurement and testing, we'll create some kind of baseline, we'll introduce this new variable of a simple purpose statement that employees are reminded of every month, and then we'll measure their productivity again
after we've introduced this change. Then we may be able to confirm or deny our theory: did productivity increase? Did it decrease for some reason? Or did it stay the same? So that's the deductive method, and a simple example of it applied in the context of risk management research. I suppose that example kind of calls into question why I chose to label this risk management. It's one of several labels I could have used; I could have called it uncertainty reduction research. It all congregates around this central idea: we want to make decisions on some basis that's better than guessing, so we need some data, we need some measurements to help with that. If you look a little deeper into Douglas Hubbard's recommended approach, it begins with a mindset that absolute precision, the sort of precision we would get from an atomic clock, is not what we're going for here. That's not the bar for what's a useful measurement. A useful measurement reduces uncertainty, and that is a lot easier to do than you think. That's the mindset behind Douglas Hubbard's approach to research. So his method, just to list it: define the decision; model the current level of uncertainty; compute the value of new information; gather that information by measuring it, keeping in mind that we're not trying to achieve atomic-clock levels of precision; and then, finally, optimize the decision based on what you've learned, maybe as a sort of iterative process of refining the decision before you actually take action. This is a quantitative method in nature. So risk management research usually deals with probabilities and uncertainty reduction, and with surprisingly small data sets you can use quantitative methods to achieve that goal.
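Those steps can be sketched in a few lines of code. Here's a minimal, hypothetical Monte Carlo sketch, not Hubbard's actual tooling, using the purpose-statement example: the decision maker supplies a subjective 90% confidence interval for the productivity gain, and the simulation turns that gut feel into a probability of the initiative losing money. The function names and the specific numbers are mine, chosen purely for illustration.

```python
import random
import statistics

def normal_from_90ci(low, high):
    """Turn a subjective 90% confidence interval into normal-distribution
    parameters: the 5th-to-95th percentile span of a normal is about 3.29
    standard deviations wide."""
    return (low + high) / 2, (high - low) / 3.29

def simulate_decision(trials=100_000, seed=42):
    """Monte Carlo over the uncertain payoff of a hypothetical
    'monthly purpose-statement reminder' initiative."""
    rng = random.Random(seed)
    # Decision maker's 90% CI for the productivity gain: -1% to +9%.
    gain_mean, gain_sd = normal_from_90ci(-1.0, 9.0)
    cost = 2.0  # known cost of the initiative, in the same % units
    nets = [rng.gauss(gain_mean, gain_sd) - cost for _ in range(trials)]
    p_loss = sum(n < 0 for n in nets) / trials
    return p_loss, statistics.mean(nets)

p_loss, expected_net = simulate_decision()
# A meaningful chance of loss despite a positive expected value is exactly
# the signal that a measurement which narrows the CI is worth paying for.
```

Hubbard's related "Rule of Five" makes the small-data point vivid: there's a 93.75% chance that the median of any population lies between the smallest and largest of just five random samples, because the only way to miss is for all five to land on the same side of the median (probability 2 × 0.5^5).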
This is maybe an area of disagreement between proper statisticians and Douglas Hubbard, but again, the context here is that Douglas Hubbard is trying to help business decision makers make better decisions without the intimidation of large data sets or sophisticated statistical methods. So Douglas Hubbard is keen to remind us that diminishing returns on uncertainty reduction set in very quickly as we increase the sample size of whatever it is we're measuring. We get the biggest decreases in uncertainty, and that's what we're going for, decreased uncertainty, with the first few measurements we take. So if we move from zero measurements to a few measurements, we get significant reductions in uncertainty, and there's a lot of low-hanging fruit to be gathered with simple measurements, direct or indirect. Our context here is that we're not steering public policy; the lives of millions of people do not usually hang in the balance when we're trying to upgrade decision making in a business context. Two books that can be useful here: Douglas Hubbard's How to Measure Anything, and then a guy named Sam Savage has written a book called The Flaw of Averages, which tells you a lot about the content of the book right there in the title. Those are recommended reads. I'm going to change our context now to talk about innovation research. Innovation research, I would say, is trying to do something entirely different than risk management research. With innovation research, the goal, the outcome, is that we want to generate new options, and we want to create narrative richness. Innovation research is based on a belief that may or may not be true; I'll let you decide. The belief is this: if I can deeply understand how some group of people think, or what troubles them, then I can invent a solution for them that will be desirable to them.
That's the belief that underlies innovation research. It's sort of: if I can inhabit the narrative they're living, if I can see the world through their eyes, I can create something they'll find value in. This is a belief. Now, I think it's probably true. Elizabeth Gilbert tells this great story about interviewing Tom Waits for a profile she was writing on him for some magazine some time ago, when he lived in Los Angeles. This is a story he was telling her: he's driving down the interstate in Los Angeles and gets this amazing idea for a song, a melody or a lyric, I'm not sure. And initially he feels this kind of panic of, "I can't, I'm driving, I can't do anything with this." And then he has this moment of personifying the source of that idea as being outside himself, and starts having a conversation with it: "You know, I'm at the piano like six hours a day. You could come at another time with this idea, and I would be in a place to do something about it. Why don't you go take this to Leonard Cohen, or go find somebody else? I'm busy here, I'm driving." I love that story. It highlights, though, a competing idea about the source of inspiration, or the source of value, which is: maybe it doesn't come from inside ourselves; maybe no matter how good we get at seeing the world through other people's eyes, it's not going to lead to that form of value. I realize creativity and innovation are not exactly the same thing, but I point this out because, again, I believe it's true, but it is a belief, that if you can see the world through other people's eyes, you can create value for them. Maybe it's not true; we should occupy a place of humility about that, I think. Innovation research is trying to generate new options, new ideas, and do so by building up a sense of narrative richness. Here are some examples.
I mean, the whole world of digital products, where product design and iteration is more fluid and potentially more rapid than the world of physical products (or it has been; maybe additive manufacturing is changing that), is where embrace of the innovation research approach is especially high, so you'll see plenty of examples there. But there are examples from the more physical world too. The canonical example is Snickers. When they were doing Jobs to Be Done research, they uncovered this idea that, for a lot of folks, Snickers is not a candy bar. That's the marketplace and mental category that I think the makers of Snickers placed their product in before. But for a lot of folks, Snickers is this kind of portable, nice-form-factor hunger eliminator, and that places it in a different category. By doing this kind of innovation research, they came to that, you could call it a breakthrough, I suppose, realization of: maybe we talk about our product differently; we have this new option for talking about our product because we understand the job people want it to accomplish for them. There's another example from Alan Klement's book (I always get these mixed up: When Coffee and Kale Compete, or When Kale and Coffee Compete? I think it's When Coffee and Kale Compete). He provides this example of a grocery delivery service called YourGrocer that did innovation research and came to a clearer understanding of who their customer was, and for them, that was a new option unveiled by doing this research. The environment where innovation research operates is the environment of open systems. In an open system, you are unable to control, measure, or even fully understand everything that's happening in that system, or the system as you've defined it is so connected with other systems.
And there are these relationships, perhaps nonlinear relationships, between your system and external systems.
You just can't get it all in your head, because it's complex, it's big, it has large amounts of flow, novelty, and chaos. That's the environment where innovation research tends to be deployed; it's a better fit for that kind of environment. If you try to apply the risk reduction approach there, it's not that it's guaranteed to fail, it's just going to be less successful in the environment of an open system. The styles of innovation research include Jobs to Be Done, customer development, and the ethnography approach. Ethnography is a little bit difficult to decouple from a kind of colonial mindset and language, but it's the modern digital equivalent of observing what people do in order to build up that narrative richness, so you can understand them better and build something they'll find value in.
The method is inductive, then deductive. So with innovation research, you move from observations; based on those observations, you look for patterns; you develop a theory, a model, some kind of conclusion. And then maybe you move into a deductive-methods phase after you've done that inductive work. The observations can be at varying levels of formality, anywhere from "we're going to talk to a bunch of people and see what patterns we can recognize" to much more disciplined, rigorous approaches. Anyone who has been interested in digital product development in the last five or ten years will probably have come across the advice of just talking to people: pre-pandemic, go to Starbucks and give people 20 bucks to talk to them for 30 minutes, and that's going to be part of the research you're doing. I wouldn't call that sloppy at all, just informal. And then you can have much more formal approaches to doing these observations.
The method is usually qualitative rather than quantitative, so you're looking for things that aren't necessarily measured with numbers. It's easy to get into a mindset, our culture kind of worships quantitative data, because it feeds into that story we tell ourselves: oh, we're making a better decision because we have data. And it's easy to think of data as the numbers: what's the average, what's the probability range? I don't say that to diminish the value of data, but there are other ways to understand things that don't rely on a number, a probability range, an average. Those other ways of understanding things are qualitative in nature: you have seen something enough times that you see the pattern, and you understand the kind of variation that happens within that pattern. That's a form of expertise, and qualitative methods can lead us to that form of expertise. So don't think of qualitative methods as sloppy or imprecise; in fact, they can be a really great complement to quantitative methods. One book and one person's body of work would be worth looking into if you wanted to learn more about this: Alan Klement's book When Coffee and Kale Compete, and then a woman named Indi Young does a lot of really valuable work in this world of innovation research. Now let's talk about this middle ground, this third way: the TEI hybrid. I have several names for it: small-scale research, the TEI hybrid, research that's focused on enabling better decision making. They're all the same thing. So in The Expertise Incubator, we use this TEI hybrid approach, and it's a blend of the risk management approach and the innovation style of research. We're focused on an outcome: we want to help our clients make better decisions. Think back to that idea of point of view, of where you stand.
No matter how specialized you are, or how focused or dedicated to your clients' world, you're still kind of an outsider. I mean, functionally, you're an outsider: you're an independent consultant, you work for yourself, you don't work for them. You're not an employee, you're not embedded in their organization (you might be, temporarily, depending on how you structure your engagements), but more or less, you are an outsider. And that outsiderness is a trade-off: it comes with benefits, and it comes with drawbacks. The risk management approach to research really works best if you are an insider. Being an insider gives you access. So if you're an employee at this company and you're charged with applying this risk management approach to a decision, you're also going to be empowered with access. You need to survey people? Great, we'll make it happen. You need to do measurements that would pull people away from their primary job duties? Great, we'll make it happen. I realize I'm presenting a somewhat simplified picture, but in a good case scenario at least, that's the kind of access you would have. It's much harder to have that kind of access as an outsider. So the risk management approach presumes a level of insider access that we probably don't have. It also operates best in a closed system, which means it operates best for people who understand that closed system; as outsiders, we may not. So we can use the risk management approach as outsiders, but we have to do it on a more simplified basis. Another thing about this risk management approach: it really produces the most value when it's applied to an idiosyncratic but important decision.
What I mean by that is: you're going to measure something, and the reason you do that is because there's significant uncertainty. And the reason there would be significant uncertainty about that decision is that there's no precedent. It's not a decision we make all the time. We haven't figured out what the patterns are by observing what happens when we make this decision 10, 20, 30 times in a row, so we don't know what's going to happen. In my head, I'm thinking of some kind of energy company considering buying a generation resource, a power plant. They don't know if it's a good idea; all of a sudden their nice closed system is faced with this kind of idiosyncratic potential change, and they'd like to investigate that decision more deeply so they can make a better version of that decision. Well, that's an idiosyncratic decision. The industry hasn't figured out what to do about this one power plant, because it's this one power plant; it's kind of a one-off. So Douglas Hubbard's approach is super valuable, but it creates the most value in the face of an idiosyncratic but important decision. Now back to our world. We're trying to use research to improve the decisions that our clients make, but if those decisions are one-off, idiosyncratic decisions, then there's not much reason for us to invest in research unless we've been hired specifically to help that specific client with that specific decision. In the TEI hybrid of research, we're generally looking for a class or category of decisions where we can improve the decision making. We're not looking for a one-off idiosyncratic decision, even if it's important. And again, the Douglas Hubbard approach is tailored, I suppose, to that kind of insider access and insider knowledge of a somewhat closed system, in the face of an idiosyncratic but important decision. What we're looking for is this.
We're looking for a decision that is important to our clients, but is a class or category of decisions. It's got to be important from the client's perspective. The decision has to be surrounded with uncertainty: they don't really know what to do. I mean, they probably blunder their way through it, I say that with love; they look at the senior decision maker in the room and say, "What do you think?", or they go based on gut feel or some other approach. Maybe they think they have a system for making this decision, but they really don't. And we have to care. We have to say: you know what, I want to see my clients make better decisions here. I'm coming from a place of service and humility, but I think by investing a little bit in some research, I can improve their decision making. Not everybody in the market, but enough of them to make a difference. So it has to be driven by a desire for service. And we have to scope it pretty small, at least to start with; we're operating with constraints: time, funds, lack of experience (because there's not that training I talked about earlier). So this is what we're doing with the TEI hybrid of research: we're looking for some kind of decision that's surrounded by uncertainty and is important, coupled with our desire for service and our willingness to invest and take a risk. When we bring all that together, we can make a difference, using very simple, humble (crude is overstating it, but basic) approaches to data collection and analysis, and we can use that to improve the decision making within our industry. That's what we're doing with the TEI hybrid of research. We are almost always mixing methods: we're pulling some from the risk management approach, and we're pulling some from the innovation world. There's a book I'll recommend specifically in a moment by Dr. Sam Ladner, called Mixed Methods.
She describes and advocates for mixed methods research, and in terms of the existing infrastructure and training about how to do research, that's a really good starting point; it really informs the TEI hybrid research approach. With mixed methods, you're blending quantitative and qualitative methods. With the quantitative part, you're attempting to understand scale and causation (language, again, borrowed from Dr. Sam Ladner's book): how big is this thing? What causes it? How can numbers and measuring things help us understand scale and causation? That's the quantitative contribution to mixed methods. For the qualitative contribution, Sam Ladner uses the language of attempting to understand coherence and focus. I haven't found that language particularly helpful; I would say you're using the qualitative methods, things like interviews and listening sessions and so forth, to understand context, nuance, diversity, and patterns. Generally, when doing the TEI hybrid research, you'll start with something qualitative and then move to a quantitative phase. Again, we're blending the two, though maybe as a serial thing rather than in parallel: start with qualitative, then move to quantitative. But we are blending them. And if you have a bias for or against either of those methods, try to drop that bias, because the real strength of mixed methods research comes from the blend, not from prioritizing one over the other. There is a cultural bias toward the quantitative, though. So again, our approach with the TEI hybrid: we're blending risk management and innovation research; we're focused on the relatively closed system of a specific business decision, not the wide-open vista of innovation; but we lack both the benefits and drawbacks of being an insider. So we tend to start with a more exploratory, inductive, qualitative phase of the research.
And again, I'll elaborate much more on this during upcoming TEI Talks; we're getting the context straight here to start with. Sam Ladner's book, Mixed Methods, is an enjoyable, short, useful read, recommended. So this hybrid, the TEI flavor of research, is what we're going to really focus on in the second challenge of The Expertise Incubator. It's meant to improve decision making, and it must flow from a desire to serve. Now, the desire to serve is not some totally pure, altruistic thing. A lot of times it's coupled with an entrepreneurial instinct of: through service, I can create value, and that creates a really nice revenue stream for me in some way. That's fine. I'm not talking about some platonic ideal of service that's completely altruistic. But largely, your motive needs to be: I am trying to make things better for my clients. I'm not using data in a cheap way to bamboozle them, or to win arguments with them because I'm lazy and I just don't want to have the argument. The motive really needs to be an honest motive of service: I want to make things better for my clients, or I want to make things better for the entire industry. And there's a little bit of swagger
behind anyone who could actually believe they can do that. I mean, it's really interesting: I'm insisting that it come from a place of service, but there's, arrogance is not the right word, but there is this kind of additional level of confidence of: I want to be of use here, but also, I might be a little smarter than some of these people, and they haven't figured it out. So there's this interesting tension that couples with the instinct, the desire, the drive to be of service, to make things better; but it does couple with some things that, if you take them too far, become arrogance, I guess. Anyway, it's fine to live in that tension, but just make sure your motives incorporate this idea of service. It's important when you get to this part of The Expertise Incubator challenges. Scope is critical; I'm going to spend more time talking about that in a future talk. If you go just a tiny bit too big in the scope of your question, it'll kill you. We have to sacrifice a lot of things to make this approach to research doable, and sacrificing those things actually increases the value of what we're doing; it does not diminish it. But at first it feels a little bit weird and unnatural. At first it feels like we're giving up statistical rigor, or giving up large sample sizes. And we are, in service of impact, in service of getting through what may just end up being a sort of pilot or prototype phase, maybe a sort of MVP of a larger research question that you tackle. But within the context of The Expertise Incubator framework, you really need to scope the question down to a really small, tiny size, because it will give you one win that you can then increment and build on. If you never get that win, it's frustrating and demoralizing. We're rounding third, headed toward home here. I want to talk about a pretty important idea as we head home.
Here's a question for you: would you want a new vaccine tested by someone who will never use it on themselves? I know plenty of medicines are developed by people who don't suffer the ailment the medicine is for, but asking that question, I think, raises this larger question about skin in the game. I got to see kind of both sides of this; it was really interesting. I won't name any names, but I guess last year, maybe the year before, a professional research firm published a report with some results. And someone else I know, who's in the industry this report was about, looked at it and said: that makes no sense. I've been in this industry for X number of years, and they're saying there's this number of something they measured, and I've never seen that number happen in real life. These results seem off to me; there's something not right about them. And they reached out to the firm that made the study and said exactly that. And the firm responded by saying: well, the data doesn't lie. This is a skin-in-the-game problem. The research firm did not have sufficient skin in the game. And I don't want to go back that many slides, but you remember that earlier I was talking about how context matters, and doing research connects you with context. In this case, this firm, again, I'm going to call it a skin-in-the-game problem, didn't have that connection to context, and so they were unwilling to sanity check their own results. They were insufficiently supplied with ambient contextual knowledge because they don't have skin in the game. They're not a total outsider to the area they were doing the research on, but they weren't sufficiently embedded in it. And to be clear, the area they're doing research on is not just one company; it's not a
totally closed system. It's more of an open system. And they were just insufficiently supplied with context to sanity check their own results. Normally, with research, we think about skin in the game as leading to motivated reasoning. We think: oh, you've got some personal connection to the outcome of this research, so you can't be trusted to be free of bias, and we're worried about you doing this research because it might produce a biased outcome that came from a personal connection. So we think of skin in the game as not a good thing. But I would like to argue that it is a good thing in this hybrid research context. I think the worst outcomes come from not having skin in the game. When you have skin in the game, you care about context; you understand the context and your clients and this world you're researching sufficiently to sanity check your results. That's a good thing. So again, I just want to get out in front of a potential objection that I think is understandable, but is kind of a baseless objection to the way we approach research: we will have skin in the game, and that is a good thing. It produces better results if we are just moderately careful to guard against obvious biases. Again, the bottom line with this TEI hybrid: we're going to use data to improve decision making. If that's our ultimate goal, we're going to be fine; we're going to produce useful research. There's a whole category of research that doesn't support decision making, and it does have some value; in a later talk, I'll talk about when and where you might want to do that, and what value it does have. But generally, our goal here is to use data to improve decision making. That's what the TEI hybrid is about. Let me wrap up by reviewing a few points, and then I'll let you go.
So why do we do this? Sometimes I think that even if we started down this road, did the literature review, and then weren't able to continue for some reason (we lose enthusiasm, something else happens, a global viral pandemic breaks out), it would still produce so much value just to do the literature review. So forcing the literature review has real value: it pushes us outward into better understanding the context our clients operate within, and that makes us better experts, better advisors. It can lead to the creation of intellectual property, which can look like a lot of different things, but it's your expertise packaged and made usable without your presence; that's often good for profitability, and it can be good for status as well. It can shift your point of view toward data; a lot of us aspire to that, and there can be some good that comes out of that as well. But ultimately, we do this because it can lead to our clients making better decisions, which is better for the health of the space we're focused on and serve, and that's good for us. We don't do it, I think, because we misunderstand it: the availability heuristic causes us to grab the wrong things off the mental, cognitive shelving, and we think research is big clinical trials, or research is fluffy, not-very-helpful state-of-the-industry surveys. We're not trained in how to do it; it's intimidating; examples of the stuff in the middle are few and far between. And there's little incentive to do it, because a lot of us work in a world where the status quo is pretty good, both the personal status quo for us and the status quo for our clients, so we're not really incentivized to improve it. But some of us have this drive to say: well, you know what, what if it was better? Research is a really valuable tool for us to use.
We've got these contexts: the business context, the academic context. And briefly: in the risk management research context, we're trying to reduce uncertainty; we're operating within closed systems, closed-ish systems; and we're using quantitative and deductive methods, starting with a theory and trying to validate or invalidate that theory. With the innovation research style, we are trying to generate new options within the world of open systems, using qualitative and inductive methods: we start with observation, and from that we may formulate a theory that leads into a deductive phase. And with the TEI hybrid, we're drawing from both of these to improve decision making. We tend to be working within a sort of pseudo-closed system and using mixed methods.
We're going to talk about the how quite a bit in upcoming talks, but for now, just know that you need to scope this question, whatever it is, very tightly. You're working within the somewhat closed system of an important, uncertain decision that's relevant to a class or group of your clients, not just one client. And the motives of service and the willingness to have skin in the game are just critical to what we're doing here. That's it. Thank you for your thoughtful consideration of these ideas.