Hello and welcome to the Big Five podcast from the Northumbria psychology department, where we learn big facts about human behaviour and experience. My name is Dr. Genavee Brown and I'll be your guide into the minds of psychology students, alumni and researchers at Northumbria University. I'm a lecturer and social psychology researcher in the psychology department, and I love learning more about all fields of psychology. Each week on this podcast, I'll speak to a guest who is either a student, alumnus or researcher in the Northumbria psychology department. By asking them five big questions, we'll learn about their time studying psychology, and hopefully learn some big facts about human behaviour and experience. Today I have the pleasure of speaking to Harry Cleland. Harry is a PhD student at Northumbria University and his PhD project focused on understanding the psychology of language. I'll let him tell you some more about it. So Harry, how did you get interested in studying psychology, and specifically in the topic of your PhD? And can you tell us a bit more about the theories behind it?
My interest has always been self-driven, I suppose. My undergraduate course, which I studied here at Northumbria, was a blend between psychology and sport sciences, so I did a combined honours for a few years. But as I went through my undergraduate journey, I became more and more interested in psychology and psychological research specifically. I never really had any strong preference for one particular area, or had my mind set on exactly what I wanted to do; as I went through, I just discovered that I enjoyed finding out things that hadn't been found out before, essentially. Learning about the mind and human behaviour, I found it really cool that you could run projects and discover things that had never been found before. So I followed my interests, I suppose, and that led on to a Master's by Research course, also here at Northumbria, which was really good, because it was everything that I wanted to do in terms of specialising in research, and I wanted it to be a bridge towards a PhD as well. I really enjoyed that experience, because obviously as an undergraduate you cover all bases, I suppose — a lot of different topic areas and lots of exams and essay-based things — whereas the modules where I actually performed best were always research-based modules. Any lab reports or statistics-based modules I always really enjoyed, and I always did quite well on, so that led me on to the PhD. My project is supervised by Dr. Matthew Haigh, who is an assistant professor in this department, and he was my undergraduate dissertation supervisor, so we already had an established working relationship.
That was one of the reasons why I applied for my PhD specifically — to work with Matthew — and then also just an interest in the topic area itself. So the project is psycholinguistics, as you mentioned: the psychology of language. It applies the principles of experimental pragmatics, which is essentially experimentally manipulating context to see how that can influence the production of language or the interpretation of that language.
And the thing that I found most fascinating about that was essentially how the same words can mean different things depending on the context. I suppose the difference between what is said and what is meant can be exacerbated or changed by the social circumstances. So that was what got me into the area in terms of interest. And then in terms of theories, one of the key theories that we've used as a framework for developing experiments — a very dumbed-down version of politeness theory — is that essentially, because we're a social species, we're motivated to be nice to each other. So when we're having a conversation, I'm actively trying not to harm you in any way, and if there's an opportunity to boost your social self-esteem, then I will take that opportunity; in terms of a reciprocal social relationship, we try to build each other up, essentially. So the experiments that I've designed are largely based on providing evidence either for or against the idea that a lot of the motivations for the production of language, and the way in which we interpret words that are said to us, are largely based on this kind of joint understanding, I suppose. So can you tell us a bit more about the types of studies you run to investigate these issues? As I mentioned, the project applies the principles of experimental pragmatics, which is essentially the manipulation of context. What we did across the PhD — we did eight experiments in total — is give participants hypothetical scenarios, so a kind of experimental vignette.
It's essentially where they place themselves in the position of someone who is producing language in some context, or interpreting language being said to them. Within that, we would manipulate some of the social circumstances surrounding the words used, to see if that had any impact, first of all, on how people chose to produce language, and also on how people interpreted the language based on the circumstances. The specific context that we applied it in was health communication. Essentially, one of the issues that can be caused by polite language and politeness-based speech is the potential for misunderstandings or ambiguity. So if I say, you know, 'you will possibly experience side effects' or something like that, what does the word 'possibly' really mean? In theory, it could be anywhere from 1% to 99% likelihood. And what we find is that when you place people in particular circumstances, they may attach a higher or a lower likelihood to those kinds of ambiguous terms depending on the contextual circumstances. In less consequential circumstances — if you're just chatting to a friend, talking about nothing of particular value or interest, and no one's under any kind of threat or danger — these potential ambiguities are kind of harmless, I suppose. But when you apply it to health communication, things get a lot more serious, and people will make health-based decisions based on their conversations with a health professional, for example. So if one of these terms like 'possibly' or 'maybe' is used, then the potential for misunderstandings is a lot more consequential. So yeah, to come back to the original question: the types of study that we're doing are where we're utilising these health-based scenarios and asking participants to put themselves in the position of either a healthcare professional, or a patient being given some information, and then we manipulate things about the social circumstances to see if we can exacerbate some of the potential discrepancies between what is said in a conversation and what the interpreted meaning of those words is.
Yeah, that's really interesting about those modifiers like 'possibly' and 'maybe', and how we don't necessarily consider what they actually mean. So my next question is: is there anything particularly interesting or surprising that you found in your studies?
Yeah, absolutely. I have two for this, I guess. Probably the most striking or surprising came from an experiment that we did looking at one-in-X ratios. If you're in the UK at the moment, wandering around, you might have seen advertisements about prostate cancer, and the NHS advertisement says something like 'one in eight men will be diagnosed with prostate cancer'. The ratio format itself — the one-in-X format — is really fascinating, because if you take a ratio like a one in five chance, for example, and ask people what the likelihood of them experiencing that outcome is, that would be higher than if you multiplied both the numerator and the denominator by the same number. So if it was a 10 in 50 chance of prostate cancer, for example, people would consider that likelihood to be less than the one in five ratio format, even though it's exactly the same. So in one of our experiments — and this was based quite closely on some contemporary research that had been done in the area — we gave participants hypothetical scenarios in which they imagined that they were taking a trip abroad, and on that trip they face the possibility of catching a particular disease. We manipulated this scenario across two factors, so we had four separate experimental vignettes. We manipulated the ratio format, so the chance of catching this disease was presented either as one in X or — this is a bit of a mouthful — N in X-times-N, which is essentially what I described before.
So it was either the one in X or the N in X-times-N format. And then we also manipulated the severity of the disease itself. In the lower-severity condition, Lyme disease was the potential disease, and in the high-severity condition it was Ebola — something a lot more severe, obviously. So we were testing two biases there: the one-in-X effect that I was mentioning before, which was only established as an actual cognitive bias in around 2019, I believe, once there was enough evidence for it; and we were also testing the severity bias, which is a little bit more well established. The people that established this best, I would say, were Bonnefon and
Villejoubert — and I may be butchering names at various points throughout this — but that was a 2006 paper that looked at the likelihood estimates associated with 'possible insomnia' versus 'possible deafness'. I think it was in a French population, and the prevalence of those two at the time was pretty much exactly the same, around 4% or something like that, but obviously the severity is vastly different — deafness is way worse than insomnia. And people considered possible deafness to be more likely than possible insomnia, so the likelihood that's attached to the word 'possible' is higher because of the severity of the potential outcome. The way we measured it was overestimation of health risk, essentially. So participants were, again, given information from their doctor.
And it would be, for example, 'you have a one in 13 chance of catching Lyme disease or Ebola', versus 'you have a seven in 91 chance of catching Lyme disease or Ebola'. The actual likelihood represented by the ratio was exactly the same — you've just multiplied both numbers by seven, essentially. And then we measured overestimation. That one in 13 is, I think, a seven-point-something percent raw chance. We would instruct participants, first of all, not to just pull out a calculator on their phone and calculate an exact response; it was more about their reaction to hearing that 'one in 13 chance of Ebola' — very much 'read the scenario and just give your first impression of what your actual likelihood is of catching that disease, were you to go on this hypothetical trip'. And the more striking finding was how much they actually overestimated their health risk, specifically when it was the one-in-X, high-severity condition. So when it was, for example, a one in 13 chance of catching Ebola, as opposed to, say, a seven in 91 chance of catching Lyme disease, in the former — the one-in-X, high-severity condition — their overestimation was close to 25 percentage points above their actual risk. It's somewhat anecdotal, I suppose, because we were obviously looking at the one-in-X bias and the severity bias, but one of the things that was most striking to me was the raw overestimation itself. We were expecting maybe a couple of percentage points and some more subtle differences between perceived risk and actual risk. But if your risk is that 7%, then in your mind — on average, in the one-in-X, high-severity condition — it's almost 25 percentage points above that; a person's interpretation of their risk is almost four times their actual risk, which was quite striking, for sure. Is that adaptive in some ways, because it is a severe disease with a risk of death? Isn't it, in some ways, adaptive for our brains to overestimate in that way? Well, yeah, it is, essentially. It goes back to, I suppose, a generally understood psychological principle — I think Baumeister is the author of the paper —
which is the principle that bad is stronger than good. It could be a representation of people putting up their barriers, I suppose — it's a natural instinct: there's a really bad thing over there and we want to keep away from it. So even though the logical risk associated with that thing is, say, 7% or something, I think it's probably going to be higher; and you know, if there's a one in five chance of it happening, then it's definitely going to happen to me — I'm going to be that one individual, I suppose. So in terms of explanatory variables, I think there are potentially some health anxiety things underneath that. I'd be interested to see if the extent to which a person is susceptible to high levels of health anxiety could partially explain some of the size of the effects that we've seen in this study, for sure.
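To make the arithmetic concrete, here is a quick back-of-the-envelope sketch of the figures discussed above. The exact numbers from the study are only paraphrased in conversation, so the values below are illustrative rather than the study's reported results.

```python
# Back-of-the-envelope check of the ratio-format figures discussed above.
# The values are illustrative approximations, not the study's reported data.

# Equivalent ratios: 1 in 5 and 10 in 50 represent exactly the same probability,
# yet "1 in X" formats tend to be judged as more likely.
assert abs(1 / 5 - 10 / 50) < 1e-12

# The vignette example: a 1-in-13 chance is the same probability as 7 in 91.
actual_risk = 1 / 13                 # ~0.077, i.e. a "seven-point-something" percent chance
assert abs(actual_risk - 7 / 91) < 1e-12

# Reported overestimation in the one-in-X, high-severity condition:
# close to 25 percentage points above the actual risk.
perceived_risk = actual_risk + 0.25  # ~0.33

print(f"actual risk ~ {actual_risk:.1%}")        # actual risk ~ 7.7%
print(f"perceived risk ~ {perceived_risk:.1%}")  # perceived risk ~ 32.7%
print(f"ratio ~ {perceived_risk / actual_risk:.1f}x")  # roughly four times the actual risk
```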
And then, to come back to what you asked about exciting findings, I'd have to switch and talk about a different study entirely for that. One of the most exciting findings, at least in my own view — that's obviously largely determined by other people when it comes to publishing and those types of things — comes from a generic language production experiment that we did. Generic language might be saying something along the lines of 'men have beards'. It's attaching a characteristic to a group identity: the characteristic is having a beard, and the group identity is men in that particular scenario. The interesting thing about generic statements is that they're resistant to counterexamples. As opposed to the universal statement that 'all men have beards', where all it takes is one man who does not have a beard to falsify it, you can state 'men have beards' truthfully even in the common knowledge that not all men have beards. So even when only some members of the group have the characteristic being assigned to them, you can truthfully state, with a generic statement, that that is the case. And what has been found previously in research — there's a paper from, and I may have butchered the name again, DeJesus and colleagues from 2019 — they looked at the use of generics specifically within psychological journal articles, so the titles of journal articles and things like that. They found, first of all, that generic statements were used overwhelmingly across the discipline: close to 90% of the codable titles had elements of generic language in them. Then, in some follow-up experiments, they looked at the perception of the findings of the studies that were summarised with a generic — asking participants how important they consider the findings to be based purely on the title, for example, and how generalizable they consider the findings to be based on how they're summarised in the title. And what they found was that, compared to non-generic versions — so rather than 'men have beards', it would be 'some men have beards', that type of thing — the generic journal article titles were considered to have findings that were more important in the real world and more generalizable. So the findings were perceived to be more applicable to a wider range of the population, essentially.
So that's one of the characteristics of generics, and that's what makes them potentially slightly ambiguous, because you don't need too much evidence to be able to use a generic statement. If you've run an eight-week training programme, for example, to see if your training programme improves some aspect of something, and you see improvements in, say, 25% of your participants, you can still use a generic statement to summarise that, because it's technically true — it would be, you know, 'eight-week training programme improves certain things'. So you don't need too much evidence to actually use them, and when you do use them, the findings are perceived as quite important and quite generalizable. So there's potential, I suppose, to initially question it if you see a news media article or something like that which is titled with a generic —
it might be overstating the importance of some research findings, even though the data that lies beneath that claim doesn't necessarily justify the strength with which it's been put forward, let's say. So one of the things that we were curious about in this study was: if we were to explicitly manipulate a person's motivation when coming up with a journal article title — or, in this case, a news media article headline — would that change the frequency with which a person would choose a generic statement over a non-generic one, a qualified version with the word 'some' in it, essentially? Again, we placed the participants in a hypothetical scenario and asked them to imagine that they were working for a pharmaceutical company that has developed a new supplement designed to boost immune system function in men.
Then we manipulated, firstly, the efficacy of the data that they've collected — so they're reporting on this new supplement, which boosts immune system function in either 25% of men, 50% of men or 75% of men. That was the first thing we manipulated, but then we also, importantly, manipulated participants' motivation for choosing a headline. In this hypothetical scenario, they've been in a team meeting and they're briefed, essentially — they've been told by the higher-ups either to choose the most accurate headline or to choose the most appealing headline. So we had accuracy versus appeal. What we found in the accuracy condition is that as the percentage of generalizability increased, the rate at which the generic statement was chosen also increased, which is what we expected to see — the stronger the data underlying the claim, the more willing participants were to use the generic to summarise it. But then, really interestingly, when participants were explicitly motivated by appeal, it essentially didn't matter what the strength of the underlying data was: participants overwhelmingly chose the generic statement, irrespective of the percentage of generalizability. So when, let's say, integrity is sacrificed for producing a statement that's really appealing and gets eyeballs, gets traction — if you just want someone to look at it, for example — a person can, if they're so inclined, sidestep the actual strength of the data and just make the claim regardless. And that was a very large-scale study, I think around 1,100 participants, so it was some really exciting data with some potentially exciting implications as well. It obviously has slightly different implications depending on who is producing the headline, because as academics, part of our identity, I suppose, is integrity and accuracy and appropriately representing the results of our studies — not selling them for what they're not. Whereas if you apply it to a media context, you could make the argument that there is less motivation there to keep things as accurate as possible, and much more emphasis on 'just make this something that lots of people will click on', for example. So I suppose it demonstrates that if you're reading a news media headline that's summarised with a generic statement, it may not be the case that there is actually strong enough data in there to support the claim being made by the headline — unless the writers themselves are explicitly motivated by accuracy rather than appeal. So that's one of the most exciting things.
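For readers who like to see the structure laid out, here is a minimal sketch of the three-by-two vignette design described above. The headline wording is hypothetical, written only to illustrate the generic versus qualified contrast; it is not the actual study material.

```python
from itertools import product

# Sketch of the 3 (efficacy) x 2 (motivation) design described above.
# Headline wording is illustrative only, not the actual experimental stimuli.
efficacy_levels = [25, 50, 75]          # % of men the supplement helped in the reported data
motivations = ["accuracy", "appeal"]    # briefing given by the 'higher-ups' in the vignette

headline_options = {
    "generic":   "New supplement boosts immune system function in men",
    "qualified": "New supplement boosts immune system function in some men",
}

# Each condition pairs an efficacy level with a motivation; the measure of
# interest is how often the generic option is chosen in each condition.
for efficacy, motivation in product(efficacy_levels, motivations):
    print(f"{efficacy}% efficacy, '{motivation}' briefing -> choose between:")
    for label, headline in headline_options.items():
        print(f"  [{label}] {headline}")
```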
Yeah, that's really interesting, about motivation and the accuracy of the language that we use to communicate science. I wonder if you have any other thoughts about the impact that your work might have, or advice you might give to science communicators about the types of language they should use? What kinds of things should science communicators think about, and what might they want to avoid, or do as good practice, when they're talking about science or health?
Well, because I've got eight experiments in total, there are a lot of potential things I can draw from. And this is one of the things that I've talked about in my thesis: the experiments that I've run are just singular individual studies — they're not to be taken as gospel or anything like that. When it comes to generic statements, there's a smaller batch of studies which look at this type of thing, and what we find is that when research findings are overstated or over-generalised — intentionally or unintentionally — those particular studies get picked up more often by the media. So when universities put out their press releases, for example, the studies that are more exaggerated or overblown — again, most likely unintentionally; of course you want your project to sound as good and exciting as possible, so I'm not suggesting these things are done with explicit ill intentions — were picked up more often by the news media, and then onwards through the media and so on. In terms of avoiding overstated importance and over-generalizability, the key thing, based on our findings anyway, is to be explicitly non-universal in the way that we communicate. One of the other studies that we did looked at interpretation — the study I've just been talking about was a production experiment, essentially, finding out what can alter people's production of generic language, but we also did an interpretation experiment. What we did with this is we took, I believe, 20 genuine news media articles on an aspect of health that were summarised with a generic statement — it could be 'video games help old people with Alzheimer's' or something like that. We presented those to participants, but we also manipulated the generic: we took all of the generic versions and made two non-generic versions. We made a past tense version — so it would be 'video games helped old people' rather than 'help' — and we also created a qualified version, 'video games help some old people'. This was off the back of the DeJesus paper that I mentioned before; it was a very similar methodology to theirs, so I'm certainly not the originator of that. We gave all of those to participants and asked them very similar questions to the DeJesus paper: how important would you consider these findings to be, how many participants were in the study, how generalizable are the findings, those types of things. And what we found was that there was no difference in the perceived importance and generalizability of studies between the generic and the past tense non-generic statement — so no difference between 'video games help old people' and 'video games helped old people'.
But what we did find was a reduction in perceived importance and perceived generalizability between the generic and the qualified non-generic. Based on that, it seems that in order to avoid the potential over-exaggerating and over-generalising effects of generic statements, you have to be explicitly non-universal, because the generic and the past tense non-generic still make a kind of universal claim — neither is explicitly qualified with the word 'some'. So it appears, based on our data, that when the data isn't strong enough, we need to make sure we qualify the statements we're making. That has a knock-on effect as well, because when findings are exaggerated in the primary research journal articles, that carries over into the secondary reporting of those findings — into the media and things like that — which can potentially be damaging. So it's one of those things where we have to be quite cautious, but it originates with us, I suppose: if we make sure that we're properly representing the implications of research findings, then hopefully that will bleed into how they're communicated in the media. I have two other research questions in my PhD thesis which are a little bit more relevant — while still hypothetical — to doctor-patient communication, as opposed to generics and scientific communication more generally. One of those research questions looks at uncertainty terms, which is similar to what I was mentioning earlier about words like 'possibly' and 'maybe'. These kinds of words are considered to be non-referential, so they don't have a solid referent — they don't pin down anything specific, essentially. The first research question looked at how the use of uncertainty terms is perceived in high-severity, face-threatening contexts, and that context was breaking bad news. This was the very first experiment that we did on the project: asking participants to place themselves in the position of a health professional,
and to break bad news — essentially, to give a diagnosis. What we suspected was that participants would manage the face threat, the high level of threat involved in that situation, by caveating diagnoses or potential prognoses with uncertainty terms. So it would be something like 'you will possibly experience some side effects', or 'there is a potential for a diagnosis here' — they would soften the blow by using uncertainty terms, or at least that's what we thought would happen based on some previous research. Instead, what we found was that the only time participants would use explicit uncertainty terms was when they were genuinely uncertain about the diagnosis. That was one of the things we manipulated: in one condition, the participants were certain that this was the diagnosis, whereas in another condition it was on the edge — 50/50 as to whether or not the person would be diagnosed with something. So uncertainty terms were actually only used when participants were genuinely uncertain about the diagnosis. Instead, to manage the threat and the severity of the situation, participants used subtle indirectness. Rather than being very direct and coming across as cold and plain — very short and sharp with a diagnosis, 'this is what's happening' — they would be quite empathetic. They would use a lot of dispreferred markers, which are essentially emotionally driven phrases like 'I'm very sorry to tell you this', 'Unfortunately...', 'This may be difficult to hear', 'There is support here for you', those types of things. They used a lot of these dispreferred markers and emotional words, and they used more words generally to deliver the diagnosis. So for us, anyway, this was characteristic of a more subtle form of politeness strategy — a way to soften the blow through subtle indirectness. And we found that that was quite an effective method, I suppose,
because we also took the language that was produced under those circumstances and gave it to an entirely separate set of participants, and asked them to interpret that language: how certain is the specialist in their diagnosis, how polite is it, those kinds of things. And in that second experiment, in terms of certainty, directness, accuracy of the diagnosis and those types of things, there was no difference between the certain and uncertain conditions or across the high face-threat conditions. So what we found in the second experiment was that, irrespective of the circumstances, there was essentially no difference in directness, politeness and accuracy. That suggested that this more subtle form of indirectness was a way to still acknowledge the emotionality of the situation, to still care for the patient and the sensitivity of that information, whilst also not sacrificing too much accuracy and directness. Because that's where the potential ambiguity comes in: if you say, 'oh well, it's possibly going to be this and it's maybe going to be that', then you achieve the politeness goal, which is to soften the blow, but it's then less accurate, because you're knowingly saying something which is less true. We also looked at the production and comprehension of quantifiers specifically. Proportional quantifiers are terms like 'some', 'many' and 'most'. They're very similar to uncertainty terms in that they're non-referential — if you say 'some people', how many people does that refer to? It's variable. But they're slightly more accurate in the sense that they have a plausible range of values, I suppose. So if you say 'most people', you would assume that to be more than 50%, at least.
So they're slightly more accurate in that sense. And according to the data, anyway, these are quite overwhelmingly preferred by family doctors and physicians for communicating health risk — so when they're talking about a specific situation, they'll say, you know, 'some people who have this procedure make a full recovery', or 'many people recover well from this', or something like that. We did a number of experiments on these, which I won't go into in too much detail, but essentially what we found across them is that from the perspective of the patient — so if you're told 'a few people suffer side effects' or 'some people suffer side effects' — the person hearing that information is able to detect that it is potentially a politeness strategy or a face-management strategy, but at the same time, that doesn't influence their perception of likelihood. When I was talking about the one-in-X ratios before, you might assume those are more accurate in terms of getting across a communicated likelihood, because the ratio represents an exact likelihood — you can pinpoint it, you can see that it is 7% or something like that.
But obviously, what we find with the overestimation is that the interpreted likelihood is very different to the actually represented likelihood. What we find with proportional quantifiers is that, irrespective of the severity of the situation, a person will consider the likelihood to be the same whether it's a low-severity or a high-severity condition. So they can acknowledge that the use of 'some', 'many', 'most' or something like that is potentially the doctor, or whoever, trying to manage the situation, trying to achieve facework, but that doesn't change their perceived likelihood of the outcome as a consequence — which isn't what we expected to find, but is potentially a good thing, really. We did one study where we used the classic example I mentioned before, of insomnia versus deafness from the Bonnefon paper, but applied to proportional quantifiers — so it became 'a few people suffer insomnia' versus 'a few people suffer deafness', and then 'what is the likelihood that you will suffer insomnia or deafness?' — and there were no differences.
So essentially we looked at 'a few people suffer this', 'some people suffer', 'many people suffer' and 'most people suffer', and the likelihood of the outcome attached to all four of those proportional quantifiers, across the high- and low-severity conditions, was exactly the same. This is quite promising as well, considering how frequently they are used. It might be the case that, to avoid a person's proclivity to overestimate their risk when it's a really high-severity situation, it may be useful to use a proportional quantifier, because it seems that even when it's a lower-severity situation, that doesn't necessarily change how likely they consider the outcome to be. So yeah, that's where we're at.
It sounds like we all have a kind of agreed-upon understanding of what those terms mean, which obviously is helpful if we're going to use them in everyday conversation. So my last question is: where do you hope to go from here? I know you have big plans — do you mind sharing those with us?
I'll break it down, I suppose, starting with research. I'll still be doing things in this area, for sure. I'm currently working on one of my supervisor's papers, which looks at consensus generics — you might see a headline like 'scientists agree that such and such', and that snippet, 'scientists agree', is the consensus generic. It's implying that all scientists agree, which of course is rarely ever true. He's done a number of experiments in that area, so I'm working on a paper around that at the minute, which is very exciting. That's the research side. Then, from more of a personal side, I'll be moving into more meta-sciencey type things. Very shortly I will become a postdoctoral research fellow at ELTE University, which is a Hungarian university in Budapest. They're working on a lot of large-scale, collaborative projects — essentially doing scientific research on the scientific research process, so it's kind of an introspective thing. It's looking at how we are doing things in research: could it be improved upon, could we make things more transparent, could we make things more rigorous? And we're doing that in the hope of, first of all, addressing the replication crisis — obviously a very well-known thing — and also, yes, just generally making things as open and transparent as humanly possible. I haven't really gotten into the nitty-gritty of the types of projects that I'll be doing yet — that is still to come — but that's the next step, which will be really fun. They have some ongoing projects at the minute. They have what's called the Multi100 project, which is essentially looking at replicability and reproducibility across pre-registered, open science papers. They have recruited 100 academics from all over the place and essentially given them data that's been sourced from a study, without too much context in terms of what the study is about or what the rationale is, necessarily — they just give them the data and the hypotheses, and see if different researchers will come to the same conclusions based on the data and the open-access published analysis scripts and things like that. So I would imagine I'll be involved in projects similar to that. And it's quite exciting as well, because it's quite a collaborative work environment — they have a lot of people that they work with — which is a nice change from the PhD, which, as anyone who's done a PhD before will know, is often quite a small venture. So it'll be good to expand my network a little bit with some fantastic people. I'll be working with Barnabás and Balázs, who are really, really good people, and I fly out there next week to go and meet them. So yeah, amazing. All good things, for sure.
Awesome. Well, congratulations on finding that amazing postdoc and getting it. Thank you again, Harry, for speaking to us today. Listeners, if you'd like to learn more about Northumbria psychology, you can check out our psychology department blog at northumbriapsy.com. You can also follow us on Twitter at @NorthumbriaPsy. If you want, you can follow me on Twitter at @BrownGenavee. And if you'd like to be interviewed on the podcast, or know someone who would, please email me at Genavee.brown@northumbria.ac.uk. Finally, if you like the podcast, make sure to subscribe on your listening app and give us a review and rating. I hope you've learned something on this voyage into the mind. Take care. Until next time.