January 15 Registered Reports Q&A
3:35PM Jan 16, 2020
Speakers:
David Mellor
Chris Chambers
Keywords: registered, journals, reports, question, replications, hypotheses, reviewers, report, analysis, results, area, power, submission, confirmatory, submit, authors, data, stage, point, manuscript

Alright, good morning and good afternoon, everyone. Welcome to the second in a regular series of Registered Reports Q&A webinars, inspired by the fact that the most engaging and most compressed portion of any talk or webinar about registered reports is always the Q&A session at the end. So these are planned as monthly or perhaps bimonthly events where all we do is Q&A: questions that have come in beforehand and any live questions that occur. If you have questions, use the Q&A box and submit those. Joining us again, thank you, is Chris Chambers from Cardiff University, chair of the Registered Reports Committee and registered reports editor for, I think, seven journals. And without further ado, welcome, Chris, and we can get started.
Thanks, David. Welcome, everybody. So, as David says, this is our second in this series, and the idea is that we're here to answer all your questions about anything registered reports related. That could be a general question about policy or practice, it could be a specific scenario that you'd like feedback on; perhaps you're going through the process now as an author and you'd like some advice on what to do next.
It could be a concern or a criticism you have about the format, or an idea for the future, or you might be an editor thinking of implementing it. There are all kinds of interesting questions that we get routinely. And as I said when introducing the very first of these webinars, we cut straight to the chase with these, so there are no summaries about what registered reports are, there's none of my usual guff; we go straight to the Q&A, straight to the meat. It's the registered reports for nerds session. So, as David said, jump in at any point with any questions you might have. Sometimes the questions that we answer trigger more questions as we go through, so feel free to raise those in real time and we'll do our very best to answer everything we can in the time available. So shall we begin, David?
We've got all the Q and A's in one big slide deck here, so we're going to be kind of appending to this over time. So our first question:
"Supervisors, have you supervised the student or postdoc you conducted a registered report or pre registration? If yes, what advice would you give to other supervisors considering It? If not, is there something that puts you off? eg worries student will run out of time?"
Well, I can answer this from the yes perspective. And anyone who's watching this who can chime in on the "no" perspective, please do. But from the "yes" perspective, from a supervisor's point of view, there are a number of points to get clear in your own mind before you decide to go down the registered reports track specifically, which is the one I'll focus on here. Much can be said about preregistration, the broader concept, outside registered reports, but given the focus of today's Q&A, I'll just focus on the RR component. So the first thing to think about, if you're in a supervisory position, is how much time you have. And this is for two reasons. First of all, you need to take into account the stage one review and editing time at the journal. At the journals where I'm an editor that ranges from about two to four months on average; most submissions, from the moment they're submitted, have a stage one final decision within four months. Different journals work at a different pace: some are faster, some may be slower. You can always contact the editor of the journal and ask them for some general advice about how long submissions typically take when they go through. So get clear in your own mind that you can allow that period of time for the review process. This can impact on the decision if your student or postdoc is on a short-term contract: if the study is going to be very large, do you have time to go through this review process? Now, as many have said over the years, you get that time back at the end of the project, because you're much more likely to have your stage one registered report accepted at the first journal you submit to; the acceptance rate for registered reports typically outpaces regular articles because of the ability to change the protocol based on reviewer feedback. So you typically get that time back at the end. But still, you need to make sure that there is sufficient time within the term of the contract of whoever is actually running the project. From a supervisor's point of view, you need to make sure the time is available to actually do the research. Once you've got that nailed, once you've figured out that you do have enough time and this is achievable, the next thing to really think about is power and sample size, particularly if you're doing human research. Registered reports can often require larger sample sizes than the average or typical sample size in any given field, and the reason for that is the chronic underpowering or undersampling in many areas. When you apply an a priori power analysis, or any other kind of inferential sampling plan, at the beginning of a project and you power or sample to a smallest effect size of interest, you often find, and we often find as editors, that authors come back with sample sizes that are maybe two or three times larger than usual. So that's something also to take into account when you're planning the timeline. After that, it's really a case of anticipating all of the requirements that the student or postdoc will need to meet in order to get accepted at a journal. So if you look at my talks folder on the OSF, the top most recent talk, which was one that I gave at the reproducibility workshop last week at Cumberland Lodge, slides 51 and 52, I believe, include a list of the top 10 ways to avoid getting rejected at stage one. And I strongly recommend that anyone considering submitting a registered report pays attention to that list. There's also a very, very useful primer on registered reports that recently came out in Trends in Neurosciences; I believe that's right, David? Do you remember the authors? I'm always terrible with remembering author names. We can perhaps post a link to this later on Twitter. But it's a brilliant primer for how to approach the registered reports challenge from the author's perspective, and I think it applies to both the supervisor and, of course, the student who is running the project.
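As a concrete illustration of the power point above: a minimal sketch of an a priori power analysis in Python with statsmodels, solving for the per-group sample size of a two-sample t-test. The effect sizes are hypothetical placeholders (d = 0.5 standing in for a typical optimistic literature estimate, d = 0.2 for a smallest effect size of interest), not recommendations.

    # Minimal sketch: per-group n for a two-sample t-test at 90% power.
    # The effect sizes d = 0.5 and d = 0.2 are hypothetical placeholders.
    import math
    from statsmodels.stats.power import TTestIndPower

    solver = TTestIndPower()
    for d in (0.5, 0.2):
        n = solver.solve_power(effect_size=d, alpha=0.05, power=0.90,
                               alternative='two-sided')
        print(f"d = {d}: n = {math.ceil(n)} per group")
    # Powering to the SESOI (d = 0.2) asks for roughly 527 per group,
    # versus roughly 86 per group for d = 0.5: hence the much larger
    # samples Chris describes.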
And I'll post links to those in the chat window in just a moment. Next question: "Many studies aren't deserving of in-principle acceptance, we think, because the results would not be informative if they are null; for example, redoing or replicating an experiment to account for a likely confounding variable. A null result would be uninteresting." So I guess my question to you, Chris: why do you believe that such nulls would be meaningful or relevant?
I guess you can tackle this question in a number of ways. I think there's an underlying premise here that replicating an experiment in order to account for a likely confounding variable is not a useful endeavor, because it could produce an uninteresting null result. And I'm not sure that I agree with that premise; therefore, I don't necessarily agree that such nulls would be uninteresting and lack meaning in the first place. I can imagine a situation where it's very important to replicate a previous study, maybe one which showed a positive result, to deal with a confounding variable. If in that case a null result was obtained, and that null result was different from the original study, that might suggest that the original finding is not particularly reproducible or that there is some impact of that variable. If you get the same result, then you've controlled for something that is potentially important. You've got to see the stage one process as the screen for the kinds of questions and methods that are important and interesting and useful. I've not really come across many cases as an editor where authors have gone to the trouble of designing and writing down a detailed stage one protocol only for the reviewers to come back and say, "This is pointless because a null result would be uninteresting." I suppose those designs that may fall prey to very asymmetrical value, where it's only useful if you get this result and not so useful if you get that result, are probably self-filtered out by researchers when they're thinking, "Do we really want to go to the effort of putting this in as a registered report, when only one outcome would tell us anything at all?" The best kind of registered report is one where all outcomes are informative in some way because of the virtues of the design: good controls, statistical power, rigor. So, in summary, I'd say you need to think carefully about the underlying assumption that a null result is uninteresting in any context. It really depends entirely on the experimental design. David has just posted the link. Yes, of course, Kiyonaga and Scimeca. So this is a really good primer for authors, whether they're supervisors or ECRs; it's posted in the chat there, so you can read that.
And all those links are available on the registered reports website, cos.io/rr, and I'll put a link to that also. And there's also a link to the webinar that they both participated in about two months ago. So take a look at that if you're interested.
Next question. And we've got a couple of questions coming into the Q&A, so we'll make sure to get to all of those as well. "Can you discuss the relative merits of using registered reports for replications versus novel studies, novel confirmations being, of course, the ones researchers are more rewarded for? How do we get to the point where we're in a sound, confirmatory zone while still testing a new question? It seems like the only solid confirmation can be a replication."
Okay, so I'm reminded of Brian Nosek's slide where he talks about the scale of culture change, and you begin by making things possible. And I think that's what registered reports do. I think that historically, at least in many areas of the life and social sciences, the barrier to doing replication studies has been that it's simply not worth the personal investment of time and resources to do a massive replication of a previous result when, regardless of the outcome, it's going to struggle through the publication pipeline. Because if you get the same results as the previous work, reviewers will turn around and tell you that we've learned nothing from this because we already knew it. And if you get different results from the previous finding, then the reviewer, who is likely to be one of the original authors of that paper, will claim that you changed something in your method, which is why you got the different result, which means you get rejected. All roads, in a way, lead to rejection if you realistically go down the standard path with replications, at least in these fields. And I think what registered reports do is lift replication out of these doldrums and say: here is a track which enables you to get approval for your replication study before you invest all the resources, eliminating the bias in the review process which would otherwise go against you regardless of your results under the typical route. So we begin by making it possible. And this already is having a huge impact: a lot of published registered reports are replications, because the format obviously provides this mechanism for researchers to do replications in the first place. But I guess there's more to this question than just doing replications. It's about reward, it's about incentives. The answer, I suppose, viewed in a short-term way, is simply: publish more of them. Show them having an impact, show them being cited, show them having an influence on the field and on theory development, making people sit up and pay attention that the received wisdom in a particular area may not be entirely correct. Watching the self-correction process unfold in real time is likely to put a lot more pressure on the system to recognize replication, because it is having demonstrable impact. And I'm seeing this as well in a kind of personal way, because at Royal Society Open Science I'm editing a format there called Replications, which is a little bit separate from registered reports. They can be pursued through the registered reports track, or researchers can submit replications that they've already done in the past, in a results-blind way. So they submit a stage one manuscript which just describes the rationale and the methodology and withholds the results until it gets stage one acceptance. The idea of this initiative is not just, again, to encourage more replications to be done through registered reports, but also to unlock the file drawer of all of the replications that have been done out there, particularly in my areas of psychology and cognitive neuroscience, which have been done and consigned to the dustbin of history. And a lot of these papers, I think, have incredible value. We're getting quite a lot of submissions coming in, and they're proceeding really well through review. So I think you take little steps.
If the question here is how we get to the point where we're in a sound, confirmatory zone while still testing a new question: replication has to be normative. And the way you do that initially is you make it as widely available as possible, you reduce as many of the barriers that are out there as possible, and then we see what happens. Then we can start to build incentive structures around doing replications, many of which have been proposed already.
Yeah, I think one other theme that comes up from this question is: how do we know when we're ready to confirm something outside of the zone of direct replications? And it gets to a theme I think is going to come up in the next question, about how we know when theories and explanations and plausible predictions are sound enough that it's worth doing a very highly structured, sound confirmatory study on, you know, potentially a question that hasn't been put to a thorough test before. So it's not a replication, but we're at the point where testing a hypothesis in a very sound, confirmatory way is worthwhile. So that leads us to the next question, picking up on a lot of the discussion that's been going on on social media and in the published literature about the need for preregistration, and similarly for registered reports, for a large portion of the research that's being conducted: "It seems to me that there's truly not that much confirmatory research that deserves to be preregistered. However, we can continue to try to upsell the importance of every finding as if it were earth shattering."
Chris, what do you think is the correct balance here?
So this is an interesting question, because I think underlying this is perhaps the idea that in some areas, maybe in psychology especially, theories are not mature enough to really support a program of truly confirmatory testing. And we are perhaps beginning to learn this from the high rate of negative results that are coming through from registered reports in psychology. We're no longer fooling ourselves into seeing what we want to see in the data; the data is simply giving us the answer, and we're finding out that that answer is no. And I think perhaps one of the lessons that could be learned from that, and it's not the only one, is that in some fields confirmatory research is premature, and we need more observation, more exploration, more just charting the landscape before we begin formulating theories that in turn generate specific predictions that can then be subjected to rigorous confirmatory verification or otherwise.
Yeah, but it seems to me that this prevalence of null results is the evidence necessary to shake us into realizing that. It would be hard to imagine getting that realization without them.
Right. So in a funny kind of way, registered reports could be the death of confirmatory research in an area where they reveal an extraordinarily high rate of disconfirmation of hypotheses. If every time I make a prediction I'm wrong, then maybe I'm making the wrong predictions and I need to go back a step. Now, I don't know whether that's the case, and this is purely a subjective point, because nobody knows what percentage of hypotheses needs to be supported in an area (if that question even makes sense) in order for us to decide, "Hey, we need to do more exploratory research, or we need to develop better theories, and we need to invest our resources in that instead of testing predictions." And I don't really know how much of this is even specific to psychology, because if you put registered reports into any topic, so far, you find you get a lot of negative results. You could put them in cancer research and you're going to find a lot of negative results; you could probably put them in plant biology and get a lot of negative results. One of the interesting issues here is whether there's just too much confirmatory research in general across science, and maybe it should be reserved for areas that have a much longer, richer tradition of very specific theorizing. I don't know the answer to this question. It's well beyond my pay grade to make such pronouncements, really, but it's something to be thinking about. And perhaps a slightly paradoxical benefit of registered reports, an initiative which champions rigorous, unbiased confirmatory research, is that it may prompt us to say: actually, we don't need, or we're not ready, to do confirmatory research in an area. So I think it's certainly good to be thinking about these things. I have no idea what the correct balance is; I think that's something a community as a whole has to decide, based upon its priorities.
Alright, so we've got a lot of questions coming in, so let's dig right into everything. Thank you, everyone, for submitting. "How can we decide whether we need to supply pilot data for registered reports?"
So you need to think: what's the purpose of pilot data or preliminary data in a registered report? Typically, in the submissions I handle, it is to provide a proof of concept for a particular method, perhaps a novel method, that needs to be verified in some way that's independent. This is crucial: it must be independent of the actual hypotheses that will be tested. So some kind of independent verification that the method works in some way. If it's an analysis pipeline, pilot data might be useful for confirming that the pipeline does what it says on the tin: if you put data in, you get a sensible answer out. Again, not in the context of testing the original hypotheses, most likely, but instead just confirming that the pipeline passes the smoke test and doesn't catch fire. Another way authors sometimes use pilot data is to provide an effect size estimate for power analysis. This can be tricky, because it's usually not advisable to use any single point estimate when doing a power analysis: point estimates are biased and inaccurate and have an error bar associated with them. But still, there are cases where this can be useful for deciding on a zone for an effect size estimate for the actual preregistered study. So really, the overriding point is that you need to supply pilot data if there's some element of your method that you can't really pre-specify without knowing more about the general landscape in which you're going to be collecting the data. Some fields rely on this a lot. Most neuroimaging papers that we get as registered reports, for example, include pilot data of some kind to verify that the very complex pipelines used in the analysis actually work as intended. In other areas, say very well developed parts of cognitive psychology, these kinds of preliminary experiments aren't necessary. In some other cases it goes the other way entirely, where researchers will submit very large, very comprehensive experiments in stage one of their registered reports, the purpose of which is to generate hypotheses for the preregistered protocol. So these aren't really pilots; they're more "we did these experiments, which suggested this hypothesis or these hypotheses, and now we're going to put them to the test." So it can go the other way too. Basically, any time you want to use data in order to decide something about your method, or something about the question you want to ask in your preregistered protocol, that's the point where you probably need to be considering pilot data or preliminary data.
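On the point about pilot point estimates being noisy: one published remedy, sometimes called safeguard power analysis (Perugini, Gallucci, and Costantini, 2014), powers the study to the lower bound of a confidence interval around the pilot effect size rather than to the point estimate itself. A minimal sketch, assuming Python with scipy and statsmodels; the pilot numbers are hypothetical:

    # Hypothetical pilot result: d = 0.5 from n = 20 per group.
    import math
    from scipy.stats import norm
    from statsmodels.stats.power import TTestIndPower

    d_pilot, n1, n2 = 0.5, 20, 20
    # Approximate standard error of Cohen's d for two independent groups.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d_pilot**2 / (2 * (n1 + n2)))
    # Lower bound of a two-sided 60% CI, as safeguard power suggests.
    d_safe = d_pilot - norm.ppf(0.80) * se
    n = TTestIndPower().solve_power(effect_size=d_safe, alpha=0.05, power=0.90)
    print(f"safeguard d = {d_safe:.2f}, n = {math.ceil(n)} per group")
    # The pilot point estimate alone would suggest ~86 per group; the
    # safeguard estimate (~0.23) asks for roughly 400 per group instead.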
And I would also point to the importance, in field research, of demonstrating that you can manipulate the system or reach the community that you need to reach. In impact evaluations, economics research, ecology, and field research, it's often a point of pride to be able to demonstrate that you can do what you're proposing to do, and pilot data can be quite helpful for justifying your ability to do it.
Feasibility, yeah, demonstrating feasibility. Especially if you're doing something out on the edge.
Jesse asks, "Is it acceptable to include pre-planned analyses without hypotheses as well as hypothesis-testing analyses? I've seen some journals, such as Cortex, allowing this, but others (BMJ Open Science) specify that all analyses must test a hypothesis. When testing novel research questions, it's not always possible to have a hypothesis."
I think in theory, yes. In practice, when researchers pre-plan an analysis or a series of analyses in great detail in order to answer some question, they kind of end up with hypotheses anyway. It's just that they've perhaps not specified them, or that they perhaps don't stem from some very clear theory. It might be the case that they're exploring, but they want to explore a series of paths in an analytic chain, and each of them is, in fact, essentially a hypothesis; they just don't have a strong rationale for any individual one. Instead, they're going to test all of them. I mean, [inaudible] studies come to mind as one way of doing this: "Hey, let's just test 15,000 hypotheses and correct our alpha, because we're exploring the landscape and we want to do it in an unbiased way." So I think in principle, yes, it's possible. It's unusual. We don't get many submissions, at least not that I've seen, where researchers are able to articulate in sufficient detail what exact analyses they're going to run and what conclusions they will draw from what outcomes (which is also very important) without that, either way, just becoming a series of hypothesis tests, even if there are many of them. But I wouldn't rule it out. And if you're doing that kind of work, and you think you can meet that condition, where you've got a question or a series of questions, you have no predictions whatsoever, you have a big data set, you're going to run these very specific analyses and draw these conclusions based on the outcomes, but there are no hypotheses: provided bias is controlled throughout that chain, from question through to interpretation of outcomes, then whether there are explicitly articulated predictions is of secondary value, if any, to the registered reports process. That's not really a requirement, provided everything else is locked in and bias is controlled.
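For the "test many hypotheses and correct our alpha" scenario, a minimal sketch of what a pre-specifiable correction step might look like in Python with statsmodels; the simulated p-values are stand-ins for a real preregistered analysis battery:

    # Simulate a battery of 1,000 tests: 950 true nulls, 50 real effects.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    null_p = rng.uniform(size=950)                       # p-values under H0
    effect_p = np.array([stats.ttest_1samp(rng.normal(0.5, 1.0, 30), 0).pvalue
                         for _ in range(50)])            # p-values under H1
    pvals = np.concatenate([null_p, effect_p])

    # Two common pre-specifiable corrections: Bonferroni (family-wise
    # error) and Benjamini-Hochberg (false discovery rate).
    for method in ("bonferroni", "fdr_bh"):
        reject, *_ = multipletests(pvals, alpha=0.05, method=method)
        print(method, int(reject.sum()), "rejections")

The point is that the correction rule itself, like everything else in the chain, can be locked in at stage one even when no single hypothesis is privileged.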
"What is in your opinion the best power analysis tool that you can suggest for mixed factorial ANOVA designs such as a two by two by two by two mixed design.
Why on earth are you doing a two by two by two by two mixed design? Well, you know what, it's funny. With any kind of two by two by two by two design, pretty much everything in the end boils down to a t-test; it's just a difference of difference of difference of differences. I think things get tricky when you start dealing with multiple levels, like more than two levels of a factor. So G*Power can do some of this. G*Power, of course, notoriously struggles with factorial repeated measures ANOVA designs. Daniël Lakens has published a really nice preprint on this, which David can probably conjure in the chat.
Give me a few minutes here.
I've got it in one of my slides; I talk about it in some of my workshops. It's simulation based. I think the preprint is actually called simulation-based power analysis for repeated measures ANOVA, or something like this, and it's a very nice way of doing this outside G*Power. I think there's even a Shiny app that goes with it.
There's also PANGEA, which I think was a tool built by Jake Westfall; I might be wrong about that. But that's also a really nice tool for doing complex factorial ANOVAs, and it's quite flexible as well. Perhaps not quite as user friendly as G*Power, but G*Power does have its limitations. The other thing you can always do is simulate. And this is something that Dorothy Bishop advocates quite often: if you can't find an analytical solution for your power analysis, just generate data. So generate some data under your predicted effect size and number of participants for the particular part of the design that you're testing hypotheses within, feed it into your analysis, and run the power analysis that way. The one thing I would also say is that most of the time, for most registered reports that we look at, the key hypothesis tests are usually not the highest-level interactions within an ANOVA. Those high-level interactions might be necessary in order to go further, but they're usually not sufficient. Because in any kind of high-level design, where you've got three or four factors interacting, there are numerous patterns the data could take which would produce a significant interaction, and there might be only a handful of those patterns which would support your hypothesis. So it's really important to think about what pattern of results would confirm or disconfirm your prediction. And usually, when you drill down to that level in a stage one registered report, you end up with some kind of test that is comparing one condition with another, or one difference between [inaudible] with another difference. So whilst I wouldn't say don't go ahead with mixed factorial designs, think carefully about what part of that design is the crucial test of your hypothesis.
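A minimal sketch of that simulation approach in Python with numpy and scipy: estimate power for a two by two within-subject interaction by testing each simulated subject's difference of differences against zero, exactly the "it boils down to a t-test" reduction described above. The effect size, cell noise, and sample size are hypothetical placeholders:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2020)

    def interaction_power(n_subjects=40, interaction=0.5, cell_sd=0.5,
                          subject_sd=1.0, alpha=0.05, n_sims=5000):
        """Power for a 2x2 within-subject interaction, tested as a
        one-sample t-test on each subject's difference of differences."""
        hits = 0
        for _ in range(n_sims):
            # Shared subject intercept (it cancels out of the contrast)
            # plus independent noise in each of the four cells.
            subject = rng.normal(0.0, subject_sd, size=(n_subjects, 1))
            cells = subject + rng.normal(0.0, cell_sd, size=(n_subjects, 4))
            cells[:, 0] += interaction            # effect lives in cell A1B1
            a1b1, a1b2, a2b1, a2b2 = cells.T
            dod = (a1b1 - a1b2) - (a2b1 - a2b2)   # difference of differences
            hits += stats.ttest_1samp(dod, 0.0).pvalue < alpha
        return hits / n_sims

    print(interaction_power())  # proportion of significant interactions

Because the contrast is a single per-subject number, you can vary the assumed effect and sample size and read power straight off the simulation, with no factorial-ANOVA machinery needed.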
Yeah, yeah, there's probably a nugget in there somewhere, but that's the main focus. All right, very practical question: "A few journals recently started accepting registered reports, but they do not yet have clear guidelines for, say, the length manuscripts should be at submission. Can we use the non-registered-report published papers as a guideline for the outline of the submissions? Or are they different enough to assume larger pieces of writing will be accepted?"
What a good question; I've never had that question before. I would say, if there's nothing written specifically in the policy, there are two possibilities. One is that there are no word limits and that kind of thing, so it's basically just a big zone of freedom. And that's how I run things: at the journals I edit for, there are no word limits on registered reports at all, the guidelines are the guidelines, and if you follow those guidelines you'll be okay. And they're completely separate from the regular article guidelines. Not all journals work the same. Some guidelines are not as informative as they could be. Some journals do impose word limits, and sometimes those word limits are not stated so clearly. So if you're in doubt, and you're considering a journal, and you think you might run afoul of some arbitrary formatting guideline, or word limit, whatever it might be, I would just drop the editor a pre-submission inquiry and say, "We are considering submitting a registered report to your journal; we have the following questions." You might even use this opportunity to tell them a bit about what your study is about, to make sure that it's within remit. I get a lot of these sorts of inquiries, and they can be dealt with very quickly. But in general, a lot of the guidelines are quite specific. Where they're not specific, what I wouldn't do is just submit blindly to a journal and hope, because they might come back and say, "Thank you for your ten-and-a-half-thousand-word stage one registered report. Our word limit is 3,000 words. Go away and change it all." And you'll be like, "Oh, for God's sake." So just get sufficient clarity in your own mind before taking that leap.
Yeah, those pre-submission inquiries can be quite helpful. They point you in the right direction a lot of the time.
And don't be shy about doing that. Maybe some people feel it's a bit inappropriate to email an editor with a question. It is not. Pre-submission inquiries are routine at journals across all article types. Use it.
All right. "Referring to the recent Nature editorial from June 2019, where the registered report concept was described. Why don't need your journals themselves except registered reports yet. What is the status of registered report adoption in medical journals?"
Okay, so Nature Human Behaviour does offer registered reports; there is one Nature journal that has adopted them so far. And without speaking for Nature or the Nature Publishing Group, I can tell you that they are very keen on the concept generally. They are very supportive of it internally, and they're doing a lot of work to discuss, amongst the different editorial teams, ways of implementing it. I've been involved in a lot of the discussions with them right up until the end of last year, and I can tell you that there are two more journals coming along very soon in the Nature group, which will be very significant adoptions. Nature is also in the process of considering adopting the format, and the chief editor, Magdalena Skipper, is very positive about registered reports. But they're also cautious. From their point of view, I think their main concern isn't publication bias or having to accept papers regardless of results; it's making sure that they have the editorial expertise on hand to assess these manuscripts at stage one without making mistakes. I think this is quite prudent for a set of journals which employ professional editors rather than academics: they want to make sure they're getting it right, and I think that's okay. So they'll be a bit slower to come online, but I think we can look forward to a future where most, if not all, of the journals within the Nature Publishing Group accept registered reports eventually.
The status of registered report adoption by medical journals is a separate issue again. So BMC Medicine was the first major medical journal to offer registered reports; I think they launched in August 2017 or 2018. And they've already had some submissions, which is good; they've been getting some good submissions coming in. There are a couple of smaller medical journals which are now offering them, but none of the big five medical journals, like BMJ, the New England Journal of Medicine, [inaudible], etc., are offering registered reports yet. They do know about the format. They have made few, if any, positive noises about it yet. We don't entirely know why that is. I suspect when publishers or journals go silent on registered reports, it can often be because they are concerned about having to accept manuscripts regardless of results. If they say nothing, that is often what fills the silence. But they're reluctant to say that in any public way, because it's kind of unpopular to say that your marketing model depends upon publication bias. So we don't yet know why these big journals aren't offering the format. I would suspect some combination of fear of eliminating publication bias and perhaps consequent effects on impact factor and rank and this kind of stuff, which these big journals care a lot about. But we will keep pushing. And I would suggest anyone who's watching this who wants a medical journal to adopt registered reports: go and ask them. The more people that put pressure on these editorial boards to do the right thing and to offer this as an option, the sooner they'll just fall over. They can't say no forever. I've seen numerous cases over the years of journals shaking their heads and saying, "No, we couldn't possibly do this," and then after a while they change their minds, because they realize which way the wind is blowing. So keep blowing wind, keep pushing them, and eventually I think we'll succeed on all these fronts.
And I think that the field is a natural fit for the model. They're extremely used to the process of trial registration, preregistration. There's been a whole lot of work done in that field by Ben Goldacre and many, many folks who have been looking at the difference between what's registered and what's reported, and looking at the different types of outcomes reported on registries versus those that are reported in the articles. So that field is quite aware of the issues. And keep on pushing. If you're in that field, as Chris said, ask an editorial board for it, or even do a pre-submission inquiry saying, "I would like to run this study, and I believe the results should be published no matter what." A couple of journals have come on board just from direct inquiries of people asking for their particular study to be submitted as a registered report. So those are all possible.
David, this might be a good opportunity to post in the chat the link to Registered Reports Now, which is a crowd initiative to increase pressure on journals to adopt registered reports. It works by providing template letters for consortia, groups of researchers at all career levels, to lobby using collective action: to write to journals and say, "Please offer registered reports. These are the common objections; this is how you handle them; this is how you do it." It puts me and David right in the firing line of helping these journals set up, which is something we're always happy to do. The more groups of researchers, the more critical mass there is, as I say, the more likely these journals are to eventually flip. And the nice thing about Registered Reports Now is that there's a public list of every journal that has been approached, which shifts this entire lobbying initiative from behind closed doors, which is the way we used to do things, right out into the open, where everyone is accountable for the decisions that they make in positions of leadership. So I'm sure David will post a link to that. There it is, it's right there now in the chat. So please read and use this and assemble. Avengers, assemble. Go ahead and approach these journals.
Next question: "would you consider a secondary data analysis study, for example, on a longitudinal cohort for a registered report if the lead author has never accessed the data set before?"
Yes, a very easy one to answer. Absolutely. Where the authors have never accessed the data before, there is no risk of bias, or minimal risk of bias, and so I would personally consider that perfectly fine for a secondary-analysis registered report. That's just me, of course; I edit seven out of the 223 journals. So if you are considering a journal where I'm not an editor, check their policy. If they don't say anything about secondary analysis for registered reports, perhaps, again, use a pre-submission inquiry to lay out your scenario and what you have in mind. Make sure that you emphasize in your pre-submission inquiry the steps that you have taken to prevent or minimize bias and overfitting, which is always a risk when you're analyzing existing data, and see what happens. Most of the time, particularly if you haven't even accessed the data, a secondary-analysis registered report will proceed in much the same way a primary one would.
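One concrete way authors sometimes demonstrate that kind of bias control in a secondary-analysis submission is to carve off a locked confirmation sample before anyone looks at the outcome variables. A minimal sketch, assuming Python with pandas; the file names and the 20% split are hypothetical:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    cohort = pd.read_csv("cohort_wave1.csv")   # hypothetical cohort file

    # Randomly lock away ~20% of participants for confirmatory analysis.
    holdout = rng.random(len(cohort)) < 0.20
    cohort[holdout].to_csv("confirmation_holdout.csv", index=False)
    cohort[~holdout].to_csv("exploration_set.csv", index=False)
    # The stage one protocol is written, and accepted, before the holdout
    # file is ever opened; only the exploration set informs the plan.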
Yeah, absolutely. All right. Victoria asks, "After about 10 months and two rounds of reviews, our stage one registered report was rejected, because 'the effect size used in the power analysis is likely to be an overestimate of the true effect size, is not actually the effect estimated in the design, and does not match the actual analysis plan.' While we do agree with reviewers on this response, we felt we'd done the best we possibly could given the frustrating lack of existing effect size estimates. It seems the reviewers think our design simply wasn't a fit for a registered report, which we interpret to mean that all registered reports must either, one, be exact replications, or two, use Bayesian analyses. Do you think this is true? If not, what room is there for power calculation estimates with registered reports?" There's a little bit more background, but let's talk about that generally.
Um, wow. Okay, that's a bit disappointing to hear, to see a registered report rejected after two rounds of review. You'd have thought this issue could have been addressed much sooner in the process. And to my mind, this is very common: we see a lot of registered reports in which reviewers raise the issue that an effect size estimate is overly optimistic and, therefore, that a much larger sample size is needed. The best way to address this is simply to ask the authors to increase their sample size and, as part of that, to align everything, making sure that every link in the chain is exact as you pass between question, hypothesis, sampling plan, analysis plan, and interpretation. So, in answer to the question, should all registered reports either be exact replications or use Bayesian analyses: I think that is not true. I know it's not true, because most registered reports I handle are not exact replications and most don't use Bayesian analyses. So we know that is not true generally. It appears that in this particular situation there's been some problem that has led to this conclusion. In general, when you ask what room there is for power calculation estimates with registered reports, the key thing to really nail is making sure that you are tackling something that everyone would agree is the smallest effect size of interest. And this can be a point of contention, particularly in areas where there isn't a huge evidence base or theoretical base to motivate that effect size estimate. Usually what happens in areas like this (so this is a study on infants, yes, bilingual infants) is that reviewers from the same area appreciate the limits on the feasibility of doing very large studies. So usually there's a natural kind of realization that doing a registered report that is already larger than typical, within an area where these sorts of resource restrictions apply, is better than doing it the old way outside the registered report format, in a small sample, and introducing all this bias. So this is an unusual situation, as far as I can see, where after two rounds of review there's been this massive disagreement. What I would suggest, perhaps, is that Victoria, you contact me offline, and we could talk more about some of the details. It might be worth appealing, if it's not too late, particularly given that you've gone through two rounds of review. For any registered report that goes this far, there is always a solution: it might be to just recruit more, or to change the analysis. There's usually a solution that can lead to acceptance, but it might be worth exploring that in more detail, and I'd be happy to talk about that.
Yeah, yeah. Often, early rejections are for not being the right fit, or because the answers could be uninformative. But these should be solvable problems; they are, of course, as you described, challenging ones, but there is a solution there.
Yeah. As I say, it's very detailed, so I don't have time to look at all the details here and respond to everything, but contact me offline and we can discuss this in more detail. And who knows, maybe we can come back in our next Q&A and say, "Hey, yes, this registered report ended up being appealed successfully at Journal of X."
We'll be curious about this, Victoria, so follow up.
Martin asks, "What effective alternatives to registered reports and pre-submission inquiries are there to reduce the risk of manuscript rejection?"
What effective alternatives to registered reports?
So can you elaborate on this question, Martin, if you're listening? Are you talking about alternative article types, or is the question about alternative approaches to submitting your registered report?
I think maybe focusing on this: besides a pre-submission query for a registered report, what should we do to maximize the probability of it being accepted? And we did post those top 10 recommendations.
So maybe it's worth posting the top 10 recommendations again, if you've got that document to hand. I include this within the registered reports guidelines at, I think, most of the journals that I edit, because there are lots of common ways that manuscripts fail to meet the criteria sufficiently to get to in-depth peer review. And I should point out, it's not necessary to nail every single criterion 100% to get to the reviewers, but you probably have to get about 80% of the way there. The editors, because they're not going to be specialists in every area, have to be able to see that peer review in this context will be constructive and isn't just going to identify a whole lot of glaring omissions, because we try to avoid that for everyone's sake. I will not send a registered report submission out for in-depth review if I feel that it falls a long way short of meeting the stage one criteria, because that risks wasting everyone's time. If the reviewers see an enormous gap between what they're reading and what they think a registered report needs to be, it's much more likely that you'll get three reviewers recommending outright rejection, and then the editor is in the difficult position of having to decide whether or not to invite a revised manuscript. It's much better if authors get closer to that point when they initially submit. So that's where these top 10 recommendations come in. This is not the top 10 recommendations, what you've put up there, David; that's the checklist for building registered reports. But it's actually also great, and I recommend using it as well. By the way, coming back to the last question, one of the things you can do, before that disappears: in that checklist, question nine has a table that I've started using in my registered reports workshops. If you really want to nail the linkage between question, hypothesis, sampling plan, analysis plan, and interpretation to everyone's satisfaction, including your own, then this table is really useful, and I would recommend you actually complete it and put it into your stage one registered report, because it'll help everyone understand exactly where you're coming from. But as I say, this is not the top 10; the top 10 is a separate document we've got somewhere.
Okay, we'll find that and [inaudible] post it [inaudible]. Sorry about that.
That's the best way, really, of making sure you don't get a desk reject. Or even worse, perhaps: it goes out to in-depth review with a whole lot of omissions and problems, and the reviewers are just like, "What the hell is this?" And then you get a whole lot of negative reviews back, and then you go through this torturous process. The best way to avoid all that is to nail it when you submit, and those top 10 reasons for desk rejections should help you do that. By the way, one other thing you could think about, and I've seen a couple of people do this, is to post your stage one draft as a preprint for a few weeks before you even submit it to a journal, and then get community feedback on it. That can be quite useful.
Yeah, that's a really good idea: post it online and share it around with as many colleagues as you can. That's a really good way to [inaudible] omissions, and you can get some good feedback at that point. All right, we'll find those top 10 recommendations and share them out. But I think that is all the questions that have come through. You're welcome, Martin. And just double-checking the chat window here, because a couple of questions came through the chat and the Q&A. Okay. All right. So I think that's it for now. We will make sure to send the recording of this webinar out to the panelists, and we're looking into ways to provide a transcript for it as well; that can be a useful way of disseminating some of this information. And, Chris, thank you very much again for your time.
Pleasure, as always. And as I say, if you've got any questions you'd like to follow up with me one-on-one, you can always drop me an email, and I'll take a look at any individual cases. But until next time; we'll do this again, probably, what, next month?
Probably, yeah.
Yeah. Super.
Take care, everyone. Bye bye.