February 27 Getting Involved with TOP Factor
7:56PM Feb 27, 2020
Speakers:
David Mellor
Keywords:
journals
policies
guidelines
data
transparency
top
steps
practices
reported
replication
published
badges
standards
precisely
expectations
required
article
publishers
factor
registered

Alright, we're going to go ahead and get started. Thank you, everyone, for joining us today for Getting Involved with TOP Factor. My name is David Mellor. I'm the Director of Policy Initiatives here at the Center for Open Science.
So, the Center for Open Science: we are an independent nonprofit organization located in Charlottesville, Virginia. We're funded by both government and private family foundations, with a mission to increase the trust and credibility of scientific research via transparency. We work to achieve that mission by identifying barriers to reproducibility through large-scale replication projects; we advocate and educate for incentives, policies, and practices that address the problems leading to low reproducibility; and we build and maintain the OSF, a platform for connecting and collaborating on research, and for supporting policies and practices that more directly align with the ideals of scientific practice.
Today, I'll talk about the TOP Guidelines and the needs that led us to come out with the TOP Factor as a way to more universally describe precisely what steps are being taken to promote open science practices.
Towards the end, the second half will be about how to get involved with TOP Factor: how to point out policies that need to be updated, how to submit recommendations for journals for us to evaluate, or how to evaluate journal policies yourselves and send them our way. Finally, we'll talk about some of the future plans we'd like to see over the next couple of years and how this can evolve.
In case you haven't heard, though I bet most of the people here online have, the Transparency and Openness Promotion guidelines, the TOP Guidelines, consist of eight standards that can be applied at three levels of rigor. They are tools for journals, publishers, and funders to take more direct action towards promoting open science practices. They cover data citation, in order to incentivize the publishing and sharing of data sets in the first place: data sets should be treated as citable objects, and individuals should get credit for that work. Data, materials, and code transparency are the core set of practices we want to see underlying empirical articles, so that if somebody wants to build upon your work, key details aren't missing.
Registration of studies involves putting a study into a registry before it is conducted, which helps open the file drawer and helps us understand what the denominator is: how much research is actually conducted every year compared to how much gets published. Preregistration of analysis plans starts to address some of the misuse of statistical analyses and, most importantly, makes the distinction clearer between confirmatory, hypothesis-testing research and exploratory, hypothesis-generating work, or work that's involved in model or theory development. Keeping those two modes of research distinct is important for a variety of reasons. And then there's encouraging replication studies. Replications are the bedrock of scientific evidence in many applications, but they can be very hard to get funded or published. So TOP takes that on.
I'm going to give a couple of examples of the three levels of rigor at which these policies can be applied. The status quo for many journal policies is to encourage, or sometimes even discourage, some of these types of practices. And we know, and there's been a lot of empirical work showing this, that those encouragements aren't very effective at actually producing the practices we want to see happen. For example, encouraging authors to share data when it's not required doesn't lead to many openly available data sets. So all the TOP guidelines start just above that status quo with a disclosure requirement: "state whether or not data are available," for example.
Level two is a mandate: data must be made available in a trusted repository. There are exceptions for ethical and legal concerns, but otherwise there's the expectation that data be made openly available. And then level three is a reach goal, where data must be provided to the maximum extent ethically and legally permitted, and somebody takes the effort to computationally reproduce the reported findings using the authors' original data. That is a reach goal. It's not going to be widespread in the very near future, but it is possible, and one of the main points of the TOP Factor is to show what's possible by seeing what other journals are doing in related disciplines or in your own discipline.
Preregistration is a little bit similar, a little bit different in some ways. The level one standard is the same disclosure expectation: the article states whether or not the work was preregistered. That's important information to have.
At level two, if the work was preregistered, the journal checks for compliance with the plan or for transparent disclosure of changes from the preregistered plan. In most preregistered plans that we see, there are deviations and changes between what was planned and what was actually conducted. But it is important for others to be able to evaluate the timing and the rationale for those changes. So undisclosed deviations from preregistered plans could be cause for concern, though not always.
And then finally, the level three mandate: inferential or confirmatory studies must be preregistered if they're going to be published in the journal. Again, that's a reach goal. A couple of journals do that, and they're setting clear expectations about the types of studies for which preregistration is expected: if you're going to do a confirmatory, well-justified hypothesis test on a sample and make an inference from that to a wider population, preregistration adds a lot of value, and some journals and some funders are requiring it.
The TOP Guidelines were created in 2014 and published in Science in 2015 under the heading "Promoting an open research culture."
The purpose of the TOP Guidelines was to provide recommended language and a framework for implementing best practices in scientific publishing and funding.
One of the barriers to changing policy is just not knowing precisely what to include or how to include it. So the policy language and the example templates provided by the TOP Guidelines are all CC0, and we've been working with editors, publishers, and funders for the past five years to help with adoption and implementation of those practices and policies in those venues.
There has been wide support for this framework, the TOP Guidelines framework. There are over 5,000 signatories of the TOP Guidelines, representing every major publisher and every major discipline: wide support for the philosophy of promoting these types of practices and working towards implementing one or more of the different practices and policies. A signatory of the TOP Guidelines is an entity that is saying, "we support the principle, and we will review the policies over the course of a year to determine which, if any, are appropriate for implementation." We have seen widespread implementation of one or more of these TOP policies, so we've been doing our best to track those and to point to examples of publishers, journals, and funders implementing TOP policies: we know of just over 1,000 journals and funders that have TOP-compliant policies.
But that's actually a pretty crude measure of implementation of these TOP standards. The definition of a TOP-compliant policy is that the author guidelines have at least one policy that is at least level one on one of those eight standards. So that doesn't give all that much information about who is moving and who is not. As you can imagine, these journals are implementing at various different levels. There are several that are taking very direct, very high-level steps: computational reproducibility, or implementing two stages of peer review, the Registered Reports format. And we see widespread adoption of that kind of level one philosophy of stating whether or not the data are made available, for example. But we also know, looking through those 1,040 implementers, that the most common implementation of a TOP-compliant policy is describing how a data set should be cited, which is a great practice of course, but there's a lot more that can and should be done. Wiley, Springer Nature, Elsevier, and Taylor and Francis have each come out with a series of data policies that can be applied to the journals they publish in a modular way.
They describe them in slightly different ways, but the policy types, or policy levels, carry descriptors like basic, share upon request, made publicly available, encouraged, expected, mandated, or verified. Those are all different types of policies that have been implemented by the major publishers and by individual societies.
So we wanted to give a little bit of clarity into which of those policies comply with the expectations of the TOP Guidelines. Remember, a basic level one implementation of TOP does require something: it requires that each article has a data availability statement. Some of those publisher policies comply with the level two mandate that data be made available. And then a couple of journals and a couple of publishers have policies that comply with the higher expectation: the level three policy is that extra step of computational reproducibility. There are journals that do that, and there are publishers that point to how that can be done.
But that's only gone so far, so we are looking for ways to provide more specific information. Up to this point, we did not have a comprehensive database of journal policies as they relate to everything covered in the TOP Guidelines. We were very frequently asked for examples, or for information about how much any given policy is being implemented across a particular discipline or subdiscipline, and we had no public way of providing that information clearly and quickly. We also didn't have a means of providing very direct feedback on specific attempts to implement policies covered under the TOP Guidelines.
We're in frequent communication with a large number of journal editors, policymakers, academic societies, and publishers, but those conversations have been quite ad hoc, happening in a variety of ways as opportunities became available. What we have now with the TOP Factor is very specific and direct feedback on how a policy relates to the framework covered by the TOP Guidelines. Up to this point, it's been difficult to compare and learn from what others are doing in your discipline, in other disciplines, or across publishers. A good example is the work being done by the political science community and by the economics research community, the American Economic Association. Both of those have several journals where computational reproducibility is being tackled head on. Lessons learned from those should apply to many other communities, and comparison across communities makes that more easily accessible. An individual looking to raise the expectations for the work that they publish will be able to see examples of what other disciplines are doing, or which practices are being taken up within their own discipline, and lessons can be learned that way. We also want to give more consistent recognition to those implementing best practices.
As policies came across our desk, we would promote them, put them on our website, and showcase what we were able to see. But that was, again, a bit of an ad hoc process. We wanted to provide more consistent recognition of journals and publishers that are taking these steps head on. Those who are implementing the level three practices deserve recognition and credit for the work they're putting in, and this provides a way to do that that's unbiased by whoever I happen to have heard of most recently, for example.
And then finally, awareness of those not taking the minimum steps. There's often a discussion about carrots and sticks for how to make progress in scientific reform. Up to this point, we've been promoting a lot of best practices and encouraging journals to take further steps. But we've known that a large number of journals, and a large number of the articles published in them, aren't meeting really basic expectations, what I would say should be requirements for the minimum level of transparency expected of empirical research claims. At the same time, we're providing very clear guidance, tools, and resources for taking a step up and implementing some of these better practices. But it does take awareness of how many journals and how many policies are not being implemented to the degree that they should be.
So that all comes to the TOP Factor: a database where you can evaluate and see the policies and steps being taken by a large number of journals and publishers. Let me do a live demo; I've got a video backup just in case I crash something. This is available at www.topfactor.org, and it's a database of journal policies. Each of the TOP standards is listed here at the top of the page: data citation (let me make this bigger); data, code, and materials transparency; design and analysis guidelines, looking for those reporting checklists; study and analysis plan preregistration; and replication, whether or not the journal encourages replication studies. Level two would be encouraging replication studies and reviewing them with the results stripped out of the review process, and level three would be encouraging replication studies to be submitted before the study is conducted. That is, of course, a registered report.
There's a separate standard for other types of registered reports: does the journal encourage submission of novel research studies as a registered report? That would be a level three policy. Level two would be what's known as a hybrid registered report, submitting those studies with the results again removed. And level one is a basic policy of just stating that the journal will publish regardless of the novelty or significance of the findings. Then finally, Open Science Badges are a way to indicate whether the data, materials, or preregistration underlying the reported results are available. That's a visual indicator located on the journal article to demonstrate and point to the fact that more transparency underlies the reported findings than might be required by the journal.
So if you're particularly interested in seeing which journals are taking steps towards that data transparency standard, you can sort by data transparency, or you can sort by analysis code [...]
By sorting by level three there, you can see all the journals that are taking that highest step towards [...] how many journals are, for example, at level three of data transparency, and those are good examples to follow. You can also see journals where registered reports are accepted for replication studies.
Maybe you're focused on areas of empirical research where preregistration or replication studies aren't of interest to you. You can filter those out and just focus on journals that publish empirical articles, and the steps they're taking for data, materials, and code transparency, for example. The total in this column is updated as those filters are applied. The total represents the sum, as you can imagine, of all their policy scores, level one, two, or three on each standard. The highest possible total at this moment is 29; the Open Science Badges standard has two possible points, for giving one or multiple badges. And you can sort, of course, by whatever total you're interested in looking at, you can filter to see what steps various publishers are taking, and you can filter by discipline.
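To make the arithmetic behind that total column concrete, here is a minimal sketch in Python of how a journal's TOP Factor total adds up. The standard names and dictionary keys are my own placeholders, not the official column headers of the downloadable spreadsheet.

```python
# Minimal sketch of how a journal's TOP Factor total adds up.
# Standard names and keys are illustrative placeholders, not official headers.

STANDARDS = [
    "data_citation",
    "data_transparency",
    "analysis_code_transparency",
    "materials_transparency",
    "design_analysis_reporting",
    "study_preregistration",
    "analysis_plan_preregistration",
    "replication",
]  # each standard is scored 0-3

def top_factor_total(scores: dict) -> int:
    """Sum eight TOP standards (0-3 each), registered reports (0-3),
    and Open Science Badges (0-2); the maximum is 8*3 + 3 + 2 = 29."""
    total = sum(scores.get(s, 0) for s in STANDARDS)
    total += scores.get("registered_reports", 0)   # 0-3
    total += scores.get("open_science_badges", 0)  # 0-2
    return total

# Example: disclosure (level 1) on every standard, a level 2 data policy,
# registered reports offered for novel studies (level 3), one badge type.
example = {s: 1 for s in STANDARDS}
example.update(data_transparency=2, registered_reports=3, open_science_badges=1)
print(top_factor_total(example))  # -> 13
```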
Again, you can see what economics journals are doing and you can compare them to psychology for example. Or just focus on what the psychology literature is doing.
Okay, that's it for the demo for right now, but I encourage you to play around with it more. One question just came in about level one and level two for badges: again, that's whether they offer one badge or multiple badges.
A specific rubric is available describing zero, one, two, or three points for each of these different standards; I'll show where that is made available. Oh, that's a really good question -- thank you, Malika, that's something I had in my notes that I forgot to demonstrate. I'll go right back here to these little blue dots. These are the justifications or explanations of the policy scores. [...]
There is ambiguity in lots of these author guidelines about what precisely is required. These blue dots give a little hover text with a justification for why a level zero, one, two, or three is warranted in each case. I'll give a couple of examples later on of where we see a lot of ambiguity, but this is just an explanation to the author or to the editor describing why this level was determined.
For example, there could be a very in-depth data policy that doesn't actually require anything. In that case it was rated a level zero, despite the fact that they have a lot of explanation about how to share data, and we would explain that no data availability statement is required, or that data transparency is not required, and so that does not comply with the TOP Guidelines. Those blue dots give a little explanation of why that score was applied. Here's one: code availability is strongly encouraged, but again, that doesn't satisfy the TOP standard.
I'll show you where you can get the link to this evaluation rubric. But each of the scores comes down to asking, for example, whether all the underlying data must be made available, or whether there must at least be a data availability statement.
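As a rough illustration of how those rubric questions map to a score, here is a small sketch of the data transparency levels as described in this talk. The function and parameter names are hypothetical, and the wording paraphrases the talk rather than the official rubric.

```python
# Illustrative only: the data transparency levels described in this talk,
# expressed as a scoring function. Not the official TOP Factor rubric wording.

def score_data_transparency(requires_statement: bool,
                            requires_deposit: bool,
                            reproduces_results: bool) -> int:
    """Return a 0-3 level for a journal's data transparency policy."""
    if reproduces_results:
        return 3  # reported findings are computationally reproduced from the data
    if requires_deposit:
        return 2  # data must be shared in a trusted repository (with exceptions)
    if requires_statement:
        return 1  # each article must state whether or not data are available
    return 0      # sharing only encouraged, available on request, or not mentioned

# A policy that merely "strongly encourages" sharing scores a zero:
print(score_data_transparency(False, False, False))  # -> 0
```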
A couple of summary stats for what we've seen so far: we're up to 346 journals in the TOP Factor database. I'll show you where you can download that and see precisely which journals are included. Scores range from zero to 27, out of a maximum of 29. The mean score is 4.9 and the median is three, meaning that half the journals in the database are at or below three. The modal score used to be one, because of a large number of journals that have that data citation encouragement, but we've been updating over the past couple of weeks, and as of now the most frequent TOP score is a zero.

We started sharing this with journal publishers and editors about four weeks ago, and we've seen, it's a little bit of a rough estimate at this point, about 35 journal policies change based on discussions that we've had, or on expectations that editors thought were explicit but weren't actually explicit in their author guidelines, after seeing the results of these TOP scores.

All this information is available at cos.io/top. We have some information there about the rationale for the TOP Guidelines and how to use them, and importantly, what steps you can take to get involved. That can be suggesting journals for us to include in the database. We have a bit of a backlog of journals that we'd like to get up there; we share the scores with the journal editors before we put them on TOP Factor publicly, and we track that internally. So please do send us your suggestions. I can't guarantee precisely when they'll get up there, but we do track those requests and we'll add them when we're able to.

I'll get to this in the next couple of minutes, but please do submit journals that you have evaluated yourself. We have a form for that. If you have a couple of journals that you would like to show up on TOP Factor, send us your evaluation; we'll compare it with what we see and then upload it to the website. And finally, we do make mistakes. If you see a policy that's not accurately represented, please let us know, either by email or through that suggestion form, which notifies us that we need to take a second look at a policy. That often starts a conversation between us and you, or between us and the editor or publisher, to make sure our expectations are clear.

If you click on that "submit journals you have evaluated" form, you'll get to this submission form. You can send us information you've evaluated using the rubric that I pointed to; the rubric is available on the website right here. It's a pretty clear set of questions: do they have a policy on data citation? Do they require it? Do they state that they'll check it?
And getting to that blue dot that was asked about a minute ago: that's the justification. If you're not sure, if you think you're being too lenient, you can make a note about that; if you think you're being too strict, you can say it doesn't appear that they actually require anything, but the language is ambiguous. For data transparency, again, the options are: data sharing is encouraged or not even mentioned; the article must include a data availability statement; data must be made available; or findings are computationally reproduced. And again, you can add a little justification if there's a bit of ambiguity. We see several common issues when evaluating these data transparency standards. The three most frequent ones, I think, cover about 80% of the questions we have when looking at a data policy.
First, the policy must apply to all the data underlying the reported results. Oftentimes, author guidelines will state that a subset of the data has to be made available, for example in communities where there is widespread agreement that everybody puts their genomics data in a particular repository. That's good; we encourage those policies and have nothing against them, obviously. But the TOP Guidelines state that all the underlying data used to generate the reported findings must be made available, especially the statistical data, the samples taken in order to make inferences to a wider population. We see a lot of benefit in transparency into that type of data set.

Second, "available upon request" is not compliant with the TOP Guidelines. A policy saying that the authors must make the underlying data available to the reader community if there is a request to see it is not compliant; there's a lot of empirical evidence that that doesn't actually lead to much data transparency.

Third, we see a lot of policies that strongly encourage sharing, or say that data should be deposited, for example. Unfortunately, that's an unenforceable expectation. If the policy simply says that the article should have data available, you can't go to the journal and say, "this article does not comply with your policies; can you please help me figure out what to do about this?" So those are the types of statements we frequently see in author guidelines that do not comply with these standards.

Design and analysis standards concern the use of reporting guidelines or reporting checklists. The purpose of these standards is to make clear precisely what was conducted and to report all of the important statistics and design elements necessary for understanding precisely what the methods were. Many people might be familiar with the fact that methods sections have been shortened over the past few decades to such a degree that it's often impossible to tell precisely what was done. That's one of the major barriers in our reproducibility projects: it's impossible to tell precisely what the design was. A checklist can help remind the author of what important details to include in their manuscript. So we often see author guidelines pointing to resources such as the EQUATOR Network. I'm not sure how many reporting guidelines the EQUATOR Network curates, but it's several dozen or probably several hundred; some of the major ones are the CONSORT guidelines, the ARRIVE guidelines, and the PRISMA checklist. Different disciplines and different study designs require reporting different types of information, and there are well-curated checklists from communities that have taken that on.
A couple of questions are coming up; I'll make sure to get to those at the end. Journal-created checklists, like those by Nature and the STAR Methods at Cell Press, are good examples of an individual journal stating, this is what has to be described when reporting the results of empirical work. And there are lots of societies that have taken this on too; the APA JARS standards are good examples of items that need to be included when reporting empirical research.

And finally, a couple of other steps: whether or not the journal encourages replication studies, or whether they encourage replications as part of a registered report format. Most journal policies don't mention anything about whether replications are appropriate for the journal, and there are several author guidelines out there that specifically discourage them. Similarly with registered reports: does the journal accept this format?

There are several benefits to the TOP Factor. It is transparent: all the data behind it are made available on our platform, and we point directly to the author guidelines we're using to evaluate each journal. It's based on practices that are directly associated with core values of how science should be conducted, as opposed to significance or novelty or newsworthiness. Those are not scientific values; they give other information that's helpful to have, but they don't get to the importance of the underlying evidence. It evaluates something that the journal controls: you can't control lots of things in life, but a journal can control precisely what steps it takes on these fronts, so it's very easy to [inaudible]. Importantly, this diversifies away from all the other metrics out there that focus only on how much attention is grabbed by a journal article. Again, that's fine information to have. We don't want to eliminate that, and I don't think the world would come to an end if we did, but I think it's important to have other ways to evaluate the steps being taken by a journal.
There are limits to what the TOP Factor is. It is still a journal-level metric; it does not directly speak to individual articles. A good example: a TOP Factor score of eight could come from disclosure policies for all these practices and encouragement for submitting replication studies. That journal is taking measurable steps in the right direction. But if the answers in a given article are "no, data are not available; no, I will not share my materials; no, I'm not going to follow the checklist," if all of that is "no," then of course the evidence underlying that reported finding is no more transparently available than it would otherwise be. So it's still a journal-level metric that doesn't necessarily apply to the individual articles published there, and we wouldn't want to imply that it always does. Registered reports are another good example. A journal offering this format is taking a great step towards addressing publication bias and the incentive to present exploratory findings as if they were confirmatory; we obviously greatly encourage that. But not every article published in the journal is going to be a registered report, nor should it be. So that's, again, article-level information that isn't reflected by this journal-level TOP score.
Finally, there's a risk of gaming through unenforced policies. It's fine for a journal to assert that it's doing X, Y, or Z, but of course there is an expectation that it follows through. The journal states that all underlying data must be made available, or that a really good reason why they're not available, and the steps to take to access the data, must be described in the disclosure statement. If journals are not following those asserted policies, then they are, in effect, getting credit for being more transparent than they deserve to be. So the solution to that is an auditing process that we would like to help develop with the community.
So I get to the future of the TOP Factor. We obviously want to get a lot more journals covered by the TOP standards; we want to get to about 1,000 by the end of the year, so we need your help. Submit recommendations for journals to evaluate, or better yet, submit evaluations of journals you've seen throughout the field. You can also submit them to us directly in a .CSV file, using the same format in which the data are available on the OSF, if you don't want to go through that Google form. So please do send us that if you'd like. I think we're on track to get to about 1,000 by the end of the year, but I don't think we'll get there without a little bit of extra help.
As I mentioned, audits are going to be necessary. We don't yet know the fairest way to do that, whether it's a sample or every article published over a given timeframe, what counts as an unenforced policy, or how to display that information on TOP Factor or someplace else. But we are transparently showing what the journal is asserting, and I think it's only fair to have a way to check that those policies are being enforced the way they're expected to be.
And, of course, we have ten fields on the TOP Factor right now: the eight TOP standards, registered reports, and badges. I think there's a really good argument to be made that transparency into peer review is one additional step that can address some bad practices in scientific publishing, and there are probably a lot of others. So all of those are on the table for future inclusion in TOP Factor. With that, I'd like to say thank you.
I'm going to ... there are a couple of questions that have been submitted, and I'll make sure to get to those. If you have more questions, please submit them through the Q&A panel.
"Does the TOP Factor replace the ICMJE recommendations? The International Committee of medical journal editors has specific recommendations for steps that should be taken in publications." It does not replace what ICMJE is doing. They have a set of specific criteria for particularly in clinical medicine, what needs to be made available and there consortium, I don't know, I don't remember precisely how many journals are on their board, but there's a certain core set of folks in the committee. And then there's a wider community that has asserted that they follow the recommendations of that committee. So the committee provides specific recommendations. I believe, I might get this wrong, so please correct me, I believe their requirement is this disclosure of individual patient level data of any clinical trial needs to be a statement describing how to get access to that data. So it's kind of a level one policy but with an expectation that it would be made available through through ethical means. They also obviously have strong recommendations towards registration of clinical trials. Registration for clinical trials has been required by law for about 20 years now on clinicaltrials.gov. And ICMJE stated a few years after that, that none of our journals, or journals taking our recommendations should publish the results of the clinical trial if it was not prospectively registered and it was not registered before the first patient was enrolled in that trial.
Most of our focus is outside of clinical medicine. As I just described, there are strong community norms and strong legal requirements for rigor and transparency in that field. We see this as a complementary effort; I think we have a lot to learn from each other. But there are no specific plans for TOP Factor to ever replace, for example, what the ICMJE is doing. That's not on the table.
Yeah, for the Open Science Badges, right now the score is 0, 1, or 2. There is the possibility of more badges in the future, particularly for analytical code, or for other things, so that could go up. We just want to give a little bit of transparency into what steps are being taken to recognize when data, materials, or preregistrations are available. So that one is subject to change as it evolves. And it doesn't have to be the Open Science Badges that we promote and encourage the use of; there are a couple of other publishers and journals that indicate when data are available through kitemarks or other visual indicators on the table of contents. So that's the criterion for that badge.
Okay, "is there anything you can say on the feedback you have had since launch?" The the biggest feedback we have gotten has have been folks reaching out to us saying "I really disagree with, with what you're [...] I don't quite understand why this evaluation is being promoted in this way." Those have just been very direct conversations about what is required or what is encouraged in order to get published in in a given journal. They've all been extremely fruitful in the sense that they point to very specific language and very specific expectations about what is or is not, again, being required as a condition of publishing in that in any given publisher or any given journal. So that has been the focus of most of our conversations with publishers and editors over the past couple of weeks about, about what this TOP Factor means. And again, it has led to several, at least 35 at last count, clarifications on author guidelines about what is expected or what is required. Some of the policies data citation is one that a lot of folks hadn't really considered. It's very uncontroversial. Of course you should cite the data set if you're using it. But there have been a lot of discussions about "Oh, I didn't think of that as being something that's important to use, or stating whether or not it should be in the reference list," which is where citations are often counted.
I'm just going to go through my backlog here.
If I mark yours as done but you disagree with the answer to your question, please just raise your hand again.
"What if the journal suggests an external badge rewarding site, but don't visualize on the paper on the table of contents." That probably wouldn't count. It's it's important for the journal to signal to its readers that that data, for example, are available. And we would have to take a close look at it. But the the underlying rationale for that is to give additional recognition for for an empirical article that is more transparent than its than its standard these days. So it would be hard to see if they're pointing to an external site of awarding badges. If that could satisfy that underlying requirement. I think it would not comply with that policy. But we could, of course, take a closer look. And importantly, if we do see good implementations that are technically in compliance with the way top is framed at this moment, the feedback from those processes are being used to improve the TOP guidelines. And so that would be the work, that will be the focus of future work on clarifying the levels, adding levels if they're justified or giving additional guidance for what counts as best practice in each of these different standards.
"Has it been a manual process to go through the journal instructions to do that analysis?" Yes, it has. We are aware of a couple of drafts of machine learning algorithms or or more brute force attempts to to score these types of policies that probably would be the future. We know that there are about 30,000 journals out there in the wild. I don't think we could get to all of those through a manual process, obviously. So we're looking at more machine-readable ways to do that. It's one of those things that we take a lot of resources to start, but then obviously have a lot of efficiency going on. In the near and medium term, we're focused more on crowdsourcing for folks sending us in their evaluations, us checking them and putting them online. And we think we can get to a fairly decent set of journals that a decent percent of the scientific community would look to, when considering where to publish. We think we can achieve that goal through manual and crowdsourcing efforts. As time goes on, more automations are probably going to be needed. I guess that's all I can say about that. That's my only expectation right now.
Okay. So I'll stay on for another few minutes, just in case any other questions come up, but otherwise class will be dismissed in a moment or two. If you'd like to get in touch, I'll put my contact information up here. [...]
Feel free to get in touch and we can talk more.
Oh, any more questions? All right, thanks, everyone. We'll go ahead and end. So have a good day.