Powering Trusted News with AI: Navigating the Present and Shaping the Future
6:00PM Aug 25, 2023
Speakers: Lila De Kretser, Harris Agha, Mahesh Ramachandra, Jane Barrett (pre-recorded)
Keywords:
ai
journalists
newsroom
reuters
customers
work
archive
news
content
good
journalism
technology
enriching
stories
automations
mahesh
enrichment
human
lila
machine
Good morning.
Oh, hi. Welcome, everybody. Just a quick microphone test. Is it too loud or good? Great. Okay, fantastic. My name is Lila De Kretser. I'm the global breaking news editor at Reuters. I'm really excited about tonight's panel: it's about powering trusted news with AI. I've spoken to several of you, who hopefully have also turned up for this panel, about how many panels at this conference are about AI, and I'm hoping that we can get beyond the headline and talk about what's happening in our newsroom when it comes to AI now, and where we want to go in the future. We're very lucky. Next to me is Harris Agha; he is our senior director for new product. And next to him is Mahesh Ramachandra; he is the head of newsroom technology. Now, all of the technology in the world could not allow my colleague Jane Barrett to be here live on the screen. She is the global editor for media news strategy and she's in London. So I had to take a quick, last-minute emergency video with Jane from my hotel room last night, and she is going to join us via that video. I'll throw to that video straight away, and we can start there.
Hi, Jane, great to speak to you.
Hi, Lila. Sorry that we're having to do this as a pre-record, but hey, technology.
Well, I guess that's the first question. Okay, everyone in this room wants to know: are robots coming for our jobs?
I would joke and say, I wish! But joking apart, I think the robots might come for part of our jobs, but hopefully the parts of our jobs that we don't want to do. So when I think about AI in the newsroom, I tend to think about it in three different buckets. First, think about replace: what are the jobs that we're currently doing as human beings that I wish we didn't have to do, because I just don't think it's a particularly good use of our time, or because a robot could do it better or faster or more efficiently? Then I look at how we can augment our work, so that's really thinking about how we can get machines to do things like deep document analysis, or reversioning of content, things that really help us get more out of the journalism that we're doing and help to augment our journalism. And then the final box is transform, and this is where I think we're only at the very foothills of the journey: what are the changes that are going to come in the way that we interact with our audiences, how we engage our audiences, how we work within the newsroom, how we pull together our news gathering, how we produce things, things that might be true in three to five years' time but that we can't even start to glimpse now. So when people say to me, are robots going to take my job, they might take the bits of your job that you don't want to do, but hopefully that's going to free us up to do more value-added journalism: the journalism that we really want to do for our audiences, and frankly, that we want to do for the good of the news industry and for the good of society.
So let's get onto that, because speaking of the good of society, there have been some monumental screw-ups with AI, and mistakes and errors. How do we make sure that we're acting in an ethical way and not contributing to any misinformation out there?
Yeah, it's a really good question, and I think that's what newsrooms around the world are really grappling with. I speak to lots of Reuters clients every week, and one of the things that we're talking about loads is: how do we keep editorial control? We're still, as I say, in such early days of some of this technology. I mean, Reuters has been doing forms of automation and AI for a decade and more now, particularly on our financial file and our sports file, areas where there's quite a lot of data involved. And even then, we have to keep on tuning the models. We have to keep that editorial control. We have to keep the human in the loop, to use the jargon, to make sure that the machines aren't running amok and to make sure that we are still in control. We have a brand that we need to protect, right? We've been around for 172 years; I don't want a machine suddenly losing our reputation for us. So I think it's really important in any form of journalism, whatever your newsroom is doing, that you work out what your editing process is like: what's your news editing process, your reporting, your production, who has control over each of those bits? We need to have the same thing with AI; we should never give away the keys to the castle to a machine. And so I think what you've seen over these last months is that lots of newsrooms are starting to come out with their own guidelines and guidance, and Reuters has as well. One of the things that we were talking about when we were setting up our guidance, I think that was in April or May this year, was really how what Reuters stands for translates into a world of AI. And we're guided, as you know, Lila, by the Trust Principles, which are the set of values that really hold us and give us the guidelines as to what we should be doing every day. We focus a lot on Trust Principle number two, which talks about our independence, our integrity, and our freedom from bias. So that's the sine qua non; it has to be there. But then we also look at Trust Principle number five, which says that no effort should be spared to adapt, expand and develop the services that we provide. So it's like the two pedals of a bicycle, I always say: we've got to keep on adapting, but we've also got to do it with integrity, independence, and freedom from bias. Some of the things that we've been talking about are, for instance, that a human is always going to be responsible for what goes out, so that responsibility, that accountability, is still within the newsroom. The other thing that we've said is that if ever a model gets to a point where we believe we can take off the handbrake, and it's useful enough and accurate enough to be out there without a human on it, we will make that very, very clear. So we will always be transparent with our clients and our consumers as to what they're reading, what they're viewing, and how something has been produced. For example, I'm sure that Harris and Mahesh will talk a little bit about AVSTA, where it's machine only, and we tell people that that's the case. We do some automated translations where we say it's been done by machine, so you know what you're getting.
And we hope that it's more useful to have this news in your own language, with the caveats that surround machine translation, than not to have it at all. So it's very much: you've got to take it step by step, and you've got to keep on going back and checking, keep on checking, keep adjusting. What works one day doesn't mean it's going to work the next day. So we have to keep on developing that and make sure we have that editorial control.
We have people in the room from all the different parts of a news organization: journalists, product, audience development, sales, and of course technology. Now, we all know that in newsrooms that collaboration can sometimes involve a lot of work. Can you tell us a little bit about how Reuters is working together on AI, and how you bring all the different parts together to work on it?
Yeah, absolutely. I think we're really lucky here at Reuters because we've long been working together across technology, product, sales, business development and editorial. So what we're doing at the moment is we have a cross-functional team of all of those different bits, and we meet on a weekly basis, looking at the ideas that we have: how could AI help this area or that area? We come together and throw around the ideas, and then, critically, we actually put them into production through a sort of proof-of-concept model, so that we can see through the proof of concept what works for each group of us. Is it better for editorial than it is for product? Did technology think it was a success, but editorial thought it was terrible? So we're all in the process together, and I think that's really critical. One of the difficult things right now with AI, particularly with generative AI, is that the interface has become very simple, but actually the technology underneath is very, very complicated. As journalists, we all have to start learning how these LLMs work; we have to educate ourselves. But if we can also work with our technology and product colleagues, we can get to a place where we think: yes, that's something that's possible to do technically for our clients, and editorially we're happy with it. And on some things we will have robust discussions, and sometimes we'll say, you know what, that's just not worth trying, there's an easier way of doing it, we don't need to start doing a proof of concept. So it's about experimenting, experimenting responsibly, but also experimenting together and really leaning into the collaboration across the different bits.
Thank you very much for bearing with us through a pre-recorded video. I assure you, it was far worse for me to see myself up there. So let's talk a little bit about this collaboration that Jane just alluded to. I just want to set the scene for everyone in this room. Sometimes it can be really complicated to understand the Reuters business, but I like to think of it in two parts. One, we're one of the biggest providers of financial information, right, so we deliver news to LSEG. It's a lot of news, and we have a high-volume financial information business. And two, we also have a media business, where most of you hopefully are our customers, and you'll be taking maybe not everything, but text, video, photos. So I want to start with Mahesh, just to talk through how we're using AI with our financial customers in the first place. Mahesh, can you take us through what we're doing at the moment?
So as Jane was saying, Reuters has been working with AI for close to two decades, and all along that time the journalist workforce has only grown at Reuters. So at least thus far, AI has only enriched the work, not the other way around. The areas where we've done AI so far can be broadly grouped into two buckets: AI for efficiency and automation, and AI for discovery, content discovery. In the efficiency and automation space, think of short-form journalism, things more like very short-form alerts or snaps, or very short briefs, versus the long stories. Typically, a lot of the financial information or news that we disseminate for companies or for economic indicators, like the Fed announcements, interest rates, etc., is very short form, so we create a lot of those automations using AI, machine learning and natural language processing, for both company news automations and for economic indicators. Like today, when Jerome Powell's remarks came out, we have a sequence of automations that run and produce news outputs, or that back those up. There are two or three key points here. One is there's always, as Jane said, the human in the loop: every alert is always reviewed. The alerts do go out first, because they have to pass a threshold of accuracy of over 99.5%, but even though they go out, they are immediately reviewed by journalists, so if any errors occur, they are immediately corrected. So the human is always in the loop. And this is not a problem that is solved: we recently made a big investment and acquired a company called PLX in Europe. That's because the way companies disseminate earnings information changes, and the forms of content releases change all the time. So AI is not done and done; it is always changing and we have to continuously adapt.
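A minimal sketch of the publish-then-review flow Mahesh describes: a machine-generated alert goes out only if the model clears a high accuracy bar, and every alert is queued for a journalist to check immediately afterwards. The threshold value, field names and the `publish` helper are illustrative assumptions, not Reuters' actual system.

```python
from dataclasses import dataclass
from queue import Queue

ACCURACY_THRESHOLD = 0.995  # hypothetical stand-in for the "over 99.5%" bar

@dataclass
class Alert:
    headline: str
    confidence: float           # model's confidence in the extracted facts
    machine_generated: bool = True

review_queue: "Queue[Alert]" = Queue()   # journalists pull from this immediately

def publish(alert: Alert) -> None:
    # Placeholder for the wire; a real system would push to downstream feeds.
    print(f"ALERT: {alert.headline} (machine generated)")

def handle_model_output(alert: Alert) -> bool:
    """Publish fast if the model clears the bar, but always send to a human."""
    published = False
    if alert.confidence >= ACCURACY_THRESHOLD:
        publish(alert)                   # speed first: the alert goes out
        published = True
    review_queue.put(alert)              # human in the loop: reviewed either way
    return published

handle_model_output(Alert("Fed holds rates steady", confidence=0.997))
```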
Just to give you heart there: PLX was a group of journalists who used to work for Reuters, and they created a cool new company where they automated a whole bunch of financial and corporate news, and then they sold it back to Reuters. So there you go, there's definitely hope.
Absolutely, and that's a great marriage of journalists and technology. In slightly longer-form journalism for LSEG, we do a lot more machine translation, just as Jane said, but again these are machine translations of more structured news stories, things like earnings stories or economic stories that typically follow a very standard structure, so they are much easier to translate, and when the machines do get things wrong, we know how to correct those parts easily. We also do sports and market summaries, like end-of-day summaries, and sports stories, things like the English Premier League, typically the match coverage. Those things can be very easily structured from structured data, so we actually compose them using AI, and we've done that for close to a decade now. Those are just a couple of examples for longer-form journalism. In terms of discovery, and this goes back to some of the oldest AI, it is all about enriching content. All this content, the greatest content that we produce, if it's not discovered, it doesn't exist. So how do we enrich this content? With all the entities, people, companies, and also classifying it with topics such as: is this about politics, is this about markets, is this about sports, all this kind of metadata and enrichment. We use a lot of AI for doing that, and that is entirely 100% machine generated. And lastly, when you read an article, you're more likely to go and read another article because they're related, so those recommendations are also something we enrich through AI and metadata. That's what we've done thus far for LSEG, and we do several things in the video space, which I'd like Harris to talk about.
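To make the enrichment idea concrete, here is a minimal sketch of tagging a story with entities and coarse topic codes so it can be discovered later. It uses spaCy for named-entity recognition and a toy keyword map for topics; the model name and topic codes are illustrative assumptions, not the actual Reuters enrichment pipeline.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # small English NER model, assumed installed

# Toy topic map; real classification would be a trained model, not keywords.
TOPIC_KEYWORDS = {
    "markets": {"stock", "shares", "interest rates", "earnings"},
    "politics": {"election", "parliament", "president"},
    "sports": {"match", "league", "goal"},
}

def enrich(story_text: str) -> dict:
    """Return discovery metadata: people/places/orgs plus coarse topic codes."""
    doc = nlp(story_text)
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    lowered = story_text.lower()
    topics = [t for t, words in TOPIC_KEYWORDS.items()
              if any(w in lowered for w in words)]
    return {"entities": entities, "topics": topics}

print(enrich("The Fed raised interest rates as Jerome Powell spoke in Washington."))
```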
Harris, over to you.
Yeah, thanks, Mahesh, and thanks, Lila. I'll start off with this: I'm the next step in the assembly line. Once the content is created, it's already enriched by the technology, and now it's ready for distribution and consumption by our media customers across the globe. And I'll start by saying that the most important thing our media customers look to Reuters for, aside from accuracy, is speed, right? And that's where AVSTA comes in. As Mahesh said, we could have just called it AI enrichment, but we love acronyms, so we decided to go with AVSTA, which is essentially Audio Video Speech Transcription and Augmentation. So what does it do to our videos today? As of this moment, you can go into our flagship platform, called Reuters Connect, and it automates subtitles for English ready-to-publish content. We also get time-coded transcriptions in multiple languages, translations of non-English content back to English, frame-accurate shot lists and scene detection, and then, most importantly, entity and public-person recognition through this enrichment. It's software that we've developed in-house, by a team of very capable innovators, data scientists and product people. And it's really exciting, because, to Mahesh's point, if you can't discover this content, it's of no use to any of us. The Reuters Connect platform at any given time has about 14 million assets. It has the entire Reuters archive library, which dates back to as early as 1896, and it has over 100 partners that also syndicate their content to this platform. So this is just the very first start of what we're doing in terms of enriching our videos and our audio. It enables all of our customers to discover it faster, and it improves their efficiency, because unlike a publisher model, where you want the customer to stay and read the next article and the next one after that, my goal is for them to find it as quickly as possible, get it into their edit and get it into their publishing platform, because that's essentially what the Reuters wire service is built for. That's it in a nutshell, but I'd love to talk about the future of what we're doing with this technology. We've rolled it out to videos and photos; we also have a massive photo library, not just from Reuters but again from our partners, for object identification, landmark identification, and topic coding. So if you're looking for a story on Narendra Modi on space, that's one curation that you can get as your search results, versus if you're looking for Narendra Modi on Kashmir. Those are completely different stories related to the same entity, but how do you discover it, and how do you discover it fast? We need that enrichment, and we need to provide that enrichment to our customers so that they can use it in their platforms, but also in our platforms. In terms of the technology itself, somebody asked me that question earlier: will it work on any CMS, or will it work on any platform? The answer is yes, because we're enriching the content agnostic of the platform that it's being used on, and that's the beauty of all the technologies that we use. In terms of how we have developed it, it's taken us some time because, to Jane's point and what Lila and Mahesh have said earlier, there are two principles that we work with.
It has to have a very high threshold of quality. That's a given, and we will not roll something out unless we feel absolutely comfortable. And we're fortunate that we work with journalists, both in our internal newsroom and as our customers, and our customers are some of the smartest people in the world; they will know immediately, and they will be able to tell us if the quality is good or not. The second thing is transparency in everything that we do. As you can see, this is just one example: there's a video, there's a sidebar of who's in the frame, and you can skip through. If out of these five leaders you want to focus just on Narendra Modi and when he's speaking, you can click on that and get to that money shot, getting it into your edit as quickly as possible. Transparency is key: we clearly identify that this is machine generated, that this is something Reuters' human editorial staff did not have oversight of, but we feel good enough about the quality, and because our customers understand this, they can feel confident enough to use it in their edit or decide not to. And most importantly, in my day job I'm looking for feedback from our customer base to understand: did the enrichment work? Is it good enough? Because if it's not, I'll just remove it.
So I want to give you a real-life example of how this is used. I'm your typical traditional newspaper-trained journalist: started in text, was a reporter, all the rest of it, and then, you know, exploded with the digital movement like many of you did. I had no concept of editing video or audio, right? Those are skills I picked up along the way, but that's not my natural skill base. I now produce a daily podcast, and one of the ways that I edit is I use this kind of technology to find a clip of something someone might have said. At the BRICS summit, for instance, we were looking for anything that might have been said on Wagner; it's very easy with this automated transcription for us to look and find those clips to add to our podcast. And that's the kind of change that I think AI can bring to newsrooms, and we can all benefit from it. So Harris, thank you very much, you've made my job easier.
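A minimal sketch of the kind of lookup Lila describes: given time-coded transcript segments for a video, find every segment that mentions a term such as "Wagner" so an editor can jump straight to those clips. The segment structure and sample data are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float      # seconds into the video
    end: float
    speaker: str
    text: str

def find_clips(segments: list[Segment], term: str) -> list[Segment]:
    """Return transcript segments mentioning the term, ready to drop into an edit."""
    term = term.lower()
    return [s for s in segments if term in s.text.lower()]

# Illustrative data, not a real transcript.
transcript = [
    Segment(12.0, 18.5, "Speaker 1", "We discussed trade at the BRICS summit."),
    Segment(95.2, 104.8, "Speaker 2", "On Wagner, our position has not changed."),
]

for clip in find_clips(transcript, "Wagner"):
    print(f"{clip.start:>7.1f}s  {clip.speaker}: {clip.text}")
```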
Big credit to my team. Yeah, it's all about efficiency, right? It doesn't matter whether you're a text journalist, an audio podcast producer, or a video journalist; the enrichment is across every workflow, and that's what we're trying to bring to the table for all of our customers. And our customer base is so diverse: what works in a newsroom at the New York Times is very different from a newsroom in Tokyo. Their workflows are different, their standards are different, their entire operation of the newsroom is very different. So it's really important that we're able to do this with the various technologies we've picked: in some places we do something in-house, and we develop technologies that integrate with other technologies, some of the best in class that are available. We're building a database of personalities in the news, because that's most important to our customers today. And with the different translations, again, it plugs seamlessly into your workflows, which is what our customers value the most.
One of the things is that we had to build this in-house, because a lot of the vanilla, more market-leading face recognition software all recognizes celebrities; it's trained on Hollywood and the media industry, so it's not necessarily trained on...
On Jerome Powell, or political and business newsmakers. Exactly.
On recognizing newsmakers, and this is something that we had to invest in. It was trained by journalists, and that's the key.
So Mahesh, can you talk a little bit about the future state for AI and financial journalism? Just to set the picture here, I think the stat is, and correct me if I'm wrong, that we publish 30,000 pieces of content a day for our financial clients. Mahesh was talking about alerts, etc., so even the volume that we're talking about here is huge. Where can you see AI going in our financial and business journalism?
Yeah. So, as Jane said, generative AI is definitely the most exciting area right now. At the same time, we like to joke in technology that we think we've discovered fire: either we figure out how to use it properly, or it can burn us, right? So I think the opportunity here is very, very big. There are four use cases that we're currently considering, or experiments that we're looking to do responsibly. The first one is that, even though we have one of the world's largest journalist workforces and so many automations, we still don't cover every company on Earth. We don't cover every piece of financial information that we could, because of scale. So scaling and efficiency is still a challenge, even though we've done AI for so many years. One area we are looking at is expanding corporate alerts, or corporate news, for things that don't necessarily follow the typical earnings cycle. Most corporate news items are quarterly, they are cyclical, but then you can have, say, Southwest announcing a reorg or Microsoft announcing layoffs; those don't typically follow an earnings-cycle pattern. So we need to be able to generate news on those very quickly and disseminate it. That's one area where we are looking at AI. The second example is anomaly detection. If a stock is moving today and it's following an anti-pattern versus the rest of the industry, we don't immediately know what is happening or why. Typically the journalists do quite a lot of textual research: has there been an analyst rating change, has some kind of news come out, and so on. For these types of things we are using generative AI to do better monitoring and better synthesis of this information, and then bring it back to the journalists so that they can quickly disseminate it and bring structure out of it. Again, the human is still in the loop to finally make the judgment, but what we are doing is accelerating the pace at which they can make that judgment. The third use case we are experimenting with is explainers. There was a recent article about Argentina taking the Chinese yuan as a reserve currency, a sort of big article about it, but it's also followed by an explainer: why is this happening, what's the consequence, what's in it for Argentina, what's in it for the Chinese yuan? These are explainers we write, but they are time consuming. So we are experimenting with automating these explainers for more of these types of stories: the long-form stories that we write, followed by quick explainers that can be easily shared, sometimes on, say, Instagram.
The Argentina yuan story is really gonna go hard on Instagram.
And lastly, it's in the area of search itself, discovery. When we think of generative AI, we typically think of creating content, but one of the very useful features of GPT or other AI tools is to also understand the meaning of your query. When you write a long-form description and say, go do this, it's able to process what you're asking it to do. The same thing can be used to make better search over content, using whatever you're able to write. That's the bit that excites me the most. So those are four big areas that are key areas of experimentation.
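A minimal sketch of the embedding-style search Mahesh is pointing at: represent the query and each story as vectors and rank by cosine similarity, so a long natural-language description can match stories that don't share its exact keywords. The `embed` function is a placeholder assumption standing in for whatever model produces the vectors.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: in practice a language-model embedding of the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_search(query: str, stories: list[str], top_k: int = 3) -> list[tuple[float, str]]:
    """Rank stories by similarity of meaning to the query, not keyword overlap."""
    q = embed(query)
    scored = [(cosine(q, embed(s)), s) for s in stories]
    return sorted(scored, reverse=True)[:top_k]

stories = [
    "Argentina turns to the Chinese yuan to settle import bills.",
    "Premier League roundup: late goal decides the derby.",
]
for score, story in semantic_search("why is Argentina using China's currency?", stories):
    print(f"{score:+.2f}  {story}")
```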
And just to set the scene there: let's say we're able to have AI help us with an explainer about the yuan and Argentina. That helps you, our media customers; you're not necessarily completely nuanced in the day-to-day movements of currency, which at Reuters we talk about a lot, so that explainer can serve one audience. At the same time, our reporter, who last week broke three scoops on the currency decisions that China made, and that is a big deal, right, is freed up to do reporting that can break news. For us, that's one of the biggest hopes: the more our reporters can be focused on new news gathering, the less they have to do the work of writing stuff that's already known, or monitoring, which, if you're a business that relies on something happening, takes up a lot of our staff's time. Okay, I want to get to questions, but before I do, we're gonna do a very quick round: what excites you the most about the future of AI? I'm gonna start with you, Mahesh. You go first.
Yeah, so I think the opportunities are immense, but the biggest opportunities lie in discovery and search; that, I think, is our biggest opportunity. And the second thing is, we are still telling stories the way we have told them for close to 100 years. Generative AI gives us the ability to tell stories in a much richer way, in how the news is presented: not necessarily just the pure vanilla text form, but in a really multimodal form, with better explainers and linked with videos. Previously we had news as an asset, video as an asset, text as an asset, but we never had the ability to automatically stitch everything together, text and clips and all of this stuff. So the ability to help a journalist produce rich stories with all of this, that excites me the most.
Harris? You know, my answer to this question changes on a daily basis. Frankly, the search and discovery experience and what I can do as a product person on that is immense, right? Getting sentiment analysis and speaker identification, detecting logos and vehicles, music recognition; the efficiencies that it can create are frankly immense. And then as you move towards visual search and voice search and generative AI, using things like ChatGPT to really build something before you even get to your desk in the morning: you look at your planning calendar, you know these are the stories that you're going to be focused on for the next week or two, and you have a module ready to go, starting from the archive. You can determine whether you want just the last year or the last 10 years, up to the second, before you even get to your desk, and have a package ready for you. I think that's really powerful, and that's really exciting.
I was thinking about that with the Prigozhin jet the other day. Prigozhin's jet crashed, a big story for Reuters, as you can imagine, and we were waiting to see what Biden would say. Of course, Biden said what we were hoping he would say, which was, I warned you about this before, right? And it just got me thinking about the ability to very quickly see all of the points in video where Biden, or other people in the Pentagon, had spoken about what might happen to Prigozhin, because the Americans had actually said quite a bit. That's quite exciting, and we were able to do that very quickly.
And have it provided in a timeline, or in any other format, as a mixture of videos and photos and text. And then, as media customers, you do whatever you want with it, but you already have half of your work done before you even get started. That's really the power that I think this technology brings us.
So we're gonna give Jane the last word on what she thinks is going to be the most exciting. If we can play that tape.
Hi, I'm so excited by the fact that AI can really help us in newsrooms to keep going. We've been so financially challenged over the last 10-12 years; we've seen some newsrooms go out of business, and we've seen others have to shrink hugely. With AI, hopefully what we can do is empower our journalists, a) to get rid of the stuff that they don't want to do so they can focus on the journalism, and b) to be able to do new skills without actually having had the experience yet, for example you, Lila, editing a podcast having never done audio editing before, because of AI. So I'm very excited by the potential for AI to really empower newsrooms so that journalists can focus on doing what we do best, which is our journalism.
Okay, so we've got some time for questions. Just a shout-out from Jane: she's watching along on the livestream, and you can send her a question via LinkedIn or Twitter if you need to, or hopefully I can answer it. And we've got Kate in the audience and some others here too. So please hit us with your questions.
Hello, thank you for this. It's really great to see concrete examples; I didn't realize all of this was already happening. How would you recommend a newsroom that's maybe not quite at the same scale start trying to implement these things?
So, first recommendation: become a Reuters customer. That would be the first one. But think about what I've been impressed by. For instance, we mentioned that example, PLX. That was literally two guys; it was not a big investment of a lot of capital. The reason I believe they were successful is that they really cared about what the outcome was in the journalism, right? These guys had spent their days and lives monitoring companies and their disclosures, understanding what news was needed out of those disclosures for their customers, and then figuring out the minutiae of how that information was structured and all the rest of it. They were subject matter experts, and they understood it very well. So they were able to work with the robot to figure out what needed to come out, and I think that's something that matters to small newsrooms. Whatever you know very well, think about the kinds of structures that come with it, and then think about what you can start teaching. In the old days, I think some people at Reuters used to write macros to help them; you can imagine, right, you're writing text and every millisecond counts, so people would come up with ways and cheats and hacks, and I think that's probably the culture. So what is it that you cover? What is it that you know is going to be the same every single time, or predictable? How can you use a robot to help you get that information out faster, so you can be doing something else? Does that make any sense? Yeah. I mean, if you're in video, though, the other thing is to use AI tools like...
Yeah, I'll just say, for any kind of product development, it's identifying what problem you want to solve first, right? What is your clear objective? There are so many tools; even internally at Reuters we sometimes get overwhelmed by everything that is available, and, to some of my sales colleagues and my technology colleagues, of course we want to do everything, but pick the one problem that's going to give you the biggest bang for your buck. I'd start there. And of course, shameless plug: we just acquired a new digital asset management system, we are enriching our content, we're going to potentially enrich our partners' content, and we'll be happy to speak to your newsroom as well. What is your name, sir? I don't think I caught that earlier. I'm with the Enquirer. Thanks.
I'm sorry, sorry that we're looking at you funny. We literally are blinded.
I'll ask my question, then I'll step back. Hi, my name is Marissa Meyers. I am the head of video at CNET, and I have a question for you. You mentioned archives, and that's something we're really challenged with. CNET has been around for 20-plus years; we're one of the few media organizations that has been covering tech for that long, and we have an incredible archive, especially video, that really nobody else has, but it's sitting on tapes. I imagine you guys probably, hopefully, don't have that same problem, but I'm asking: you have an incredible archive, so how did you tackle that? Obviously you're taking the metadata from the content you're getting from now on, but how did you tackle essentially getting your archive into the system so it can be easily searched?
I can take that one. I can say that it is not a solved problem even for us, essentially because it's an economics issue. We have the ability to go back and process, say, all of the Winston Churchill archive footage, but it's a value exercise; not all of those archives are useful at the same level. So this is an area of study for us. A couple of years ago, with the predecessor to AVSTA, what we did was go back and tag the 100-plus years of archive, and what we found was that not all the quality is the same and not all content is created equal. So one of the things that we're looking at right now, as we look back at our history, is what exactly we process with AVSTA. We actually start with five years, ten years, going back, rather than trying all 100 years at the same time, and it's largely driven by what's most useful for customers in the archive. That's the approach we are taking, rather than doing all 100 years at once.
Very little to add there. The digitization is something that we do for ourselves, and we can do it for others, but it really goes back to you, the customer, to identify what archive is the most valuable. Do you just want to preserve and digitize what's on tape, or do you want to eventually monetize and syndicate it through platforms like ours, or perhaps your own at some point? It's an ongoing process; that's what we've done and that's what our customers are doing. It's not one-size-fits-all. You define where you want to start, and then start enriching the metadata as you go along. And it depends on the quality of the archive as well: what is the resolution of the video once it's digitized? Then we'll be able to understand whether we need engine A or engine B, because one engine may not be good enough for that resolution and you need a different, more powerful engine to recognize the face that's being shown in the video.
It sounds like you shouldn't wait 172 years to digitize your archive.
But let's chat after
Sorry that you need to squint. I'm Dan Pacheco, I'm a professor of journalism education at Syracuse University. It's really interesting to hear how you, AP, and others are just going full bore into AI, but in education generally, and don't shoot the messenger, what's happening is that some places are saying you're not allowed to use AI for anything. There are even vendors, there's one called GPTZero, that will flag content for the percentage of AI used in it. So it's sort of a two-part question. One, I'm curious to know, for students who are starting today and will graduate four years from now and go out into the workforce: should they be using AI in their writing and editing or not? And then also, knowing that there are these AI detectors, I can imagine a browser plugin showing up; maybe there will be a backlash to this kind of content at some point, right? I think that's sort of inevitable. Are you worried about that for your content, you know, Reuters being shown as, oh, 80% synthetic or something like that by some plugin?
Yeah, so I think I share all of those concerns. I just want to set the stage here: where we have automations, we are very careful about how we do this, right? There's a large tension between those two Trust Principles: one is the ethics and accuracy, and the other is that you must innovate for the benefit of your customers. And where we see this tension playing out the most, and this is something I think everyone in this room is going to be dealing with, is: what's the value of the information you're giving your customers? We could see a world where the 300-word explainer is no longer king, because, let's be honest, a robot could do it. So how valuable is it really? And in that world, the journalism students who can break news, find news, be there first, I think they're the most valuable, so in some ways the robot isn't really the point. And here's what I'm hopeful about, a real-life example playing out right now: our economics team is on the ground at Jackson Hole. These guys are crazy good, crazy smart, but they're also crazy fast, because if we get out a piece of economic news faster than anyone else, that is the biggest thing we can do for our customers. That's numero uno. Until very recently, they had to be the fastest fingers in the West and do this by hand. Now, with the automation, they check that it's running, but it's there. When it breaks, they lose their minds, because it means they're having to do that work instead of being out where they could break news. So this morning we had a scoop on Christine Lagarde, what she's going to do at the ECB with interest rates; that's massive, that's a huge scoop. But if Bilotta had to sit there typing out the alerts on site, he wouldn't have been out reporting. So I guess that's what gives me hope, really good hope in fact, because when I was a reporter, journalists couldn't write anyway; they were just good reporters. So I'm alright with them not necessarily knowing how to use ChatGPT, or if they do, if they can write prompts, that's going to be great. I think the bigger question is: can they get a story, can they report and bring news into the world? And hopefully we go back to those core principles a bit more as well. Does that answer the question?
And on the second part of the question, about the detectors: for us, we've always put transparency as paramount. Ever since we've done automations, going back decades, our content has been clearly labeled; we are always labeling content as auto-produced. The customers don't need any detectors to tell them that it is auto-produced, because those customers, particularly the financial customers, care about speed and accuracy, and of course they trust the Reuters brand. So if it is auto-produced, it actually carries a piece of metadata saying it was auto-produced, and if it's auto-translated, it contains a disclaimer and all the metadata saying that it was translated by a machine. So for us, that trust and that transparency is key and paramount.
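A minimal sketch of the labeling Mahesh describes: every piece of content carries metadata saying how it was produced, so a disclaimer can be attached automatically when something is auto-produced or machine-translated. Field names and the disclaimer wording are illustrative assumptions, not the actual Reuters metadata schema.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    headline: str
    body: str
    auto_produced: bool = False
    machine_translated: bool = False

def disclaimer(item: ContentItem) -> str | None:
    """Build the transparency notice that travels with the item's metadata."""
    notes = []
    if item.auto_produced:
        notes.append("This item was produced automatically.")
    if item.machine_translated:
        notes.append("This item was translated by a machine.")
    return " ".join(notes) or None

alert = ContentItem("Company X Q2 earnings beat estimates", "...", auto_produced=True)
print(disclaimer(alert))
```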
And I think regurgitation might disappear in the journalism world: if everyone has a robot who can do it, what is the value in everyone doing it? Hopefully your students come out stronger journalists because of that.
Hi, guys, my name is Louise Tierney. I'm the global product manager for human-in-the-loop at Bloomberg, and previously I did something similar at Twitter. I was curious about your mention of subject matter experts, and whether you use in-house journalists to do that or do it externally. And in-house, do you struggle with a clash between people who typically like to work as reporters or journalists having to work with machines, and how do you think about or deal with that kind of issue?
Do you guys want to take that first, and then I'll...
Take the example that Lila talked about, PLX: that happened to be both ex-Reuters and ex-Bloomberg journalists who work closely with technology to essentially prioritize what is most newsworthy. Everything can be automated, but not everything is necessarily newsworthy, so it's about putting that same rigor and quality of judgment even into the automations. It requires very tight participation and listening from the journalists to create that product, that automation product, so that is a key example. A similar example is what Lila was talking about with Palash, who spends a lot of time with the technology team creating these automations for economic news, like the ECB interest rate announcements, because, unlike the Fed announcements, the ECB announcements can be like a soap opera. It requires a certain level of expertise to train, because technologists cannot do that alone. But are all journalists like that? No. So it requires those that have that interest in making the machine better, which will in turn make their jobs better. We are always trying to partner with editorial, and it's an evolving process to find the right candidates to come and partner with us to do that. I would say there's not a single solution, but the partnership is key.
And we're also very fortunate because some of our key editorial leadership also understand what our customers want at the end of the day, so they're able to provide insight into building and training the algorithm. In my experience, it's not for everybody, but once you understand how the end user will consume it, it changes things by a lot.
I think it's also about doing a lot of listening. We would have project groups on automations, for instance, and you'd have someone from product, someone from technology, someone from the newsroom. And when Mahesh starts talking about entity extraction, I'm like, oh my god, but I actually have to listen and understand what that means, to understand that people and places and things are really important. I didn't want to listen at first, but eventually I was like, oh, you just need the verbs; like, what are all the verbs you could use for "jump", right? So I think that kind of collaborative approach is really important, but it's not always easy; there'll be tension back and forth. What I've been really impressed by, though, is the testing. You'll see this moment where they get to a proof of concept and realize they have something that's going to work, and that's when you see those teams really gel. Thank you. You're welcome. Okay, do we have any other questions? Okay, well, we're very excited that you came today. If you have other questions, please feel free to come up to us. We've also got this QR code you can scan to read more. And I'm just gonna make a plug for that podcast that keeps coming up in this discussion: if you haven't already listened, our daily news podcast is on Apple, Spotify, Google, wherever you listen to your podcasts, or on our app. Please have a listen; it's the best 10 minutes you can spend in the morning getting to know your world. All right, thanks very much, guys.