Opening Remarks and Featured Session: The Promise and Peril of Artificial Intelligence
1:00PM Aug 24, 2023
Speakers:
Keywords: ai, newsrooms, journalists, technology, data, create, question, journalism, generative, trust, community, talking, tool, tech companies, conversation, product, opportunity, policies, big, learning
And I would say AI is not a critical thinker at the moment. So yeah, Jarvis, definitely.
Thank you. So, second question: which system is more dangerous, Skynet from The Terminator, or the Matrix?
So this one I did have an opinion on. As someone that works in information integrity, around mis- and disinformation, I say the Matrix. The ability to manipulate reality and basically control humans in that way, to change, in a virtual-reality way, how humans perceive their own existence, I take as the more dangerous threat in this environment, particularly in the line of work that I do.
I would like to not agree with Matt, actually. We disagree often; just earlier today we disagreed. But I have to agree. As a person who makes a business of understanding human behavior and using that to translate, I think the most dangerous thing is to mask human behavior, to deceive and be deceptive, and the Matrix did that. So vote for the Matrix.
Yeah, Paul, I have to thank you; you launched me and my colleague Christian into a 15-minute discussion on this question when I was trying to get his take. Because, to your point, do we want to know when we're about to die or do we not? My vote was that I would like to know, I would like to go into it eyes wide open, so my vote was the first one, but I told him I see the argument for the Matrix.
Okay, and my final rapid-fire question: does anyone remember that movie A.I. with Haley Joel Osment, right? So who would you rather be friends with in terms of an AI: the humanoid from A.I., Star Trek's Data, or Blade Runner's Rachel?
So I chose Data. I think Data has a more human side, for lack of a better term, even though he's AI in his own way, and has an ability to connect, or at least eventually developed some sort of ability to connect. And so I chose Data. He seems like he'd be a great friend, and very helpful.
He's really good at chess. Yeah.
Sure, I agree. I chose Data too, mostly for the cuteness factor, I'm not gonna lie.
Yeah. I also chose Data, because of the rule-following and this idea of having someone who always knows where the moral compass is, the moral center. But I have to say, I would want to convince Rachel, be able to bring Rachel to the other side. So there's a case for the sort of aspirational friend in Rachel.
Well, all jokes aside on these questions: since generative AI came on the scene late last year, there have been a lot of imaginations and a lot of chatter about what AI can and cannot do. First of all, AI is nothing new, right? So what about AI right now, particularly generative AI, has really sparked these widespread discussions across industries? And what are some of the factors driving it to its current prominence right now?
I would probably say that generative AI did what a lot of AI technologies before it were not able to do, in the sense that it was able to show, not tell. All of a sudden everyone had access to that technology; they could play around with it, they could test it for themselves, they could see its capabilities for themselves. So you have people coming back and being like, well, this is the closest I've ever seen to magic. You know, I've tried it for myself, and I know what it's like; no one has to tell me that it can do all of these different things. And so I think that kind of led to its widespread adoption. And just to give you folks an example, last year was my first ONA, and it was wonderful. I remember talking to a lot of people here, and they're like, what are you doing? I'm like, I work in responsible AI. And they were like, that's really cool, but also, why are you here? Not in an aggressive way, but in a very inquisitive way: how does this relate to journalism? And this year, here we are, talking about AI in the opening panel. And so I think all of that change in one year is just anecdotal evidence of how much AI has become ubiquitous in such a short amount of time.
Yeah, I think a few things. One, I see Gen AI, and I agree it's been around for a long time, but Gen AI as we know it today seems to pass the Turing test; it's more indistinguishable from what we can recognize as human-created content. And that is enticing in many ways and risky in other ways. Its accessibility is also intriguing. The low barrier to entry has both its positives and negatives, but its ease of access and ubiquity, and its speed, have been quite intoxicating, and they leave room for the human to strategize, be creative, and accelerate innovation and growth in ways that we hadn't seen before. And then the fluency: if you think about very, very early on, the fluency that was needed to create a simple web page, right, the fluency with brackets and code and JavaScript, all these words and language that we needed to teach people, and it was hidden behind things in order to get the interface to show up. Here, the code, the prompt engineering that we'll perhaps talk about later, is right here, right? It's accessible to us; it feels very plain-language. So those are, I think, three aspects that are at play right now.
Not much to add beyond what they've said, except for the sort of practicality that it has presented for many. I think of the conversation that I had with my wife as I was working on these projects. Her least favorite task throughout the week is meal planning, and I kind of walked her through it: let's talk about what kind of ingredients we have, let's see. And within 10 minutes we had a meal plan, an ingredients list, and a cost list from the local grocery stores. That practicality for her was attractive, around a task that otherwise is seen as frustrating and tedious. So I think folks have seen the opportunity to eliminate tedious tasks and really engage to improve and gain back perhaps their most valuable resource, which is time.
A question about the development of the technology: how fast is it evolving? A couple of years ago, when we thought about AI, we used the word algorithm, right, and then machine learning, and now it's generative. What is the next big thing beyond generative? How fast is this technology actually evolving?
That's a good question. I feel like we're already at the next thing. I mean, if we talk about the next thing: we were looking at web pages, then social, then mobile, then the cloud. We had these many touch points, and now I think we're at this moment of generative AI, but all along we were looking at AI. How fast is a great question. We're at the intersection of how fast technology is developing and how quickly humans are ready for it. And so the question is only partially worth answering, with all due respect, because it doesn't matter unless the culture and society are ready to accept it as well. So I don't know if we're almost ready for what's next after generative AI, because we're still processing where its pieces are in play and what's next for it.
I would probably say that, as far as AI development in the way that we see it in a regular, tangible, impactful way, I don't think we're going to experience as big a leap, similar to generative AI, in the immediate near future. But I think what we build on top of the pre-existing technologies is going to be very interesting. Anecdotally, in the first couple of months after ChatGPT came out, the amount of software that was being built on top of that technology was amazing to see; it was insane how many different websites, how many different products popped up so quickly after it was made available to the public. And I think that trend is only going to grow. For AI itself, there are still a lot of questions that we're puzzling over: how do you create so much data to feed such large algorithms, how do you power such large algorithms, with all of the environmental impact that they have and all of these side effects? That's a question we're still struggling to solve. So I don't necessarily think that we're going to see a major leap in the next year or two, but I do think that the products being created on top of these technologies are going to be wild. That's my take.
So that's encouraging; it means that we have a little bit of time to catch up, right? So when we think about generative AI, who are some of the biggest winners and losers, do you think, whether in journalism or other sectors? Are the technology companies going to continue to win while, you know, sectors like retail are the losers?
I mean, I think that's yet to be determined, right? There's so much to play out as far as the uses and the investment. Certainly, as someone that's at a tech company, in order to be on the winning side, as the question is framed, it's incumbent on us to continue to engage with the communities that are trying this technology. It's why we're here today having these conversations, because we can't meaningfully impact the world in the way that Microsoft attempts to do without engagement, conversations, skepticism, and criticism from the community that it's intended to support. I think we've seen early on that those who have at least begun to use or experiment with or engage with the technology have found value and begun to see the efficiencies that can be brought. We have a partner that's actually here, named Noda, that's working with journalists actively to increase the reach that newsrooms have into their communities, the engagement with content, and things like that. As we continue to see that engagement, it'll flesh itself out. I don't know that we're at a point of determining winners or losers. I think we're at a point of trying to bring everyone together and have that meaningful conversation about timing, pace, and what's appropriate to support the various communities, whether it's journalism or medicine; certainly the conversation in government is an interesting one as well.
Yeah, I was listening to the NPR One app this morning, shout out to the NPR One app, and I was listening to a story about how India had, you know, launched into space, and the woman being interviewed used the pronoun "we": we reached it. I'm probably misquoting to a roomful of journalists, but she used the pronoun "we" to communicate that we've reached this, we've done this. And the question was, when you say "we," are you referring to the country, or are you referring to us as a planet? And the conversation was about, no, this is an achievement for all of us. And I think, to your question, that's where we are right now: the winners and the losers are not tech companies; they are the citizens, the consumers, the journalists. We're not all in this together equally, because clearly people have different advantages and disadvantages across communities, so I'm not saying everyone is equal by any means. However, we are at a moment where the opportunity is so great, and we're standing at this precipice, where we had those touch points, right, from web to social to mobile to the cloud, etc., and this is another precipice. So what opportunity do we have in front of us, and what are we going to do with it? That is the question, in a world of increasing complexity, of economic disparity, with people being laid off from Twitter and not being able to post things to their accounts. How are we going to use this opportunity to unpack and untangle? And I would hope that the opportunity is that we think of it as "we," both in the wins and the losses, and we create this partnership and foster trust.
I agree that a lot of it is still to be seen as far as winners and losers. But if I were to read the tea leaves and say who I think might be winners and losers: I think everyone can have wins from the accessibility and the ability to do things faster and more efficiently, all across the board, so that's a great thing to look forward to. On the flip side, I think it can, if misused, hurt, if we're talking about journalism primarily, a journalist's most important asset, and that's their credibility and the trust they are building with their audience. And so when we talk about the winning and losing aspects of using some of this technology, I think we need to look at all of the promises it has, but also realize that when misused, or when not appropriately governed, the consequences can be equally as intense. And by far, if I were to say who loses the most, or who is in a position to lose the most, I'd probably say the consumer, the individual. Mostly because in a lot of these instances we have passed on the responsibility to discern truth from falsehood, or to use a technology appropriately, or to be able to tell what is a deepfake from what is not, to the individual. So now they not only have to be technologically aware, they have to continually make some of those split-second decisions about what they believe and what they don't, on a fairly regular basis. And so I kind of wonder if we've set ourselves up as individuals, and our communities at large, for success in being able to make some of those decisions.
Yeah. Because I think there is a shift in burden, where it goes from the people who are creating the product to the people who are consuming the product. Matt, you were talking about how it's about collaborating, that there's no determination of winners and losers yet. But I keep wondering, because journalism's collaboration with technology partners is not our first rodeo, right? We've done it with web one and web two, and time and time again. To be frank, many of us feel like we've been burned by technology companies and technology partners who want to collaborate. So what is our reason to collaborate with you and really trust you this time around? Like, we've been on many dates, and, you know, we get flowers, some investment, a fancy dinner, but they don't really last. What will be different this time around?
Yeah, that's not the first time I've heard that complaint, unfortunately, even for me on dates. No, it's a really fair question. And the answer is, one, that there shouldn't be an expectation that you just trust us. What needs to happen, and I'm just speaking for Microsoft, I obviously can't speak for the other tech companies, is engagement, the work that we're trying to do. We released, for those that haven't seen it, a guidebook for local news yesterday that we have at the table and available online. That's our attempt to get out in the community and really understand the challenges, economic, operational, security-wise, right, the threats that journalists are facing, and really engage in a meaningful way with the community, so that the trust isn't just about "I bought you flowers or a nice box of chocolates," but in fact about trying to be out in the environment that you operate in, meet you where you are, and understand the needs, so that the tech can then be responsive to those needs. That information I think you'll see reflected in the guidebook: an understanding, from our experiences with those pilot newsrooms, of the very real challenges, and the possibilities that are outlined there, about not just AI but other technology that can help. But that exchange needs to be constant. It needs to be real. The good news is, as we all know, journalists are naturally skeptical and have an even healthier skepticism of us, and so my message would be: continue to be that skeptical community that challenges us and holds us accountable, but that's also out working in the field with us to try to understand. Another project we're working on that I think shows that is something called the Media Viability Accelerator, a partnership between us, Internews, and USAID for worldwide engagement, to use data and AI analytics to really understand the economics behind newsrooms across the world, and some practices and solutions that can take place to bolster and create more stability, better reach, and better engagement. Because what we all recognize, certainly this room more than anybody, is that the loss of local journalism, of local news across the world, results in less accountability, less responsibility, less community engagement, and less healthy democracies across the world. And so for us, that constant feedback loop is critical. It's why we're here. It's why we're having the breakfast tomorrow, to have that same conversation. But it's right to hold us accountable and not just trust us.
Is it a hot breakfast?
You know, sure, I'll be making omelets.
I would add, you mentioned community engagement. And I think that, on the trust topic, we have audiences, we know people are using, and I'm talking about Microsoft, given that that's what you mentioned and that's where we have expertise, people are using Microsoft products and Microsoft services, and the question is why. Do they trust it, or is that their only choice? And how can we engage with people to understand what is working and what is not working, and start creating that feedback loop that you're talking about, not just at a one-to-one, omelet-making kind of scale, but scaled out, to really understand and bring the feedback into the services and the products, so we can strengthen that at every turn. And I think that's the responsibility we know we have. And that can create this bridge between not only users and consumers and humans and these bigger tech companies, but also maybe between journalists and tech organizations as well.
I think, you know, incremental trust-building is really important. You're saying we've been burned before, and that's understandable. And so I think through these incremental exercises of meeting and convening, in the same way that we do here and elsewhere, and that leading to some of those governance pieces that Matt talked about, but also to real results, I think that builds trust. And I would challenge everyone to think of the opposite of that: if we don't work together, what options do we have? That is one of the things that we really believe in at Partnership on AI, the power of convening, the power of bringing people together to find collective solutions, by bringing tech companies, civil society, journalists, and academics around the table, and we have sometimes difficult but thoughtful conversations around what some of the solutions might look like. And not everyone is going to see eye to eye all the time, but at least we are talking about real steps that we can collectively take. And so that is part and parcel of the work that I and others do at Partnership on AI, and a lot of it ends up being guidance that is co-developed by everyone. Everyone gets an opportunity to see themselves in the work that we do and buy into that process. It often means that we start very small, but that means that what we do does build that trust over time.
And, you know, for Liz and Matt, what does building trust with the journalism community look like for you, for Microsoft?
Yeah, happy to answer that, and I'll use an example from my prior life: the socks I'm wearing say Trusted Info 2020. That was a campaign with election officials that I worked on when I was in government, to provide trusted information from election officials to their communities about the security of elections. If you were to ask me, and I promise I'm answering, if you were to ask me in 2018, when I started working with election officials from the federal level to the state level, whether we could get to the point where co-branding socks together was a real goal, I would have said no; we were literally being accused of trying to hack them, subvert elections, and things like that. The only way that trust was gained was not only meeting them out in the community, traveling to all the states, engaging, but being responsive. I think the feedback will only continue to come if there's a response loop to show, yes, we've implemented it, yes, we've heard you, or no, we can't do that, and here's why, from a design standpoint or otherwise. And so to me, trust is being out and supporting those pilots, supporting projects like the Media Viability Accelerator with the journalism community, being at events like this, and being open and willing and empathetic to the prior experience that you mentioned, right, and the need to be responsive in the feedback loop, not just show up, listen, sponsor a conference. For me, that trust is built on that constant feedback, and on recognizing that the skepticism is real and well founded, and that validating that trust is a constant exercise, not one that you can just do once and walk away from.
And I would just add that the same approaches might be helpful in your work, the mechanisms that help you communicate between the newsroom and your product group. So if there is a divide, and I'm sure there is not, between the newsroom and product, but if there is
Oh, there are never silos in newsrooms.
Never, I know. But just for example, if there might be: something that had worked previously, in projects that we had done, was realizing that we are all focused on the same goal. We're all focused on audience, we all have the same goal of understanding, we're all pushing toward the same thing. So, having the same language, having the same goal. We realized that we all interview: some of us to rustle up product insights and user insights, and some to actually conduct interviews that would go on air or in print. So if you can take that, as Dahlia and I were describing earlier in a conversation, that kind of microcosm, and bring it out to the macrocosm of tech companies or larger companies and journalistic organizations, those same strategies probably apply in creating those bridges and those partnerships.
Yeah, I guess the reason why I bring up trust as being really delicate is because, at many different journalism conferences, and in the broader narrative, there is a very legitimate fear that AI is going to replace journalists, right? So how much of that is actually legitimate? Or is there a more nuanced perspective on how AI is going to impact the day-to-day work that we do?
I think there definitely is a lot of nuance there. In a lot of ways, the AI that we see being used in newsrooms now is being used to support a lot of the mundane work that journalists would rather not do. And so it is, in a lot of ways, playing a support capacity, or should be playing a support capacity, let me say. But if we're being honest with ourselves and we look more at the mid to long term, is it going to change the nature of how journalists do their work? I think so. But I think it is also a vote of confidence for good journalism. The way that we think of how AI can play a role in supporting data journalism, or in-depth research, or long-form journalism, I think that's where the trend is heading. And a lot of the mundane, repetitive work that journalists might not necessarily want to do is what is going to get crowded out by these AI technologies. We already see it in a lot of ways: there's a lot of support for long-form podcasting, a lot of support for long-form YouTube videos or content in general. And so I think that high-quality investigative journalism is going to become that much more worthwhile.
I completely agree. I think the fears, the concerns, are legitimate and need to be discussed; they're real. But there are also these opportunities, particularly in under-resourced, under-supported newsrooms across the globe, not just for the mundane tasks, I think that's right, but to be able to really dig into data and sustainability, items that would allow newsrooms to find additional partners, to increase their reach and their sustainable investment, and to empower journalists to go do what they do. What we believe at Microsoft, and know from talking and engaging with journalists, is that AI can't develop sources. AI can't create that human touch to build out reporting, to have the networks, to connect with communities, and to engage in a very human, empathetic way with the stories that you all produce. But what AI can do is help bring about those stories and create really powerful data visualization that's very targeted within your community, to tell the story you're going to tell in your very human, connected way that AI can't. It can't replace the ability of a good journalist to go out and develop those sources and those leads and tell a really compelling story, but it certainly can help support that in a variety of ways, including, once that story is done, turning it into a newsletter or video content or something for social in a very efficient way, so that the reporter can continue to do what they do really well while that story reaches the community in new, really important ways.
So, when we think about journalists, or sometimes we say the media, they're more than just people who report stories, right? There are video editors, photographers, designers, data analysts. What are some of the new skills you think these different journalists need to have to actually stay relevant? Where is the conversation around that, whether it's inside Microsoft or through the Partnership? How is their job actually fundamentally going to change in the next three years, or even the next 12 months?
Yeah, good question. I think we touched on prompt engineering earlier, or I may have touched on it earlier. We're thinking a lot about that on the design side, in creating visual assets and visual processes, and I think the same is true for content, so we've been looking at that as well. So I think creating new kinds of roles around prompt engineering, and having people who are then prompt engineering directors to a certain degree, so they're in a position of shepherding and guiding as well, and then helping others on the team learn, educating the team, doing as much as one can to help a team understand how to push the prompt engineering processes down into the work. That could mean generative AI image creation and using that throughout the design process, something that we're experimenting with very much; using it to create copy and UX writing that populates your interfaces; and having these kinds of roles be standard roles on the team. This is something that we're looking at on the user experience and design side.
I think there needs to be a baseline level of understanding of how AI works. A lot of the time we think of AI as this one big thing that is fairly similar across the board, but AI in a lot of ways is a catch-all term for a very diverse set of technologies and how they operate. And so if we're asking the consumer, the person on the receiving end, to be able to discern how AI plays a factor in the information they're receiving, I think we can also ask of ourselves and others in this industry to have that understanding of how the technology works. Because once we understand it, once we take away this mysticism around how the technology works, we're able to apply some of those pieces of ethical approaches, governance, all of those, and make decisions about the technology in a way that's a lot better informed, and in a way that empowers journalists to ask the right questions of tech companies and developers, and really interrogate the technology that's being sold to them, beyond just the very obvious opportunities it presents. And that understanding of the technology and how it works is fairly accessible. It is not difficult to understand, at a basic level, how AI operates. But, at least for myself, let me talk about myself: when I was first approaching the field, I was like, oh, but it's technology, and I'm not a computer scientist, I'm not a developer, I have never worked in any of those capacities, so how could I possibly understand something that I can't develop myself? But the more I dug into it, I realized, no, this is conceptually easy to understand; I can draw a lot of parallels to things that I've learned previously, and I can bring my own background and nuance to the field in a way that maybe a computer scientist or someone who was exclusively in the technology field might not. And so I would encourage journalists to do the same.
So, true or false on this statement: while AI won't replace journalists, AI will replace journalists who don't use AI.
With the caveat: in the next five years, maybe not immediately.
At the same time
One, two, three.
True, true, false. Oh,
I want to go with maybe. I mean, look, I think
The reason I'm asking, and yes, I know these are somewhat false questions, is basically that I want to get a sense, for all the decision makers out there and journalists alike, of what we need to double down on right now, for our news organizations and our individual jobs. What is it that we need to double down on with AI so that we can stay relevant?
Experimentation: number one. Number two, experimentation. Number three, experimentation. And I mean, I think that's it. The tools right now are free, the bar is low, and companies all around are open to support and help; everyone is learning. So we're in this mode of experimentation and learning. I think the top things that we can do are to try things, experiment, take the biggest risks one can, because the learning will be great and the failure will be tiny. Those are broad strokes without any business model attached, but that, I think, could have the greatest return. And that takes people who can help educate; that takes lots of things internally. But that would be where I think people should double down.
Yeah, I agree completely. I was hung up on "the journalists that use AI": certainly journalists that experiment with, try, or advance their understanding of AI, that is going to be critical moving forward. And then doubling down on transparency: transparency about your own policies, your own approach. The only way to write about those things and be transparent about them is to try it, to experiment, to understand how it might impact your work and the work of the newsroom, and then to reevaluate: what are the policies we need, and how can we be transparent about them? And then go out and be who you naturally are, which is curious, skeptical, challenging both the technology and the companies, and the way that you think about using it within your own operation, whether that's on the business side, the actual reporting side, or the data side. There are so many opportunities where data can be used to tell a larger story, and then a very targeted story within communities, but you can only discover that by going out and trying the tools and seeing what works within the operation that you have to work within. And so I think doubling down on that transparency, about the use of it, about how you approach it, and then having the policies in place and being very open about them, is going to be really critical when we talk about trust.
And Dahlia, on policies: I know you've been working a lot on this, and part of the Partnership's work is looking at ethical concerns. As journalists experiment with and adopt AI, what do you think some of the ethical considerations are in utilizing it, in terms of bias, accuracy, and maintaining the integrity of news? What have you learned from your work?
I think we've learned quite a bit. Over the last year or so, we've been working with a lot of partners and folks within the journalism community more broadly, whether it was tech companies or journalists or local news organizations or academics or civil society, all of those different approaches to journalism, different actors in that field, to start to tackle that question. And I think where we landed was on the question of procurement: which tools are we choosing to employ in newsrooms? Those were a lot of the questions we were hearing from journalists. And so we started working on a guidebook that supports the responsible procurement, but also adoption and use, of AI in newsrooms, and a lot of that has been powered by the input we've gotten from the community itself. It goes step by step through that entire process, to push newsrooms to ask the right questions as they go, whether it's asking the right questions within your newsroom first, to interrogate why you need the tool that you need, or asking the right questions of the developers, or putting in place the right governance, to Matt's point, and the right policies. And so to answer your question, I think some of the biggest ethical concerns come in when we're talking about how we're communicating to our audiences how we're using all of these various technologies and how they're impacting them. That is one of the biggest considerations newsrooms really need to think about: how are they telling their audiences how they're using some of these technologies? And then, how are they considering the data and information that has gone into developing some of that tech, and how it's impacting the results they're seeing? Because if a lot of the data and information that has gone in is of a particular worldview, or is biased in any way, then a lot of the decisions that that technology makes will reflect those biases. And we see it sometimes; we see the anecdotes, like, oh, X generative AI platform only assumes that white men can be in leadership positions. That one is obvious, right, but then there are a lot more implicit ones: how a tool might help you write a story, or how a tool might tell you to consult certain sources, or how a tool might use the data being provided by your audiences to then suggest further stories to them or prioritize certain stories. And then, if we want to go a level deeper: how are some of these AI tools collecting data about your audiences? We're employing a lot of these tools, we're giving them access to our databases or to our audiences' data. How is that data then being reused by that tool? Are they selling it to someone else? Are they collecting personal information for other uses? I think all of these are real questions that we need to wrestle with.
Do the tech companies themselves ask these questions? And if they do, what are the answers, and are they sharing them? Because as the journalism community is asking the same questions, it would be beneficial for us to know how you guys, and I'm going to say "you guys," the tech companies, the tech partners, are answering those questions.
Yep. You know, my team does two different things: one is working on something called Bing Chat, which is the sort of AI chatbot, and the other is Microsoft Start, the news feed, the worldwide news product. And over the course of its development since last year, we've been trying to publish and put out, on our blog and in every public forum that we can, everything that we've been learning. That's both the way that we've been handling the data and the responsible AI principles and guidance that we've been using and developing, so that other people can use them too, to do user research, and what we've been learning about how to feed that feedback back into the platform. So I think that's one way that we're sharing what we're learning.
...that we've heard about over the past few years. So how should a newsroom be thinking about that if they're the originators of the data set, or they have one that was maybe given to them by a source?
Yeah, sort of, when we put the input in: isn't the generative AI taking all of our collective IP to benefit the generative AI, with very little return back to the journalism community who's doing the inputting?
So yeah, I heard two kind of separate issues there. This is where the policies come in: an approach to understanding the nature of the data and the sensitivity of the data. There are obviously options that we offer, that others offer, to use AI in a secure, private way, right, that is available to reporters and others. Part of our core principles are privacy and security around data, but that's only as good as the policies of the user that has the data, that's inputting the data, and what they're opting into from a security standpoint. And so I think really understanding how sensitive the data is, what policies need to exist around that, and erring on the side of either understanding what your options are, or caution on using or exposing that data, is critical. But that's a practice journalists already have, right, really tight protection of that data. It's being equally sensitive to that here, knowing that you have some secure, private AI options, but asking whether that risk is worth it to you in your policy approach.
So I want to be mindful of time. To wrap up, a quick answer: what is your advice for everyone in this room, what could they do in the next 90 days to actually get better and more familiar with AI, and how they could use it in their day-to-day work? Any tips?
I would say, first of all, start small. Start with, to Liz's point earlier, experimenting; that's really important. Learn more about the technology; I think that's key as well. And to throw it back to earlier in the conversation, start by automating things, ones that are repetitive, things that are not super interesting for journalists to do, before you go to the more marketed technologies out there, the technologies that are more bright and sparkly. I think automating the mundane is a really great place to start, and then work your way out from there. I think also learning from other, maybe larger, newsrooms that have implemented some AI: a lot of them have been fairly transparent about what tools they've been using and how, and what the results have been, so there's no need to reinvent the wheel. There's a lot out there on use cases, on what other newsrooms have done, to learn from.
Yeah, it's a place to start. I agree with starting small. Try to take a moment and imagine the thing that takes you the longest amount of time for the least return, in your personal life or your professional life, whatever just takes you a long time, and then turn to your tool of choice and see if it might be able to assist in unlocking or solving that thing. For me, on a professional note, I use a tool of choice, I won't name any names, to summarize and compare gigantic documents and look at them side by side. That frees up a lot of time to allow me to do more strategic synthesis, and I find that pretty magical; it saves me some time. On a personal note, I was recently in the process of purchasing a car, and I loved being able to ask: will all these three things, this dog, this bicycle, and this size and age of child, fit into these types of vehicles? And get a comparative chart, saving lots and lots of time. So, on a personal or professional note, try to evaluate what it is that you spend a lot of time on that doesn't really bring you a lot of value, and use the tool to investigate that.
Mine: come have breakfast with me tomorrow morning.
And you will be making omelets. Yeah, so basically the wrap-up is to experiment, with caution. So with that, thank you so much for the conversation this morning, and I hope everyone will have a great rest of the day and the conference.