I think we're going to start with Ben. Many of us here are obviously focused on policy here in Washington, we're all tech policy nerds, so let's just start with a broad question about how the White House is thinking about issues of the workforce and AI. Is it leaning positive, negative? What can you tell us about the broad view?
Happy to say a few words on that. As President Biden has said, obviously, AI brings with it enormous potential and promise as well as enormous risks. I think that, with the workforce in particular, that's where you really see a lot of that come through, and AI's created a ton of uncertainty in the workforce. After all, if you look at opinion polling, something like four in 10 Americans say that they simply don't know whether AI will help them or hurt them in the workplace. Another one in four Americans, roughly, express FOBO, which is fear of becoming obsolete. So, given that, obviously there's a lot of uncertainty, there's a lot that we don't know.
But, I will say a little bit about how we're thinking about the benefits and risks, at least at this stage. This obviously was a big part of the executive order on AI that President Biden signed last year, which really stands up for workers and tries to deliver the protections they need to be empowered and to thrive in an economy where AI is more widely deployed. On the benefit side, obviously there are ways that AI can automate certain tasks that are rote or mundane or less interesting to workers, in ways that improve their lives or make them more productive, or allow them to spend their time on other, more engaging parts of their job.
If you look at healthcare, for example, when hospital staff see a patient, there are over a dozen forms they will typically need to fill out on average. There are ways that AI can be used, with appropriate oversight, of course, to bring data from one form to another, from one record to another, and fill in some of that content in ways that allow hospital staff to spend more time actually seeing patients. There are similar examples in the education space. The Department of Education had a really good report out last year that explains some of the ways that AI shows potential, at least, to help teachers with lesson planning and tasks like that, which will free up time for them to spend on one-on-one engagement with students. So those are some of the benefits, of course, but there's a large set of risks as well that we're very focused on.
I think there are really two categories I'd highlight in terms of where the risks break out. The first is risks to the quality of jobs that exist today, and then there's obviously a set of risks around job displacement, or labor disruptions. With the job quality risks, that's really where we are certainly seeing these risks manifest today in certain parts of the economy, kind of all over the country. We're also seeing workers, including through collective bargaining efforts, take important steps to address these risks. We hear a lot about this set of risks in the conversations and listening sessions we hold at the White House with workers, civil society, organized labor, groups like that. From call center workers, for example, we've heard about times when AI or automated prompt generation tools generate inaccurate responses to customers' questions, but those workers are still responsible for giving those responses, which creates worse customer service outcomes for everyone. We hear of similar risks on a wide variety of fronts from automated surveillance tools, including risks to privacy, risks to workers' freedom to organize, and fair pay issues if the amount of time they're working isn't adequately captured. Those are all things in the job quality space where, again, we're seeing these things really happen today, and, again, it's important to be taking steps to address them quickly.
On the displacement side, I'll just say quickly that's a set of risks where I think that, in the near term at least, AI is more likely, in general, to change jobs rather than to displace them entirely. There are, of course, exceptions, and some occupations might be more at risk than others. That's certainly somewhere where we are watching very closely what's going on in the labor market, as well as watching developments with screenwriters, for example, who recently negotiated key protections to prevent artificial intelligence from fully replacing human script writers. Those are all important developments that we're mindful of.
Over a longer time horizon, there's a lot more uncertainty. We've seen in the past how new technologies can really change work and can cause some workers to be displaced, even if they create new jobs in other domains. Again, it's hard to predict ex ante exactly how some of these effects will occur. A lot will depend on the pace at which AI is adopted and deployed throughout the economy.
The last thing I'll say here, I suppose, is that, all that said, it is really critical to be doing the work now to think about what supports workers actually need from the federal government, to make sure that they can share fairly in the increased productivity gains from AI, and that they have the supports they need to thrive, set up in advance, before some of these changes may or may not actually take place.
The AI executive order came out last year. When you were having conversations about the EO, obviously this technology is changing very fast. How do you think about future-proofing government guidance on something as dynamic as generative AI?
That's an excellent question. It's certainly a real challenge that we grapple with as policymakers. I don't think there's one simple answer to that. AI policymaking, as I think about it, is an iterative process. It requires multiple layers of overlapping governance mechanisms. I'll say a couple of things quickly. One is, if you look at the approach the administration has taken to AI, we started by outlining a lot of the key foundational principles, such as through documents like the Blueprint for an AI Bill of Rights and, through the National Institute of Standards and Technology, the AI Risk Management Framework, which really established what I would describe as landmark or evergreen principles. The way that we apply them in the future might change in certain ways, as AI is used and developed and deployed in new ways. But I think it's important to have those really overarching risk management principles going forward. The other thing I'll say, too, is that agencies, including us, as directed by the executive order -- I can speak more about this later, too -- are certainly engaged in really in-depth study of a lot of these issues.
There's a lot more that we have to learn, I think, within the federal government. As you know, the Department of Labor and the Council of Economic Advisers are doing in-depth study of what the potential labor market impacts might be in the future. This is somewhere where I don't want to front-run anything they might find, but we hope to learn more as well.
Thank you. Athina, I think I'll go to you next. We talked last year about some of the AI processes that Pepsi had already put into place, in manufacturing, supply chain management, and other business processes. It's been a while since then. Can you tell us about some of the lessons learned from the AI implementations you've already done at Pepsi, and what you might change or what you might add based on that knowledge?
Sure. I'm not sure everyone understands the footprint of PepsiCo, but just to explain the complexity: we are a $63 billion business in the US -- I'm going to focus on the US -- between our snacks portfolio, which is the Doritos, the Cheetos, the Lay's; our beverages portfolio, which is the Gatorade, the Pepsi; and, of course, breakfast occasions with Quaker. Why am I giving you this complexity? Because we do source to sell. What does source to sell mean? We are one of the biggest agro-companies in the world. We have the agriculture and the farmers. We manufacture our own products, which means everything from the raw materials to the creation of the product, we do it ourselves. We move that product; we are the second-largest logistics provider, after UPS, in the US. Then we provide those products to the retailers with our own people, and we do last mile delivery as well. So, we impact every single household in the US, plus we have almost 130,000 employees just in the US.
Now, this level of complexity creates the following: we have a bigger cloud infrastructure, data infrastructure, and AI infrastructure than any other company in the US. That's not an exaggeration. Compared to the second, which is Walmart, we are 30% bigger when it comes to our data, cloud, and AI infrastructure, because of that complexity in our supply chain.
So, back to your question on AI systems for us. Utilizing AI systems at scale is not a nice to have, it's a must have. Why? Because we need to ensure that we always have true efficiency when it comes to our packaging lines. We need to ensure that, when it comes to the health and safety of our people in our facilities, we drive zero accidents, and that there is the right level of quality and auditing in everything that we do. We need to ensure that our truck drivers -- that is one of our biggest unions, and one of the biggest unions in the US -- use AI technology responsibly as well, for asset maintenance, training, and upskilling. Lastly, our salespeople are able to use AI for guided selling, for both the consumers and, of course, everything else they do.
We have seen a couple of things so far in the implementation, which we are in, I would say, at scale, in the third year. One is improved productivity and efficiency, from 10 to 70%; it varies depending on the area. We haven't laid off anyone -- I want to make sure this is very clear. For us, we're not using AI to optimize the workforce in terms of numbers. It is to drive additional productivity and efficiency out of the existing workforce. That's why we have done extensive upskilling. We call it the Digital Academy, where a number of employees have been through waves of data, AI, and digital training, across the whole spectrum of technology, in the practical applications that we have done.
Lastly, I would say the biggest risk, for companies like us, is not on the frontline. If you know companies in the manufacturing space, you'd assume the people impacted are the frontline, i.e., the person who sells or the person who manufactures. Actually, it is not that. It's financial planners. Do you need to have as many financial planners when you have automated AI forecasts? Do you need to have so many HR people if you can do employee experience in a much more digitalized way? So, for companies like us, the biggest risk is in the knowledge worker base, not in the frontline base. On the frontline, the NPS score and the adoption of AI systems have been off the charts. That's why it was critical, in the use cases we visited with Vivian and Aspen, to hear from the frontline on how AI is making their jobs better and more efficient, and creating a better future for their families.
Vivian, can you tell us a little bit about that project, and what you learned?
Yeah, it was fascinating. As a recovering journalist, I go into everything with a great deal of skepticism. But I came out of this project as sort of a true believer in the benefit -- if done right -- to frontline workers of AI adoption. So, a couple of things that we discovered. First of all...
Tell us when you did this.
Sorry?
When did this take place?
Oh, in the past year?
Okay.
Look, we're still early days on this. One of the things to keep in mind is not everybody is PepsiCo; the vast majority of manufacturers are small players, and they are really not far along at all, to put it mildly, in terms of their AI and automation adoption. There are also very few guidelines for manufacturers or companies, in terms of resources to help decision makers and others. But here's what we know. We know that an AI deployment without worker involvement, it's not just bad for the workers...
Of course, it's bad for the workers. It's bad for the companies, because it lowers job quality, reduces retention, and reduces worker skills. The need for companies to do this is not about some kind of charity or kindness towards humans; it actually also has a bottom line benefit. Our guidelines that we put out, which we're happy to share, focus on reducing the risks of automation systems, upskilling workers, and worker retention. And, interestingly, Ben talked a little bit about collective bargaining; this is an area where employers and unions can be aligned, if done right. If done right. Of course, there are ways to do this wrong in terms of retention. It's great that PepsiCo is leading in this space, and we need more companies to do this, because it will influence the entire supply chain. We need employers to incorporate worker voices for job quality in AI adoption. It's about job quality; it can't just be from the manufacturer's point of view, and the focus cannot just be on numbers and productivity. That will fail -- it will fail for the company, and it will certainly fail for the frontline workers. It's got to be about job quality, which will improve systems all around. Those are sort of the top lines.
So, in your line of work -- she just said not everyone is Pepsi -- can you tell us a little bit about what you're seeing in your research, and the things that you worry about?
Yeah, absolutely, and thank you. It's great to hear that Pepsi is doing that. And I think that's a good point to make a distinction between labor savings versus efficiency, because there's a lot of supposed labor savings that really isn't labor savings; it's just shifting the labor onto workers, and it's done invisibly. A clear example of this -- I'll take the federal government. At the same time that some medical modernization was happening with digitizing medical records, there was a provision in the 21st Century Cures Act which required that in-home supportive care workers participate in an electronic visit verification system, which was basically going to document, down to the minute or to 15-minute increments, how they did their work. This would determine everything from how much time they had to provide services to a client to how much time clients were allocated. The problem was, the technology would say you have 15 minutes to bathe someone. That sounds great; most of us maybe take 15 minutes to bathe. The problem is you're bathing someone who is disabled, not fully mobile, or has dementia. They are not going to as easily get into the tub or get out of the tub, or even agree to it. So what happened was, there was no opportunity for these workers to actually say, I need half an hour, or I need longer. So they would absorb that, because they have relationships with their clients, the disabled or senior people they care for, and they would do that out of their own pocket and not get paid. So I think there's a clear difference between how we look at data and what the impact is. Now, had that been coupled with technology that helped someone get a person who is not mobile into the bathtub and out of the bathtub, that would have been great. So I think we need to see that sometimes the costs of doing this work, and sometimes the risks as well, are also shifted. Another example is the conversation over Uber and Lyft and how they're classifying workers as independent contractors and not employees. That's another way of shifting the costs of doing business: unemployment, health care, retirement, taxes, everything is shifted onto the employees. So I think there are different ways to look at that situation. And when we talked about the Screen Actors Guild, part of their concern was that the industry was changing broadly; the residuals they were receiving from a lot of their work were not what they used to be. So the shifts may be in the tasks, but maybe in the industry as well. So, to go back to your question for Ben about where government can really provide guidance and assistance: think more broadly about not just the reskilling, but how an entire industry is changing, and the business model for that, and make sure that worker protections are part of it. Because some of this is not new -- misclassification of workers is not new, it's been happening for a very long time. How do we strengthen those protections? Because technology can often offer a rhetorical argument for changing work in ways that may not actually materialize on the ground.
Is there anything specific you would point to that you'd love to see the government doing right now to set some of those protections in order?
Oh, that's a great question. One of the things that California has started to adopt, and it still needs improvement, is that California has a consumer privacy act. But until recently, it didn't apply to workers; it only applied to consumers. And it takes a very consumer perspective: you are a consumer, you have the right and the choice to not use something because you don't like it, you can go somewhere else. When you're a worker, that really isn't the case. If your boss comes to you and says you must use this app for tracking the medical equipment that you're responsible for in a hospital, you can't say no. And there's a legitimate reason for that, right? You don't want contamination of equipment. But you can't say no, and that equipment is attached to you, so it is tracking you as well. And it could require a fingerprint, iris scan, or face scan in order to access that technology. So there is no real space there to say yes or no and have control over that data. And it's also collective data that matters. It's not what you do alone that determines your workflow; it's what your entire staff does, all of the people. So collective data rights, I think, is one place we can start thinking about in a national privacy law.
What a concept -- a national privacy law. Maybe we can work on that; it has really worked out so far. But yeah. Athina or Vivian, do you have any response to what she just said, based on your experiences researching and on the ground at Pepsi?
Yeah. For our workers, as Vivian said, they have participated from the beginning in the design of the AI applications and the systems, so it has been a much more collaborative approach. And I'll give you, I would say, the good and the bad throughout the process -- let's make sure we're talking about the bad, not just the good. On the good side, people come along on the journey, and therefore, from the moment of kickoff, they are happy with the systems and will use the systems, right? On the other hand, it takes more time, because they have to be part of the testing experience, not just the design experience. And I'll give one example, on our drivers. The current landscape on AI when it comes to driving is very different in the US compared to the rest of the world. We have drivers in Europe who are more than happy to have cameras looking both ways -- forward-facing cameras and inward-facing cameras -- for safety reasons, etc. Our unions in the US said no: we just want cameras facing outside. Absolutely fine for us; we respect that. We still manage to improve safety when it comes to the periphery of the vehicle, and to do the predictive asset maintenance. And if they don't want to use the other cameras, we make sure that the conditions are tracked within the truck in a very effective way, but we don't capture the biometrics. That's a choice; we are not forcing the employees, in this case, into one single approach everywhere we are, so they can opt in or opt out. But what we are trying to do, in the lack of rules and guidelines and regulation, is create those with our employees. And this is what our ask is: can we have some more, I would say, universal metrics, guidelines, and frameworks, and eventually regulation, so that we are able to move as an industry, and move faster.
Did you have anything to add?
Yeah, just that there are policy recommendations that could help, including establishing a regulatory framework to incentivize companies to do this work. And the federal government has a role to play in terms of procurement; procurement processes are very powerful in terms of incentivizing those behaviors.
Are you listening, Ben? So, Athina, so far, what have you seen out of the government -- whether it's the executive order, a couple of proposals from Congress (obviously, nothing has passed yet), or guidance from different agencies -- that has been useful and helped Pepsi with its AI implementation? And what are you waiting on that would be absolutely critical?
Yes. The AI guidance is very positive; it's a step in the right direction for us. We were one of the companies that provided input to NIST -- when they had, of course, issued the RFI, we submitted our own AI framework and actually contributed, I'd say, to both the guidelines and the framework. And we see that there is a decent bipartisan appetite to find a way -- I wouldn't necessarily say to regulate AI, but to provide at least some approach on how to use AI responsibly. We would like more of that, and eventually that may lead to the creation of regulation in a couple of years, who knows -- but done in a much more deliberate way. What do I mean by that? You cannot regulate AI without having a data privacy approach. You cannot say, oh, I want to regulate AI systems, when you don't have the equivalent of the GDPR that exists in Europe. So there needs to be a roadmap of things that need to be established that carries across governments, and standards that don't change with every election -- that would be super painful for the industries. So we would like consistency: a roadmap that doesn't do only the sexy part now, which is AI, but goes back to the principles of how we started, which is data privacy and data standards.
And Ben, before the AI conversation -- do you agree that baseline privacy and other digital protections would be a useful baseline before we start talking about specific AI regulations? And can the White House push Congress to do something on that?
Well, the President has been very clear in calling on Congress to act with bipartisan privacy legislation; it's long past time, in our view, that Congress take the steps to do that. Meanwhile, of course, President Biden is doing everything he can in his administration to use existing authorities to protect Americans' privacy, and we've seen agencies take a number of steps to do that. But to your question about the legislative side, I think the President has been clear on that point. As I've said, I will also add that AI really does bring a lot of these issues with privacy front and center, I think for a couple of reasons. On the one hand, AI enables a lot of data collection, and it enables the ability to draw inferences from new data in ways that make the consequences of poor privacy protections even more significant, based on the capabilities that it gives us. But it also expands the incentives, I think, for companies to engage in data collection in the first place, given what companies can now do with more and more data. And it's an issue of not just individual data, but collective data across large groups, as you said earlier. So certainly, privacy needs to be front and center of this conversation when we think about AI as well.
And what do you think, just as far as what else the government can be doing to be helpful at this stage?
I think one of the things the government has started to do is look at worker surveillance, because it's the worker surveillance that leads to data collection. Some of that worker surveillance is just a camera; it's watching everything. It doesn't necessarily collect only one metric; it collects many metrics, and those can be repurposed in many ways. And so the executive order -- or rather, the legal memo -- basically said, we need to make sure that this is not infringing on workers' rights to take collective action. And this goes back to, and supports, the idea of making sure workers are part of the conversation early on. A lot of people are talking about how workers have to participate in the deployment of technology. But we all know a frontline worker's opinion is not nearly as important as the CEO's. So how is the collective voice heard? What could amount to a one-on-one suggestion could be idiosyncratically applied -- I'll apply it to you, but not everyone -- or it will be a fix that's not long term. So I think being able to put those protections in place, so there's a little more of an even playing field in these conversations, is important. And, just to talk about generative AI a little bit: writers and artists are very concerned because their work has already been scraped. So there are a lot of conversations happening around trademark, which is one step, but it's not the only one. I just had a talk last week with an artist, a rapper, and a model, and they were really concerned about the future of the industry, because some younger artists were thinking, does it even make sense for me to enter this occupation if my work can just be scraped wholesale? So there have to be some regulations on the companies that are collecting this data, sometimes completely illegally, even using the artist's name illegally and committing fraud.
I'm sure we're going to have a whole other panel on copyright and patents or trademarks as well. Yeah, absolutely. I do want to leave some time for audience questions, but before I do that, I'd like everyone to tell me one thing they have found useful or hopeful about AI, and something that keeps them up at night. And we can start with...
I think what is hopeful is that AI does not have to be instead of; it can be in addition to. And I think that applies to so many things we're talking about here, at least specifically the work we're talking about. For frontline workers, it can enhance worker job satisfaction and retention. And that is true not just for frontline workers, but for journalists, for artists, for those in any walk of life, truly. The thing that terrifies me about AI -- well, frankly, this has got nothing to do with labor at all, but that's okay -- is that a big part of the work we do, separate from this project, is related to the impact of AI on information integrity, particularly in this year of just massive elections. And there's a lot of risk there. And I'm not sure that tech companies, or governments, or campaigns, or election officials, or even the media are fully prepared for it.
Then I'll talk on behalf of our employees, because some of them said, you have to voice our opinion. What I'm hopeful about is that it allows for career mobility. Before, if you were a merchandiser, that was your whereabouts; if you were a warehouse manager, you were deep in logistics. Now you can go from sales to supply chain, from supply chain to finance. Why? Because technology cuts down the functional silos that every organization has, and, of course, everyone owns the outcome. So suddenly we become an outcome-driven economy, an outcome-driven company, and not a company that thinks within functions. That gives a mobility in business that companies haven't had for hundreds of years.
Well, in terms of things that are particularly exciting, I flagged a few of those before. Certainly, in some industries, such as health care and education, as long as we can get the risk mitigations right as a prerequisite, I think there are really exciting potential upside use cases that do excite me. AI's applications to science more broadly, in terms of expanding the boundaries of what science can do, are also really thrilling to think about. We already see AI being used, for example, to predict protein structures in ways that could lead to new possibilities in drug development, which would really be a collective benefit for humanity if we can, again, contingent upon mitigating the risks, capture those benefits appropriately. In terms of things that keep me up at night -- this isn't necessarily a terrifying thing, but it shows how daunting the task really is -- when I think about AI, it's such a cross-cutting area of policy right now. It really touches on everything we do in government, and it involves almost every federal agency, if not all federal agencies, in terms of thinking about what the right approach to AI is. There's a reason the executive order last year is almost 20,000 words. It's so long because there's frankly so much to do, both in government and beyond government. And a lot of that obviously adds up to a pretty daunting set of tasks. There's a great deal of work to be done, which we've signed up to do. I'm confident that we can do it, but certainly, it makes you appreciate the enormity of the challenge.
I will start with the nightmare first, because I want to end on a positive note. The other day, I was having a conversation, and someone was talking about how there's a movement in Iran for women to remove the hijab, because it's become a symbol of control by very conservative groups. But every time she asked an AI image generator -- I think it was Midjourney -- to create an image of an Iranian woman as a freedom fighter, it kept putting the hijab back on, regardless of what she did. So I think there is this potential that AI could consolidate a particular image of people, even though we as a society are trying to evolve. That was a little concerning, because this is a national movement. But the positive is that she was also seeing people creating new images and flooding the Internet with more, so there's definitely a space to change that. And during the pandemic, even the idea of a retail worker or delivery worker became an essential worker, and they were able to use that moment to really change who they were. Technology was a big part of that; it was a communications tool that allowed them to really change how they identified themselves. So that's positive.
Absolutely. And with that, I'm going to open it up to folks in the audience who might want to talk to our esteemed panelists. Any questions? Good.
My name is Derek Wyatt; I used to be an MP in England. If there was an AI Council somewhere in the world, would America join it?
It's a little hard to say without any more specifics. What I can say, though, separately from the context of a hypothetical AI Council, is that the administration very clearly takes seriously the need for engagement and cooperation with allies and partners and other countries abroad. That certainly is a key priority in the executive order as well, which really, I think, sets up key next steps for the administration to do that sort of work in collaboration with others in key areas related to AI. So, again, I won't speak to exactly what future structures a hypothetical AI Council might involve or entail. But to the spirit behind your question: absolutely, it is a priority to be engaged and collaborating closely with other countries in that respect.
I'm Kathy Klein. Thank you so much for this panel. I was part of the group that founded ICANN, the Internet Corporation for Assigned Names and Numbers, and its multistakeholder model. So let me draw an analogy and then ask a question. When the GDPR was adopted, we wanted to know what the implications would be for the global Internet domain system, and we formed a group with US attorneys, European and international attorneys, and others. And we mandated that everybody study a little bit about the GDPR before they walked in. It's not easy to legislate. So now, when we're talking about employees and AI decisions, what type of background or training should CEOs, advisors, attorneys, and data scientists -- and the workers -- have, so that they can walk into the room knowledgeable?
Thank you. Maybe I'll take that first. Firstly, let me say my role was established because it didn't exist in the organization; let's start with that. And I report directly to the Chairman and CEO, partly because we wanted to ensure that everything we do, from strategy to implementation, falls under one organization. So, one is you have to start from the exco position: you need someone who wakes up every day and ensures that whatever we do strategically looks at all the implications of technology for the business and the strategy. Second, my two greatest partners in the company are HR and legal. With our chief people officer, we partner both in terms of upskilling and reskilling -- we have the academy we work on with HR -- and with legal we have together established the responsible business framework for the company, which is being assessed by the board as it relates to AI twice a year, and as it relates to other areas, like cyber, every quarter. So we have elevated the criticality of AI systems to the board. The last part for me, which is the third pillar of the approach: we have employee forums. I created what we call a digital ambassador community. These are people representing all the different aspects of our business: you will find everyone from an HR practitioner, to someone who is a truck driver in Oklahoma, to someone who is a salesperson. And they act like our governing council. We meet every two months; they provide feedback on the design of the systems, the testing of the systems, and the adoption of the systems, and we use them as change ambassadors -- that's why we call them digital ambassadors of the organization. And every July, they co-present with me to the board. So when we talk about the progress of digital and AI, they do that with me in front of the board. These are the activities that we have seen work for a company as complex and as global as PepsiCo, where everyone is part of the journey and contributes to the development and the sustainment.
I'll only add -- and this was important in the work that we at Aspen did specifically with PepsiCo, but we are focused on this work, as are many others -- the responsibility to upskill the workforce, the future workforce, the potential workforce, for that very reason. And it begins with education, even in secondary school. It requires a multi-stakeholder approach, including the corporations and companies who are going to be relying on that future workforce to be knowledgeable, and for the future workforce particularly, to make sure that they are prepared for the jobs of the future -- but really, increasingly, the jobs of the present. With a focus, I would add, on making sure that we do this in a way that levels the playing field, particularly for underrepresented communities. There are a bunch of different efforts around this that are either in motion or about to be started.
Go ahead.
All right. Joel, previously with Senator Rounds' office. I was curious -- in the AI Insight Forums that the Senate did, I think it was Daron Acemoglu who talked about the dramatic shifts that we're going to see in the employment and labor markets. This question is especially for Ben: do you think that it would be more optimal to create more free choice for people in the labor market, instead of going after more regulations for employers, by decoupling things like health care and making it easier for people to switch jobs, thereby letting them choose among companies, rather than trying to dictate what companies have to do?
Yeah, I'm happy to start, and it's a great question. I'm certainly familiar with Daron's writing and commentary on this, which is always extremely insightful. What I will say is, again, the federal government, and in particular places like the Department of Labor and the Council of Economic Advisers, really are doing the work to study in depth a lot of questions related to what you're saying, in terms of what exactly the right support systems might be. That includes looking both at support systems that we've had in the past, and also thinking about new possibilities, right? There's a lot to consider. So, again, I certainly don't want to front-run a lot of the work and conclusions of the folks who are really looking at this in depth on a day-to-day basis. But I will say that the President is very eager to work with Congress on these issues broadly, and he's made that intention very clear. And once we've completed the process of really doing that in-depth study, I think these are questions we'll be able to say a lot more about. There's a lot of possibilities to consider.
Anyone else?
Hi, I'm with the National Democratic Institute. On open source technology, what is the United States' approach? Is the point to be agnostic? I know that you gave a little bit of guidance.
Yeah, happy to start; other panelists might have views as well. There are a lot of questions around open source, and the executive order directs the Department of Commerce to release a request for information on these questions. And I will say that public input on this question -- from academia, from industry, civil society, labor, all sorts of stakeholders -- is going to be really critical, I think, for this and for other areas where the executive order directs public input to be solicited. So, I apologize, I don't have too much more of a substantive answer to give at this time. But I do want to say that we recognize, obviously, there are a lot of questions around this particular set of issues, and it's one that we are thinking about very closely and working to get as much public input as possible to guide us on these questions.
Do we have any other questions? Go ahead, Joel.
You got three minutes.
One of the things that one of the panelists mentioned was this international view, talking about women in Iran and the hijab, and how AI tools were handling that. This is mostly focused on PepsiCo, but I'd be curious if anyone else has a perspective on how US policymakers should think about it, or if there is a possibility of creating kind of our own domestic Brussels effect when we think about AI regulation, specifically when it comes to worker dignity, and how companies are really going to deal with integrating these very different worldviews. I'd be very curious for PepsiCo -- you're a global company, you're operating in almost every market, including Russia and Ukraine, right? So I'd be very curious how you incorporate those very different moral views across your workforces in these different regions. Yeah, great question, Joel.
No, a wonderful question. And we are operating in every market in the world, yes, including, obviously, markets that are in current conflict, beyond the ones you just referred to. That's why, for us, policies matter; let's start with that. So we have standards, of course, that we as a company want to abide by -- I mentioned some of the frameworks. But data privacy laws are different if you are in Pakistan versus India, and very different if you are in Saudi versus Iraq -- I'm just using markets where we have operations. So, one is, we have local data privacy teams, data teams, and technology teams that make sure that we adhere to those standards, and we have legal teams that do that. But at the same time, what we want to ensure is that, for different types of legislation or regulation that go beyond just data -- to your point, areas of dignity and human decency -- we also abide by those. I will give you Saudi as an example. We are one of the few employers that have a unit which is only females, because we want them to feel very empowered, right? And we have full factories -- we have one facility that is run by females -- because we also want them to feel that they can be plant managers, and not just secretaries, just to be very clear. And we have locations like Vietnam, where data privacy is super important but there is no AI regulation, so we have given them our standards on AI and privacy and said, why don't you stress test that, based on the population and based on what you want to see in terms of application. So there is what we call freedom within a framework. We have the standards, we have the policies, and we've said, okay, in utilization in local markets, this is what it is, so we have to abide; and then this is where we allow every market to play within their thresholds. And for us, there is one element that we always adhere to. In people's minds, we are an American company -- we are a global company, but we are an American company. As with every other American company, we can be boycotted, we can be attacked, cyberattacked, etc. But the one thing that we don't compromise is our employees. So the underlying principle is we need to protect our employees, we need to protect the privacy of our employees, irrespective of the situation or whatever happens in the market.
Thank you, Athina, and thank you to all of our panelists. We had a great conversation today, and we've got to wrap up. So thanks, everyone, for coming. We'll see you around.