Thanks for having me. I want to say something before you go.
Go ahead.
... even though you're moderating. It's a real pleasure to be here. Normally you wouldn't have an EEOC commissioner here; you don't know my world that well, and I don't know your world. But look, an innovative technology is bringing me here to discuss an area that's impacting all of us. So first, I really appreciate having someone other than an FTC or FCC commissioner that you're used to hearing from, so thank you at the outset.
It's our pleasure. Also, really quickly, 20 years of State of the Net, I think Tim Lordan deserves a big round of applause. 20 years. Workday has sponsored this conference every year since we stood up our DC office, for the last five years, and it's always a great time.
So, here we have diverse perspectives. I want to start with an easy one, Commissioner Sonderling, which is for all of the budding EEOC commissioners in the audience now, we heard a little bit about your background, but how does one wake up one day as a commissioner at the EEOC?
There's something wrong with you first, if you want to take one of these jobs. But, you know, I was a labor and employment lawyer, I started government service at the Department of Labor in 2017, and now I'm here at the EEOC, so this is what I've done. It gives me a very unique perspective on a unique area of law, which impacts every single industry. As long as you have employees, we're involved in that. Our agency deals with some of the most fundamental civil rights we have in this country, which is the ability to enter and thrive in the workforce and not be discriminated against based upon your protected characteristics such as race, sex, national origin, etc. As we'll discuss, technology is now having such an impact on workers' ability to enter and thrive in the workforce and provide for their families, because at some point, everyone is going to be judged by an algorithm when it comes to the workplace.
That's a great segue to an opening question, and you sort of mentioned it in your initial remarks, which is that the EEOC is not...
You know, we tend to think about Commerce. There are a bunch of agencies we think of when we think of federal involvement in tech issues, but the EEOC doesn't spring to mind. And yet, you've done a really nice job of positioning the EEOC and sharing your thoughts. Can you talk a little bit about why this issue, particularly the AI issue, is so front and center for you?
Because we really have to. If you think about the EEOC, we really kind of go in the direction of where employment trends are going. So, when the MeToo movement happened, we had to be out there front and center and help employers deal with firing CEOs, firing board members, and so on. Then COVID happened and we were dealing with workplace accommodations and vaccines, and then the women's soccer team case came about, and we were dealing with pay equity. So, there's always going to be something that drives the agency with employment trends.
But, from when I got there, I said, well, what is the future? How do we stop for a minute and say, how do we prevent those large-scale issues that are going to impact all industries, all employees, and all employers? That's really when I started hearing about artificial intelligence in the workplace, and that's when I thought to myself, well, you know, I don't see any robots out there yet actually replacing workers. So much of the talk was about automation, about workers actually being replaced by physical robots. Okay, well, that only impacts certain industries like manufacturing, or retail, or logistics.
And, taking a step back, that's when I learned that AI had been involved in the job decision-making process for years, and employers, large organizations and small, have been using some sort of artificial intelligence, machine learning, or natural language processing through the entire employee lifecycle: from the very beginning, to make a job description, to advertisements, to reviewing resumes, to actually conducting the interview, to determining who's going to get a job offer, to determining that person's salary, and then, when you get to be an employee, doing your performance reviews, seeing if you should get promoted or demoted, and there's even some software out there that will tell you you're getting fired. So, when I dove into it, I said, wow, this is already out there. It's already happening, and this is the future, so we need to get involved now, because there are a lot of benefits in the software.
As you all know, the biggest issue in human resources happens to be the human. That's the reason my agency exists. That's why we bring all our cases, and that's why we get hundreds of thousands of employment discrimination inquiries every single year. So, I think the difference about using AI in HR for the workforce, compared to other uses that you've been hearing about, making logistics faster, making widgets faster, doing doc review for accountants and lawyers, is that here you're dealing again with civil rights, and there just needs to be extra care and caution that goes into it, based upon long-standing principles that I found employers are generally following across the board in HR. But when it came to AI, because of the novelty of the software, because of all these cool new things it can do, the instinct was to throw all that out the window and potentially just rely on these algorithms.
I've said consistently since I've been involved, if carefully used and properly designed, AI can actually help eliminate bias in the workforce. But, all you have to do is flip that and say, if it's not carefully used or it's improperly designed, it can potentially scale discrimination to the likes we've never seen before, and there need to be guardrails and guidance for each use of it.
I believe my agency, in what we've been doing, is trying to get involved in every step, and saying, here, if you want to use this innovative software to make employment decisions, or to assist you with employment decisions, you can, it's a free country, go ahead. But, for each potential use of it, whether it's in hiring, whether it's in salary, whether it's performance reviews, here are the potential ways it can eliminate bias, and here are the potential harms, and you have to look at that. It's such an individualized use, and I really think that's where we've been able to lead and talk about those specific issues, so employers can integrate this, and people can innovate.
I want to double down on the actions the EEOC has taken during your time there. I think you've been really clear about how this issue has shown up on your roadmap, but given that the EEOC is an agency that's rooted in the March on Washington, and the federal enforcer of workplace anti-discrimination law, how do you see the EEOC navigating these issues and adapting to the age of AI? Top of mind, certainly, is the guidance you mentioned, which we heard a little bit about in the previous panel, and also the release that went out with sort of the alphabet soup of agencies, including the EEOC. I'd love it if you could share how you see those unfolding, and what the future might hold.
I've tried to simplify it, because my background is as a labor and employment lawyer; I don't understand technology, I don't understand how any of this works. But you can sort of get lulled into this: wow, look at these unbelievable results that we're getting. And that's the key word I focus in on: results. Because, if we try to regulate the technology at the EEOC, maybe unlike other agencies, we are going to lose. Do you know why? Because our investigators don't have the training on how to see what an algorithm looks like, or how to go through and parse out some of this various code behind the scenes. I've been arguing that none of that matters, because you know what we do know? We know employment decisions. And, at the end of the day, until one of these tools comes out and suddenly reinvents how employment decision making occurs, we just need to take a step back and say, what are we asking this tool to do? Are we asking it to review resumes? Are we asking it to make hiring decisions?
At the end of the day, there is going to be some result from that action, and that result, based on employment decision making, is what the agency has been regulating since the 1960s. So, that's why I try to focus away from the technology, away from regulating the algorithms themselves, because as you've probably heard throughout, and from the Congressman's remarks, if we want to start getting in that game, we're going to need a lot more funding, we're going to need a lot more training, we're going to need different kinds of investigators. But that's not why my agency was created.
Our agency was created to prevent and remedy employment discrimination and advance equal employment opportunity, and that's what we do. How do we do that? By looking at the results. Fortunately, the way Title VII of the Civil Rights Act is designed, that's what we regulate, and it doesn't matter who made the employment decision, the employer is going to be liable. So, whether it was an AI tool or a human with bias, the liability is going to be the same either way. So, that's the point I'm starting from, saying, well, that's our strength, that's what we know. And, in a sense, if an algorithm does make a decision with bias in it, well, now, what is the next step from there? Because if you look at...
Now, if somebody complains of an employment decision, I wasn't hired because of my race, because of my age, what do we do now? We show up, and we have to interview the hiring manager through depositions or subpoenas, and we say, did you not hire this person because she was a woman? Did you not hire this person because they're old? Nobody ever admits to employment discrimination. Nobody says, of course, I have bias, I would never hire a woman for this job. It's just not that easy to begin with in our investigations now. But if algorithms or data or machine learning are involved in the equation, I've been arguing this can actually help us with our investigations, because now you potentially have an auditable and traceable trail, without having to work out the numbers of the algorithm itself, and you can ask, well, what were the inputs here?
As opposed to somebody scribbling notes on a notepad during an interview, because that's generally all we have when looking for discrimination, you now have a dataset. And what does the dataset mean in HR? Your applicant pool. You have the actual points the algorithm was looking at, and what does that mean in HR? The skills to see if you can do the job, and not protected characteristics. We're able then, in a way, to be much more transparent, auditable, traceable, all those buzzwords, in doing employment discrimination investigations, in a way we couldn't before.
But again, that's all dependent on the results, which is what we know how to investigate. So, I've really been focusing it there. What are you using the tool for in HR? What purpose are you using it for? And what are those results? In a way, now, they're much more traceable, but they're also auditable in advance, which, as you know, is where a lot of the states and foreign governments are going.
Yeah, it's interesting, because I think the way that we think about that is, you hear the term, when it comes to public policy, of technology neutral. Given Workday has been involved in the HR AI space for a decade, I think we were pretty comfortable in our talking points right up until November of a year ago, when all of a sudden generative AI came out, and it's sort of shifted everything, and who knows what's going to happen next November, the November after that, the November after that. So, really focusing on a technology-neutral approach makes some sense.
I want to take a moment of personal privilege, as the fireside chatee: can you talk just for a second about the tools at the EEOC's disposal? Clearly, there was some guidance that came out. Obviously, there are enforcement actions. I think there's some rulemaking authority. How are you thinking about taking your focus on these issues into....
How does that translate into employer behavior?
For me, I just try to compare it to what HR departments are doing for the same employment decision making that's occurring outside algorithms. Think about it: corporations now have very significant HR training. When you start at your job, you have to watch all those videos, you have sexual harassment training, you have policies and procedures related to making hiring decisions and being an employee. That's what needs to occur related to AI governance, both at the macro level and the micro level. That's really what I've been arguing for, from my position. At the macro level, I've talked a lot about setting your principles.
Going back to when the MeToo movement occurred, what happened? Every CEO came out, every board came out, saying we are not an employer that's going to sexually harass, we are not an employer that's going to discriminate, and if you feel like you've been sexually harassed, you can report it, and something will happen. I think that's what needs to happen with AI governance, and a lot of companies like Workday, Microsoft, IBM, and Salesforce have come out and put out these very broad principles of corporate use of AI. So, that's step number one when it comes to governance.
And then step number two is the more micro, for the specific use, like in HR. There are so many tools that we've provided, about how the Americans with Disabilities Act impacts disabled workers who are going to be subjected to these algorithms, and how you can use long-standing employment testing principles from the 1970s on any type of AI tool that you're going to be using as an employment assessment.
What I've really been out arguing for is saying: you have all these existing policies, like I just mentioned, but now you need to create and amend them for AI, and you need to signal to your employees and your applicants that, if we are going to be using this, we're going to be using it under the same long-standing principles, which are rooted in civil rights laws, related to worker protection and worker rights. That could mean ensuring that the people who have access to this algorithm, who have access to the programs, have the proper training from the vendor, and working with the vendor to make sure that, when you're looking at these programs, you can't sift through them by age, you can't sift through them by gender or any characteristic that may show someone is in a protected class. And if you do, you're fired, to protect the company from that use, just like you would fire somebody who sexually harasses somebody.
So, what I've been trying to do at the EEOC is say: here are all of our existing guardrails and laws when it comes to employment decision making. Because AI tools are simply doing that at scale, you need to have those same policies amended and drafted for AI governance, because if something does go wrong, you can at least show that you were doing everything you could to prevent discrimination from occurring, and that's what we will kind of look at first and foremost.
That is sort of the way these products are sold in AI across the board. As you all know, it's SaaS software: implement it as quickly as you can, look at all the money you're going to save, look at all the employees you're going to displace, look how wonderful, there's going to be no bias, no discrimination. If you do that in the HR space, you're going to be in significant trouble, because you're making decisions about people's livelihoods, their careers, which are protected by fundamental civil rights.
So, that's where I've been arguing that this stuff can all work, and it actually can do what it's promised, but there's just so much more that has to go into it in the HR space. That's not only the cost of buying the software, but the cost of the governance, building in the policies, the testing, and making sure, before it ever makes a decision, that there is no bias. There are a lot of tools under existing laws that we've put out that can show you how to do that. So, it can all work, but there's just a lot of extra care in the HR space that most individuals who deal with software implementation haven't had to deal with before, and I think that's the difference here in how we make it work.
I think you've covered a little bit of this, but Workday spends a lot of time talking to policymakers about HR, about AI, about safeguards. There is this sense, somehow, that all of these tools are being developed in a vacuum, that there's no existing regulatory framework. You've been very clear, and I want to give you a platform to be clear here, that when it comes to both developing and deploying AI in the HR space, there are existing employment laws that cover these uses.
Correct. And I say that, and what's even trickier in the HR space is that, under employment laws, whether you intended to discriminate or not, liability is going to be the same either way. And that's really where that extra care and caution needs to be, because at the end of the day, it's just making an employment decision. Only the employer can make the employment decision, not the AI vendor, and that's really what we can't lose focus on. I think what we're starting to see is a lot of distraction out there as certain cities, or states, or even foreign governments start to regulate in this space, specifically related to HR. Now, New York City was the first one to put out a comprehensive law, called Local Law 144, related to using AI in HR. There are proposals in California, like AB 331, that talk about some of the additional requirements they're going to have, such as employee consent, disclosure, opting out, and yearly audits. The EU has designated some of the uses of AI in HR as higher risk, which is going to carry significant disclosure requirements. So, that is really all in addition to these underlying principles. You can't get lost in the sense of saying, well, in New York City, I'm going to have to do a yearly bias audit, or in California, I'm going to have to give disclosures. Those are all things employers can be doing now. Because if you look at how you actually do those bias audits, that yearly testing, it's based upon the EEOC principles of employment testing.
So, a lot of that really comes back to our agency, even as some of these new innovative laws are restricting it. And, you know, it gives employers who operate on a multi-state or multinational level options that they could start integrating now, saying, well, if this is where all these governments are going, with employee consent or yearly bias audits, then if you do the yearly bias audits, find bias before the tool is deployed, and actually eliminate that bias, you're not going to have liability with the EEOC either, because there's no discrimination. So, I think it's really important to watch that. But, we can't lose sight of the fact that the EEOC is already there, in all these areas where others are trying to regulate in this space.
And that's going to have to be the last word. One of the downsides of you and I getting on a podium is that it can take all day to get to the end of the conversation. So, please help me thank Commissioner Sonderling for coming, and we appreciate your time.