All right, recording in progress. Good evening, everyone. Thank you for being here. I'm very delighted to finally welcome you to this press conference about the Artificial Intelligence Act trilogue, the everlasting trilogue that finally came to an end. We have with us today Secretary of State Carme Artigas, rapporteurs Brando Benifei and Dragoș Tudorache, and Commissioner Breton, who will explain thoroughly how it all went. Without further ado, I give the floor to the Secretary of State.
Thank you very much. Good evening, everybody, ladies and gentlemen. I'm very pleased to be here today with you with such good news. And I want to thank you all for waiting for us until the end. It has been a very long process, sometimes painful, sometimes stressful, but at the end of the day tremendously satisfactory. So congratulations to whoever is behind the Twitter account "Is there a deal on the AI Act?", which is finally able to say, with all the letters, that there is a deal, and to close the account. It has been very long sessions, long trilogues, but I can definitely say it has been worth the few hours of sleep, the nerves and all the time we have spent these days to be able to celebrate the biggest milestone in the history of digital regulation in Europe, for the digital single market and, I think, for the world. I believe that we as a presidency, as well as our colleagues in the Commission, Commissioner Breton, and in the Parliament, of course, led by the rapporteurs Brando Benifei and Dragoș Tudorache, have earned the right to say, with all the letters, that we have achieved the first international regulation on artificial intelligence in the world. We feel very proud. We are very happy that this has also happened during our presidency. It was one of our strategic priorities, and we have done all the necessary work to reach this important milestone. Before going into the details of this regulation, I would like to congratulate and thank all the teams involved in the negotiations, who have worked so hard to make this law a reality, for the positive impact it will have on all Europeans: teams with responsibility, with a sense of state and with Europeanism. We have worked together to have a regulation that respects human rights in the digital age and that guarantees the ethics and the values that we defend in Europe. Of course, I will mention three things that we consider key in this negotiation: the uses of AI for development and research are out of scope; open source has very limited, light requirements; and the ecosystem gets a lot of incentives to boost innovation. And there are two narratives I don't want anyone to write today. The first: this is not just a high-level political deal. This is a full agreement, not only at a high political level, but on every single one of the almost 90 different articles, where we have agreed the legal detail, the legal wording, the legal text. So we have agreed the legal text of every single article. And the other narrative I want no one to buy is that there is a loophole to avoid fundamental rights for citizens. No, this is a very well-balanced regulation that boosts innovation and makes innovation compatible with the defence of fundamental rights. So I give the floor now to... thank you very much. Oh, no, sorry, to my colleagues from the European Parliament.
We had one objective: to deliver a legislation that would ensure that the ecosystem of AI in Europe develops with a human-centric approach, respecting fundamental rights and the European values, building trust, building consciousness of how we can get the best out of this AI revolution that is happening before our eyes. This meant we had to develop a strong and robust risk-based approach, which we did, to defend everyday citizens from the risks that AI can entail in everyday contexts, and also to foster the development of new business ideas with the sandboxes. At the same time, we also had to define, and the Parliament did a great job on this because it really changed the text completely from the beginning, a clear indication of use cases that were to be prohibited, that we didn't want in Europe. I'm talking about predictive policing, I'm talking about social scoring, I'm talking about biometric categorisation, issues on which we had long discussions. But finally, we got on the right track, defending fundamental rights, which is a necessity for our democracies to endure such incredible changes. And we also put clear rules for the most powerful models; we had to discuss this thoroughly. This was a big topic, also emerging after the start of this work, also linked with the issue of generative AI and the transparency needs for generative AI, which are crucial. And I think we delivered also on that; it was not obvious. Just a few weeks ago, I remember the debates on that. But we delivered. We are the first ones in the world to have a horizontal legislation that has this direction on fundamental rights, that supports the development of AI in our continent, and that is up to date, at the frontier of artificial intelligence, with the most powerful models under clear obligations. So I think we delivered, and this is thanks to a strong cooperation with the other institutions. And this also shows that when Europe is united, when Europe works together, it can lead; it can lead and not be the one that is just touched by, is suffering from, changes, but instead is able to lead in how we want our world to deal with crucial changes like AI entering our lives. Thank you.
Well, just a few words to add on my side. As AI is becoming so omnipresent in our lives, so omnipresent in our economies, the whole world, for the last year in particular, has been wondering what to do about it. And when wondering what to do about it, they were looking for models and for inspiration. And I think that the work we have achieved today, I dare say, is an inspiration for all those out there looking for models, because we did deliver. As I think we've heard from the Secretary of State and from my colleague Brando, we did deliver the balance between protection and innovation. Not many people thought that this was possible, and we were always being questioned whether there is enough protection, whether there is enough stimulus for innovation in this text. And I can say this balance is there. We have safeguards, we have all the provisions that we need, the redress that we need, to give trust to all citizens in their interaction with AI, in the products and the services that they will interact with from now on. And at the same time, we have all the elements in there to allow the ecosystems working on, developing and deploying AI in Europe, and also those companies outside of Europe, to do their jobs. I think we now have to use this blueprint to seek global convergence, because this is a global challenge for everyone. And I think that with the work we've done, as difficult as it was, and it was difficult, I think this was a marathon negotiation by all standards, looking at all precedents so far, I think we delivered. And I remember when Brando and I met for the first time after being nominated as rapporteurs in Parliament, more than two years ago now, we had a discussion on what our priorities in this negotiation would be, and I am happy to say, Brando, I think all those priorities are delivered. So I want to thank you for the work done. And I want to thank the presidency, the Secretary of State, and also the Commissioner, for your work, for your support, and all that allowed us to be right here.
Thank you. Commissioner?
Well, I think that when finally we reached the deal, I heard one word. We were almost 100 in the room for almost three days, and the word was "historic". We are humble, because it has been quite a challenge. But it is my turn to say thank you. Thank you, Carme, fantastic work. My word also to Dragoș and Brando: fantastic. Not easy to manage constituencies, you know; and I know it's not easy, also, to write the first draft proposal. I would like to thank you both, and to thank your teams, also from the Commission, and your team, of course, the technical teams: an amazing job, 100 people for almost three days. And I will tell you why we decided to spend three days. Because the deal that we reached today is a complete deal. We learned that we had better spend some time to finalise everything, even if it was only a technicality, closing as much as we could all the recitals. So now it's a full package, it is a complete deal, and this is why we spent so much time: many, many hours for one trilogue. It is true that after 22 hours we decided to stop, not to stop the trilogue, just to take a big break, just to go to sleep a little bit and come back. So this is really something that is much more, I believe, than a rulebook: it's a launchpad for European startups and also researchers to lead the global race for what our fellow citizens want, which is trustworthy AI. And this is balancing user safety and innovation for startups, while also respecting, and you said it very well, Carme, Brando and Dragoș, respecting our fundamental rights and our European values. It was not easy, but we managed it. When I joined the Commission four years ago, at the end of 2019, we decided to embark on a mission, I remember this very well: we said we need to organise our digital space, to make it a space where we can be a frontrunner, but also where we have security and where we can manage the new rules. And a lot of people told us: impossible, you will have lobbies against you. We did have lobbies against us. They said it would be an impossible task. And we did the DSA, and we did the DMA, and we did the Data Act, and we did the DGA. And now, we did the AI Act. But the first one we started was four years ago, with the AI Act, as always starting with a very large consultation, trying to understand, trying to interact, having as many stakeholders on board at the beginning as possible. And again, we had many comments on this process, but at the end of the day, I knew that the first to establish rules always has a first-mover advantage in setting global standards. This is why a lot of non-European countries tried to stop us, and I will tell you, it's true, because they guessed that this methodology is of course particularly complex, but also not impossible. And today's trilogue is proof that we were able to do it. You know, our approach is always to regulate as little as possible, but as much as needed in Europe, which is why we promoted, with my team, in our original proposal, a purpose- and risk-based approach. We are the first to establish a binding but balanced framework for large AI models, general-purpose AI models, promoting innovation along the AI value chain. We agreed also on a two-tier approach, with transparency requirements for all general-purpose AI models and stronger requirements for powerful and very powerful models with systemic impact across the EU single market.
And for these systemic models, we developed an effective mechanism to assess and tackle systemic risk, just like we did for the DSA and for the DMA. We also defined various high-risk use cases, as was mentioned, such as certain uses of AI in law enforcement, in the workplace and in education, where we see a particular risk for fundamental rights. And we made sure also that the high-risk requirements are effective, proportionate and well defined. And we developed also tools to promote innovation, which was extremely important for all of us during the process. And finally, we agreed also on a robust enforcement framework for the AI Act. To be extremely clear, this makes it very different from various voluntary frameworks around the world. And in a way, because it is so difficult to achieve what we did, it is almost easier to say: okay, we trust companies, give us your own standards, and let's see if you apply them. It just doesn't work. But yes, it's extremely difficult to do what we did, because it involves market surveillance at national level, and it also involves a new AI Office to be established in my services in the European Commission. We will welcome new colleagues, Roberto, and a lot of them. And it includes also tough penalties for companies that do not comply with the new rules. In a regulation, we need also to have this tool: no penalties, no regulation. Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter. So now we are at the start of a new journey. Thank you very much again, Carme, thank you very much, thank you very much. And yes, I believe this is a historic day.
Just one thing, to remark on a couple of points I forgot before that I think are very important. For the first time ever, we are not only putting technical requirements on these risk-based general-purpose AI systems and models and on high-risk AI systems; for the first time, we are also defining AI governance at a transnational level. So this law contemplates creating national supervisory AI authorities, like the one we have already launched in Spain, announced precisely yesterday, but we are also creating a European-wide AI Office, which is not only going to help us coordinate among the member states, but will also be responsible for the systemic risk of general-purpose models, and it will be advised by a scientific panel, by a board of the member states, and by civil society. So we are achieving an important milestone: that we, the citizens, decide what can be done and what cannot be done with AI. Because we have prohibitions, for the first time somebody says: even though this is technically feasible, I don't want this to happen in Europe. And secondly, we citizens participate in its governance. I think that's a very important milestone. And the second point, very important: this legislation is future-proof. One of the challenges we had was how this regulation can cope with the constant changes of technology. How have we solved it? The law itself has updating mechanisms, according to the advances of technology. So these two contributions, I think, are very valuable.
Thank you all. The floor is open now to questions.
Hi, Anthony Rogerson from the Spanish news agency EFE. I have several questions after these 36 hours, but I would like you to clarify, please, a little bit more: which are the exceptions for biometric identification in law enforcement? Also, how is generative AI regulated? And the second question: when will it enter into force, and what will happen in the meantime?
Can you explain on the biometrics? Yes.
So, on biometrics, and I'm not telling you a secret: this has been probably one of the most debated issues from the very start of the journey in adopting this Act. And we as Parliament had a very strong mandate; again, I'm not telling a secret, we had a biometrics ban. And we always knew that this was going to be probably the toughest part of the negotiation, and it has proven to be so. In fact, I think for the last 20 hours or more, this has been the part that we debated the most. There is an exception to the ban for law enforcement, but it is an exception accompanied by very strong safeguards, much stronger than what was initially in the Commission proposal, because we wanted to make sure that even if we give this tool into the hands of law enforcement, it is accompanied by the rights, the redress and the safeguards for the citizens, so that it is not abused against their interests, against their rights. So that's the part of the text where we as Parliament fought the most, because, again, this is where we thought that our agenda and priorities needed to be well served. And I believe we found a balanced solution.
If I may, on the other point, also on this one: the independent authorities checking on that. This was not easy to get with the Council, but we agreed to have independent scrutiny, because we cannot accept that the governments, which are in the end the law enforcement authorities, control themselves as to whether they do it correctly. So this independent scrutiny was a very important point to fight for, and in the end we agreed on that. And we also fought very hard for a clear, very clear, very extensive, and also not easy, biometric categorisation ban, which is strongly linked to this topic, because it ensures that biometric identification cannot be used to pursue these kinds of surveillance practices. So it was important to bring all these pieces together. And on generative AI, it's very important that we obtained transparency: we have transparency on the recognisability of AI-generated content, and we also have transparency to defend and strengthen the rights of copyright holders, obviously respecting the existing copyright law, but giving them stronger transparency and tools to represent their interests.
And, to answer your second question, on the adoption period: it was mentioned that the adoption period is, in general terms, 24 months, but we have put just six months for the prohibited use cases,
and 12 months for the models, the GPAI. Yes. So it's a gradual entry into force that is linked also to what needs to be put in place. For example, for the high-risk requirements, there was a proposal of 48, of 36, and in the end 24 months is really the minimum, because we need the standards to be in place, we need the offices to be ready. We will also work on voluntary compliance, anticipated voluntary compliance, so that everyone will be ready for when it's fully mandatory. But in the meantime, we encourage everyone to start respecting the rules in a gradual way. And this will be, I think, the big work of the next year, of 2024.
This is why we have the AI Pact, so that in between, companies will be able already to adopt the rules. So, if you allow me, maybe to summarise in a nutshell, because it was very well explained: on biometrics, there is a full ban, and that is another way to say it, with only three exceptions: first, a foreseeable and expected threat of terrorist attack; second, an exception for the search for victims; and third, the prosecution of serious crime. Then, for generative AI: transparency requirements and watermarking. For general-purpose AI models: a two-tier approach, which is something very important that we discussed earlier on, with transparency requirements again for all general-purpose models, and requirements to assess and manage systemic risk for the most powerful models. And then, for the timing: of course, it will enter into force once the Parliament and the Council adopt it, so early next year, that's what is planned. And the AI Office will be established immediately; we will have this running immediately, so we will work starting tomorrow to get ready. After six months, the prohibitions; after 12 months, the transparency and governance requirements; and after 24 months, all the other requirements. But I don't want to add anything else; that's the timeline.
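As a purely illustrative sketch of the staggered timeline just described, assuming a hypothetical entry-into-force date (the real one depends on formal adoption and publication, which had not yet happened at the time of this press conference), the applicability dates could be computed like this:

```python
# Illustrative sketch only: the dates are placeholders, not the official timeline.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months (day of month kept as-is)."""
    idx = d.month - 1 + months
    return date(d.year + idx // 12, idx % 12 + 1, d.day)

entry_into_force = date(2024, 2, 1)  # hypothetical placeholder date

milestones = [
    (6, "prohibited practices apply"),
    (12, "GPAI transparency and governance requirements apply"),
    (24, "all remaining requirements apply"),
]

for months, label in milestones:
    print(f"{add_months(entry_into_force, months).isoformat()}: {label}")
```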
From Bloomberg. I have two questions for the Secretary of State: whether you got France and Germany's sign-off on the deal, especially when it comes to general-purpose AI. I also wanted to ask about the FLOP threshold: my understanding is that with those kinds of thresholds right now, the only model that would actually qualify is OpenAI's GPT-4. Is that correct? And if so, is that purposeful, or is the understanding that at some point other models will soon qualify? Thanks.
Yes. We have agreed that the initial threshold to define low-tier and top-tier general-purpose AI models is 10^25 FLOPs. We know this may not remain a relevant measure to define these thresholds after some period of time, especially in two years' time, when this comes into force. This is why we have already foreseen an updating mechanism, and we have identified that other measures can be taken into account, because at the end of the day this is still a risk-oriented legislation. So we are not regulating foundation models based on the size of the companies that are developing them, but based on their potential impact and systemic risk. And one way to capture that is a high computing capacity, which of course allows for a bigger impact. Another can be the number of parameters, another can be the number of users. So we have identified several, I would say, KPIs, and this is why we have a scientific advisory panel associated with the AI Office, so that we can update those thresholds based on the scientific consensus. And it is important that all these obligations are underpinned by codes of practice, so we will work together on this to find out the "how". So there are some obligations, but the way we encourage the companies to comply with those obligations will be defined together. An example is also the regulatory sandbox that we have already started in Spain, which will continue its job over the two years of the adoption period and will also help design the tools, both for the supervisory agencies and for the companies, especially thinking about SMEs, to help them comply with these new requirements. What was your other question?
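To make the tiering logic concrete, here is a minimal, non-authoritative sketch of how such a threshold-based classification could look. The 10^25 FLOP figure is the initial threshold cited above; the function name and the extra "other risk indicators" flag are illustrative assumptions, since the final legal text defines the actual designation procedure:

```python
# Minimal illustrative sketch of threshold-based tiering for general-purpose AI models.
# The 1e25 FLOP figure is the initial threshold mentioned in the press conference;
# everything else (names, extra criteria) is a hypothetical simplification.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def classify_gpai_model(training_flops: float, other_risk_indicators: bool = False) -> str:
    """Return the tier of a general-purpose AI model under this simplified scheme."""
    if training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD or other_risk_indicators:
        return "top tier (systemic risk): stronger obligations"
    return "low tier: transparency obligations"

print(classify_gpai_model(5e25))  # above the compute threshold -> top tier
print(classify_gpai_model(1e23))  # below the threshold -> low tier
```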
The one on France and Germany, whether they are on board with the deal.
Yes. As a presidency, we cannot do anything, we cannot negotiate anything, without a mandate. So last Friday we got a mandate that was very specific, with some red lines that we have obviously respected, but with some room for manoeuvre and some agility, and that has allowed us to come to this positive agreement.
So do you expect them to take any issue with what you've negotiated?
Now, the final text we still need to share, and of course in the Council they need to confirm it, and we are hopeful that they all will confirm.
Maybe one element also worth mentioning, because we also put into the regulation the flexibility to adjust it again where needed, when we see it's needed. So that's important, that we maintained and advocated for this flexibility.
And something very important: R&D is excluded. And that means R&D academic organisations, but also the models that are in an R&D phase and a pre-training phase. So for us, the models subject to regulation are the models that are already in the commercial phase.
Thank you very much. From the Austrian broadcaster ORF. You mentioned the exemptions for real-time biometrics, the three exemptions. Could you just clarify a bit more how this independent scrutiny should look, and also the process of this scrutiny, as we know that scrutiny always comes afterwards? Thank you.
Well, on the scrutiny: one of the most important points for us in Parliament was to make sure that we have judicial authorisation for any use of RBI, remote biometric identification, by law enforcement. This has not been an easy negotiation with the Council, but we have obtained it, so we have this as an important safeguard: scrutiny by judicial authorities, or by independent administrative bodies with the ability to make binding decisions. Here, we have also sought to stay within the jurisprudence of, obviously, the Court of Justice. And with that, we can make sure that any decision to deploy such an otherwise very intrusive tool in public spaces is always taken with such authorisation. So for us, that's an important element of scrutiny. The second point is that the deployment of such a tool is always done with notification of the DPAs, the data protection authorities, so that they too can exercise the mandate which they have under the GDPR, to make sure that the protection of personal data, which is what we tried to achieve with these exceptions and with the regime that we put in place, is ensured through the exercise of their mandate.
Thank you. We have several journalists connected online. We'll go first with our colleague from Euronews. Okay, you can go ahead.
Hi, hello, I hope you can hear me and see me, maybe not that well, because I'm at home. I have a question for all of you. The first one is: can you explain a bit more where chatbots will fall under this deal that you have agreed, because you speak about transparency requirements for some general-purpose systems and some more stringent conditions for other kinds of general-purpose systems. So where do chatbots fall in that categorisation? And then, second, for the MEPs: you said right now that you had a very strong mandate coming into the negotiation, with this long list of unacceptable practices. Do you feel that you have given in too much to the concerns of member states, who at the end of the day are in charge of law and order in their countries? Thank you.
So, first, on the chatbots: I mean, a chatbot can be considered a general-purpose system, but if a general-purpose system is not considered a high-risk system, the legislation does not apply.
On the issue of whether we have given in too much, not only on remote biometric identification but generally on the prohibitions: for those that are familiar with the text, you know that already we, in Parliament, had added quite a number of use cases and applications of AI that we considered so detrimental to the values that we want to protect in the Union that we added them to the prohibition list. And many thought that this would be too much of a stretch for the mandate, and that we would not be able to push it through in the negotiations. Well, we can confirm that not only are the prohibitions that were already in the text there, but we have a prohibition on biometric categorisation, we have a prohibition on predictive policing, we have a prohibition on scraping the internet for facial images. So all these things together are again a very strong regime that says what kind of uses of AI we do not want in the Union, we do not want in our market, and we believe that this stays in line with the mandate that we received.
Maybe one word on the chatbots, just to say that under Article 52, the chatbots will be subject to transparency requirements. And so the user will need to be informed that they are interacting with an AI chatbot.
If I may add one point on the balance, on whether the Parliament gave in, et cetera: I want to give a very clear example, on emotion recognition. We had a starting point in this discussion, looking at the original text, where emotion recognition was not even considered high-risk; it was in the transparency category. So we really changed it completely, and this is the demonstration of the power of the European Parliament, and of the negotiations we do with the other co-legislators. Unlike some descriptions we read, the text of the Commission, which was already very good on many, many aspects, can really be changed completely on some parts if we think it's not okay. And on emotion recognition, we really changed it very much, because now we have prohibited some practices of emotion recognition, and we put the others in the high-risk category. So we really changed it completely; this has moved completely in terms of the risk assessment, and I don't think we have anything that comes close to this, not even in voluntary commitments, in the rest of the world. Because we think that these instruments, for example, can be used to exert pressure and manipulation on people, and we wanted to prohibit that where needed, and also put it under the strong scrutiny of the high-risk regime in the other cases. So I think this is a clear example of where the Parliament's strong mandate helped us get a really better text and a good result for the sake of the fundamental rights of citizens.
Thank you. We have another question online, from Matthew Newman from MLex.
Hello, and thank you for taking my question. I have two questions: one is on predictive policing, and the second one is a follow-up on the French and German positions. First, on predictive policing, I'd like to ask about the compromise that was reached, which allows predictive policing in some circumstances. And specifically, I want to ask the parliamentarians if they're satisfied with this deal, because it allows for this practice, which is normally banned, when looking for criminal activity, and specifically criminal activity that would often be assessed by humans. But there would also be a carve-out for looking at financial fraud. So is this too wide of a carve-out? Why did you agree to it? I'm just wondering how you reached this agreement, because it seems like kind of a wide carve-out for predictive policing. And then, on France, Germany and Italy: we know that they were quite concerned about impacts on innovation, and they made really strong statements about how a regulation on foundation models would harm innovation in the EU. Could you just go over a little bit how those views were overcome, how you convinced them that what you have on the table, this two-tier system, won't affect investment into AI in Europe? Thank you.
Okay, yes. Well, I need to clarify this, because this was a very contested point for a long time; we had to discuss a lot, and we were very firm as Parliament, and I think we did it for a good reason, getting to a result that is good for all the co-legislators and for the process itself. We banned all predictive policing. What is in the text is not predictive policing, and it took time to get to that, because there was disagreement, there was discussion, but in the end I think it was really good that we reached this agreement. There is no predictive policing. What we allow, with a specific text on that, is the possibility to use so-called crime analytics applications, which do not apply to and do not link anything to single individuals, but rather to anonymised trends; and we exclude that one can predict, forecast or even suggest that someone could commit a crime based on these kinds of systems, because we don't believe they really work, and if they really worked, they would be against all our principles, including the presumption of innocence. And so we banned all predictive policing; we allow the use of crime analytics that are linked to existing facts, that are linked to possible criminal investigations, but there is no way we would allow, in this way, predictive policing. And it took time to reach this agreement, but I think this is one of those things where the Parliament can say we fought the good battle, and we convinced everyone that we were on the right side. And I want to thank again our co-legislators for the flexibility they showed on this, to reach this deal that makes this a watertight exclusion of predictive policing.
I will add one thing, because, yes, as Brando said, we did fight hard on this with the Council, but in fact we realised that we were aligned on the objective. And we as Parliament, already from the start of designing our mandate, never sought to deny law enforcement the tools they need to fight crime, the tools they need to fight fraud, the tools they need to provide and secure a safe life for our citizens. But what we did want, and what we did achieve, is that we banned AI technology that would determine or predetermine who might commit a crime, or who might re-offend, because, again, we believe that such tools should not be deployed in our Union. And we delivered on that ambition.
Yes, and on that topic, I suppose you refer to the non-paper issued by France, Germany and Italy. And I think that it was very helpful for us, because it put the focus on something that was important for our companies from their perspective. And we embraced that criticism, which was a constructive criticism, and we had it in mind when we were negotiating, within the limits of the mandate. And I can say we are very happy, because I think we are supporting European developers, startups and future scale-ups. Why? First of all, we give certainty: we have legal certainty, we have technical certainty. That is going to avoid a lot of class actions, which we are seeing in other countries because they don't have that certainty. Once you have certainty in the market, you give trust to consumers, you give trust to the citizens, and people will not be afraid to buy or use these products. So we're giving certainty to the market; therefore, we have a first-mover advantage for our companies. We are also boosting innovation: the law contemplates not only the regulatory sandboxes but also, and that was very important for us, real-world testing scenarios; we are considering real-world testing scenarios within this law. And then there is the fact that we have a different classification of those foundation models, now called general-purpose models: general-purpose AI models and general-purpose AI systems. For the models, we differentiate between those having a low systemic risk and those having a high systemic risk. The low tier, which is where all the European companies are right now, has very light requirements, very light obligations; models are excluded in the R&D phase and in the pre-training phase, and open-source models are also excluded from these transparency obligations, except, I think, for the fundamental rights impact assessment. And that is very important, because while we're giving light requirements, we're also facilitating the adoption of those requirements with technical information. And for the things that will require an assessment, like the fundamental rights impact assessment, we will provide templates, online templates, that can be completed by devoting very few minutes; I think this will help the adoption. And also, we are establishing a co-regulatory process similar to the DSA, which is the codes of practice. So all of this together gives certainty, allows for innovation, protects and boosts our startups, and generates the environment, the sandboxes and the real-world testing scenarios that our companies need to grow.
Maybe I will add something on R&D, purely because it's true that this is, in fact, really pro-innovation. And we spent a lot of hours on it, and it's true that we have also been discussing with many stakeholders, including member states, who wanted us to focus on this. The result is really that the AI Act is pro-innovation, and I have worked in the private sector: there was a myth floating around suggesting that the Act could hamper innovation. This is just not true; it is the contrary, as you just said. We foster the uptake of AI, we promote innovation through regulatory sandboxes, you said it, through real-world testing, you said it, and through the provisions for open source. And we have a framework that ensures legal certainty, which is extremely important for business: this is the only continent where you know what you have to do and what you don't have to do, and that's extremely important for innovation and business. Then, of course, fewer class actions here. And we know how to create an AI ecosystem: we will have the largest computing power capacity for startups in the world, with EuroHPC at their disposal, with two exascale computers, the biggest being in the top three. So this will be a really great landscape for innovation. We often think about the big AI developers, but we should also think about the many downstream businesses and SMEs that use AI models and incorporate these models into their systems; they will all benefit from this clear and predictable framework. And for the models, our two-tier approach will definitely place a particular responsibility on those models which could have a systemic impact across our internal market, and this will also allow these downstream deployers to use AI models safely and confidently. So I really think that we spent a huge amount of time, several hours, to make sure that this is really the case. And by the way, a lot of things were already in the original proposal, but we definitely improved it thanks to the co-legislators.
Thank you. We have another question online, from Sophie Petitjean.
Yes, hello. Thank you for taking my question. So I have a question regarding the two-tier approach and the classification; it's very concrete. I would like to know where Mistral should be put. Is it in the low tier? Is it considered as an open-source model regardless, considering the definition that you kept in the text, in the Parliament text, sorry?
Thank you. So, as far as we know, because I have no direct information, but from what I have heard from this French company, they would fit in the low tier; but since they are still in the R&D phase, with a pre-training model, they would be excluded. If they were a final commercial product, they would be in the low tier, because I think they're using 10^23 FLOPs, I guess, yeah.
And of course, the companies will have to self-assess.
Thank you. One last question, by Luca Bertuzzi from Euractiv.
okay
Maybe we have a question for you, Luca.
You would be surprised. We've been wondering for some time whether
you have slept as little as we have.
Right. I'm going to test your memory now, because I'm going to ask you very specific questions. So, on remote biometric identification: initially, the text from the Parliament talked about uses that are strictly necessary and based on national legislation. Can you confirm whether this was part of the agreement? And secondly, there was an exception for law enforcement to deploy a high-risk system that has not passed the conformity assessment. Has that been maintained, and is there also the procedure by which they can request the authorisation ex post?
Can you repeat the first question?
Yeah, but also the audio review.
Alright. So, on the exceptions for remote biometric identification: there was initially wording that uses must be strictly necessary and based on national legislation. Has this been maintained, the Parliament wording, for the exemption for RBI?
Yes.
Okay. And on the emergency procedure for law enforcement to use non-conforming systems?
Which procedure?
I don't know if we can get into the technicality here, but I think it's important, because sometimes it's mistaken: there are two classifications here, prohibited uses and high-risk uses. Of course, the initial list of the Commission was very short in terms of prohibited uses; the list of the Parliament was very wide. And we have come to meet halfway. So we have made the exercise to include many more uses as fully prohibited. But the fact that something that was initially proposed by the Parliament as prohibited has moved to high risk doesn't mean that now everybody can do what they want. No, the other way around: we have, in return, put in additional safeguards. So the balance here is: something which is prohibited is banned in any case, and something that we consider high risk remains possible, as there are some beneficial uses, but in that case we provide additional safeguards. And it's important that these additional safeguards are put in place, like prior judicial authorisation, a fundamental rights impact assessment, and some obligations on the deployer. So sometimes it seems, when I see the press, that we are just saying: oh, this is no longer prohibited, so the governments can do whatever they want. No, the other way around. The fact is that we have this middle ground, where we are able to provide certainty, to provide security, and to provide additional safeguards in the cases where something is not fully banned, because it cannot be fully banned, because there are some specific cases where the use of these technologies is also for good.
In any case, because I wasn't sure exactly what the question was, but if you are referring to the derogation from conformity assessment for public security: that is done with this judicial authorisation.
Thank you very much. Thank you.
So, there are no more questions. I think we can call it a day. Thank you all. Thank you.