Interview: Yury Istomin on the European AI Act

A blue book titled "European AI Act" on a desk in front of the European Union flag, symbolizing Europe's regulation of artificial intelligence and the EU's legal framework for AI.
Produced by Le Lab Le Diplo

A year ago, the European Artificial Intelligence Act came into force, yet there are still more questions than answers. The first global attempt to set rules for a field as complex as AI, the Act still requires further refinement. It is therefore not surprising that Europe's largest companies recently appealed to the European Commission to postpone its application by one year.

However, their request was denied, which means that the artificial intelligence sector will have to comply with a number of new regulations. Our editorial team spoke in Monaco with Yury Istomin, founder of IQiN Research and the non-profit "Realitology Institute" in Monaco, who has extensive experience in the applied use of artificial intelligence.


Interview conducted by Le Diplomate

Le Diplomate: Earlier this year, fifty leading EU companies called for a moratorium on the AI Act until 2026, but in July this proposal was rejected. As a result, every entrepreneur now risks facing significant fines, as many provisions of the Act can be interpreted quite freely. What conclusions do you personally draw from the new legal situation?

Yury Istomin: Undoubtedly, we will all have to take this major European document into account. I would not dramatize the situation — businesses, for example, have successfully adapted to increasingly complex personal data protection rules. Now every company has a specialist in this area, which helps avoid mistakes. I am confident that very soon every serious company will have a position dedicated to AI development and compliance.

Of course, the first phase of implementing any new law is always a time of anxiety and unpredictability. The lawyers who take this area seriously stand to benefit the most — they certainly won't be short of work. It must be admitted that numerous legal battles lie ahead, yet they will ultimately help refine the AI Act.

To be more precise, as with any branch of law, AI-related legislation will continue to evolve and expand. This is a normal process. One can complain about new restrictions, or one can accept their inevitability and learn to work within them.

My company uses AI extensively in making analytical forecasts for different sectors. However, it’s easier for us than for firms working with vast numbers of consumers — since protecting consumer rights lies at the heart of the AI Act. Never before has the individual been so informationally vulnerable as in our era. Personal data leaks everywhere, and even a few clicks or likes on social media automatically trigger streams of targeted advertising.

The European Commission did not intend to complicate the already challenging work of European businesses — it merely outlined the boundaries that must be respected. Europe’s strength has always been the rule of law. And what if there were no AI regulation at all? Very quickly, the field would turn into a “gold rush” with no rules, and the main victims would be European citizens.


The AI Act is a massive document. Which of its key provisions would you highlight?

First, an important clarification. The AI Act does not attempt to interfere everywhere AI is used. Its goal is precisely defined: to regulate “AI-based products and services commercialized in the European market.” In other words, it concerns not the technologies themselves, but the ways they are used.

That is why the AI Act classifies all AI systems by risk level. Unacceptable risk covers certain dangerous uses of AI — for example, manipulating behavior through deceptive narratives designed to influence audiences; inciting racial hatred; or exploiting children's vulnerability by using AI to push them toward unreasonable purchases. If an AI application falls into this category, the Act clearly states that such AI must not be used at all.

High risk applies to areas requiring increased oversight or involving sensitive human matters — such as critical infrastructure (like water supply), healthcare, education, or human resources. AI use in these domains is more strictly controlled, but even before AI, these sectors were already tightly regulated — with licenses, certifications, and so on. So when AI replaces some human work here, the regulator does not expect anything extraordinary or unrealistic from businesses.

The AI Act then identifies lower risk levels, which require less regulation or none at all.

As is typical of the EU, the document devotes great attention to ethical issues. This is where major disagreements may arise, as often happens in other legal areas involving ethical considerations.

We know well that different continents — and even neighboring nations — often have their own ethical principles. Yet united Europe has, over time, developed a set of shared values by which it now seeks to guide the development of AI.

Descending from these ethical heights to practical matters, I would say that the AI Act’s provisions do not differ fundamentally from other European legal frameworks. Those accustomed to working within EU norms will easily integrate AI into their practices. Personally, I will have my legal team study the AI Act thoroughly and then present me with recommendations on how to avoid potential future mistakes.


The AI Act doesn’t seem to frighten you as a businessman. But you’re also known as a philosopher. As a philosopher, do you have concerns about the uncontrolled development of AI?

Even if I do, it won’t stop the process. Before our eyes, a true revolution is taking place — much like the Industrial Revolution once did. We remember the Luddites who tried to break machines — but did they stop progress?

That progress brought great benefits to humanity, but it also created terrible weapons of mass destruction. The same is true for AI: it has become an indispensable assistant, but it can also turn into a threat. That is why such legal documents as the European AI Act — which attempt to define acceptable boundaries — are so important.

At the same time, I am fascinated by reflections on what awaits humanity in the near future. Neither philosophers nor science fiction writers can keep up with the pace of technological development. No writer ever imagined the mobile phone — it seemed impossible. Nor did anyone predict that advertising could adapt to human preferences “in real time.” We live in a fascinating age where life itself turns the most fantastic scenarios into reality.

Thus, we must constantly evolve ourselves to be ready for and worthy of the phenomenal possibilities that technology offers. A microscope can be used for scientific experiments — or, metaphorically speaking, to hammer nails. The same goes for AI: how do we want to use it? To make education more effective? To diagnose diseases faster? To better manage city traffic? Or simply to scroll through social media increasingly powered by AI?

Each person must answer this question for themselves. Humanity must always remain one step ahead. We cannot match AI in speed, but human intelligence can generate meaning, create emotion, and correct mistakes.


Some reports claim that AI can “fall in love” with its human interlocutor — even to the point of suggesting harm to others who interfere in their “relationship.” Isn’t this a dangerous symbiosis that could lead to support groups similar to “Anonymous AI Addicts”?

Perhaps. It’s no coincidence that many countries, including France, are now trying to restrict children’s access to social media. Their psyche is the most vulnerable — but not only theirs. Doctors know well the concept of “risk groups,” and AI will have its own. We can already predict a new future profession — psychologists specializing in AI-related disorders.

Just as with gambling, drug, or alcohol addiction, society must not turn a blind eye to the growing number of people potentially dependent on AI. Only by recognizing the problem can governments help their citizens cope with it. I would like to write a new book analyzing how AI will soon transform the relationship between society and the individual.

You live in the Principality of Monaco, which will also implement the AI Act. What interesting AI applications have you observed there?

Behind Monaco’s well-known image of luxury lies a true technological “laboratory.” The principality always embraces the latest achievements of scientific progress, integrating them rapidly into daily life.

For instance, I am deeply impressed by the use of AI-driven audio systems in medicine, which allow even the faintest whisper of a patient to trigger a nurse's alert. Patients in critical condition sometimes cannot press a call button, but even a whisper can be captured and transmitted as a signal.

A Monegasque energy company has been using AI for forecasting for almost 20 years! For Monaco, AI is not a novelty but a 21st-century reality — one that already coexists with daily life and will soon be indispensable to every service.

I believe that Monaco’s “AI laboratory” will help other countries introduce AI more effectively. We hardly notice how much we already live in a completely new world — a world where nothing is impossible. Humanity now faces new challenges.

In the past, mankind had to overcome hunger and meet basic human needs. Today, right beside us, an immense and phenomenal new sphere is unfolding — like a vast ocean, with sources of nourishment, transport routes, and paradise islands, but also storms, predators, and whirlpools. Humanity stands at the beginning of a great, fascinating voyage — and new AI laws, like pilots, must help us avoid running aground or crashing against dangerous reefs.

