The long-awaited proposal for EU legislation on artificial intelligence, unveiled a few months ago, has been applauded by most experts. It was the natural step after the implementation of the General Data Protection Regulation (GDPR).
“We have lived through a period of ‘anything goes’, in which we have built artificial intelligence systems simply because we were able to build them. This legislation seeks to limit that approach,” points out Josep Curto, professor at the UOC’s Computer Science, Multimedia and Telecommunications Studies. But the proposed regulation has also opened a debate about what the legislation does not regulate. In the opinion of professionals such as Sergio de Juan-Creix, collaborating professor of Law on the UOC degree in Communication, one of the most neglected fields is that of neuro-rights, the area concerning our mental privacy and identity.
“Artificial intelligence is the tool needed to predict your behaviour and, on that basis, offer you products or services according to who you are or what your mood is, anticipating it or shaping it to the bidder’s taste. This goes beyond privacy, or even intimacy, because to do so they need a certain predictive control over your mind,” says De Juan-Creix, an expert in digital law at the Croma Legal firm. De Juan-Creix recalls that this intrusion can translate into large-scale manipulation, with a direct impact on our decisions and neuro-rights, a completely uncharted field that remains unregulated in this legislation.
Josep Curto takes the same line. Although it is very difficult to define how far the regulations should go, “it is surprising that areas that affect consumers, and that are linked to filter bubbles, fake news or social networks, are not considered,” he warns. He also notes the omission, on the technical side, of any recommendation of privacy-preserving approaches, the systems known as privacy-preserving machine learning.
Other aspects that could prove controversial if they remain unregulated concern the use of artificial intelligence in intellectual creations. As De Juan-Creix explains, one of the requirements for intellectual property is originality, “in the sense of novelty and of being an original human creation. But if it is the creation of a robot, is there intellectual property? And, if so, whose is it? The owner of the robot’s? The artificial intelligence programmer’s?” he wonders. Likewise, the UOC collaborating professor believes that the issue of civil liability is not well defined. “Who is responsible in the event of a fault? The one who creates? The one who integrates? The one who maintains? The one who controls? All of them? With the same level of responsibility? How is that responsibility to be distributed? And, above all, how is this going to be made understandable to the consumer?” he asks, noting that these questions will require a long process of negotiation, implementation and, finally, assimilation.
The two experts recall that the proposal is still in the approval phase and that its final wording may change significantly, and might even address all these blank spaces in the future. If it did, there would be another point worth reviewing, in their opinion: how companies would put compliance with the regulations into practice. “The fabric of companies in Europe is broad, and the ways in which they are going to consume artificial intelligence will be very different, so it will be very easy to make mistakes. I miss the recommendation of a role similar to that of the data protection officer (DPO) in the General Data Protection Regulation (GDPR),” says Josep Curto. “This role can be a CDO (chief data officer), a CAO (chief analytics officer) or an equivalent, and it must collaborate with the internal audit department and the legal department, as well as sit on the data governance committee,” he adds.
Equally, the planned fines could be revised, since those indicated in the proposal (up to twenty million euros or 4% of annual turnover) are similar to those set out in the GDPR, “and as we have seen in recent years, many companies have received sanctions related to the processing and protection of customer data, and even so some of them continue the same practices, since they operate in multiple countries at once and look for stratagems to circumvent compliance,” points out Josep Curto. In his experience, one possible solution would be coordinated action between countries with respect to the companies under investigation, so that the accumulated sanctions would be truly effective.
Another option would be to increase the amount of the fines. “It must be borne in mind that artificial intelligence can not only invade privacy but also, for example, bring a health centre to a standstill. And in this field, as in the military sphere or in matters of population control, twenty million euros seems to me a ridiculous sanction, as does 4% of turnover. If sanctions do not scare Facebook because it can pay them, there should be a separate tier, with a higher and genuinely dissuasive amount, for mega-companies like these which, by controlling artificial intelligence systems, can further consolidate their power and our technological dependence,” Sergio de Juan-Creix points out.