The improper use of algorithms in the workplace, and companies' refusal to disclose how their artificial intelligence tools work, have become common complaints in business management circles. Until now, the Labor Inspectorate had not focused on this problem, but the Second Vice President and Minister of Labor, Yolanda Díaz, announced this week an inspection campaign to step up surveillance of how large technology companies, such as Amazon, use these algorithms to monitor employees and organize their work rhythms. However, several experts consulted point to difficulties in imposing sanctions for these practices.
Last June, the Generalitat of Catalonia fined Amazon for refusing to reveal the workings of the algorithms used to measure the productivity of workers at its logistics center in Prat de Llobregat (Barcelona). Although the fine, at just over 2,400 euros, was little more than symbolic, it was significant as a pioneering sanction of a company's refusal to account for the details of its algorithmic management.
To date there are not many such decisions. But last July the National Court also condemned the call center company Foundever Spain for refusing to provide the CGT union delegates at the company with information on several algorithms it claimed not to use, but which the trial proved it did use. In this case, the Court found that the company violated article 64.4.d of the Workers' Statute, which requires providing information on the design and operation of algorithms.
The Ministry of Labor's intention is to increase surveillance and, where appropriate, sanctions relating to artificial intelligence in the workplace. But the agency's inspectors differ over their capacity to exercise this new control. The president of the Union of Labor and Social Security Inspectors, Ana Ercoreca, maintains that the legislation already allows the agency to impose sanctions. She adds that, beyond violations of the aforementioned article 64.4.d of the statute, the Inspectorate can sanction companies for improper use of algorithms through other types of breaches of labor regulations.
In fact, she explains that the Inspectorate has already opened several actions along these lines. These include a case in which a large multinational outsourced part of its delivery system to a subcontractor (which is permitted by law), but the Inspectorate's investigations found that it was the multinational's algorithms that were determining the subcontracted company's shifts, holidays and other work organization matters. "In that case the sanction is for the illegal transfer of workers, but the violation was carried out through algorithms," Ercoreca explains.
Likewise, she adds, there have been cases in which a hiring process is reported, inspectors find that no women of childbearing age were selected, and they determine that this is due to bias in the AI tool. In that case, the sanction for the company owning the tool would be the one provided for discrimination, a right regulated in general terms in article 4.2.c of the Workers' Statute and, specifically for employment relations, in article 17 of the same law.
Discriminatory pay practices have also been sanctioned, such as situations in which a platform's algorithm assigns a lower workload to part-time employees. "Instead of distributing tasks in proportion to contracted hours, the algorithm sends them fewer so that no one earns the productivity supplement," Ercoreca criticizes.
Other inspectors doubt that these irregularities can actually be tackled. The head of the CSIF union at the Labor Inspectorate, Miguel Ángel Montero, believes that inspectors "are almost in a black hole, because the platforms generate poverty and underemployment and make inspection activity extremely difficult." According to Montero, "the improper use of algorithms in the workplace is growing exponentially: at the moment it is mostly large companies that use them, but very soon they will reach small restaurants and workshops, and the Inspectorate is lagging behind in this."
This union official also laments the lack of specific training to interpret AI tools and spot how they are being misused, although he acknowledges that the Ministry of Labor is aware of this gap and is incorporating specific training on these algorithms into the official curriculum of the agency's school.
The lack of qualified professionals is also the main obstacle that Adrián Todolí, a labor law professor and expert in platform legislation, sees in the fight against corporate abuse of artificial intelligence. This academic agrees in part with both of the inspectors consulted. He maintains that Spanish legislation already allows the Inspectorate to sanction companies that violate the Workers' Statute, and also to act through the Data Protection Law, which spells out in more detail the requirements for the use of algorithms in companies. In those cases it falls to the Data Protection Agency to impose a sanction, but the Labor Inspectorate can, again tangentially, accuse the company of, for example, violating the principle of non-discrimination.
"If this inspection campaign is carried out, the most difficult part will be conducting the investigations. The main challenge for the agency will be having specialists in how algorithms work and in identifying the risks of artificial intelligence," says Todolí.
Audits
In parallel with the Inspectorate's actions, the head of Artificial Intelligence at the UGT union, José Varela, denounces that "companies systematically block information about the algorithms they use during collective bargaining, because they know they would not pass any checks." This complaint is tied to European legislation in the Artificial Intelligence Regulation (AI Act), the part of which already in force provides for audits of algorithmic applications in the work environment both before and after their implementation. The regulation also already expressly prohibits, for example, the use of emotion recognition systems based on biometric tools.
However, the jurist who served as deputy rapporteur of this European AI regulation, Iban García del Blanco (now international director at Lasker), does not see clearly how the widespread use of these tools can be combated through inspections. "It will be very complicated to plan inspections, and to what end; it would be like trying to control the deployment of robots, of robotic machines. It may even be counterproductive," he says. He also points out that the part of the regulation establishing the penalty system for non-compliant companies, as well as the distribution of powers (who will impose the sanctions), will not enter into force until August 2, 2026.
