Apple has found itself in difficulty with the EU following a product launch on September 9. Alongside the launch of the iPhone 17 and the all-new iPhone Air, the company launched its latest earphones, the AirPods Pro 3.
One of the main advertised features is live translation, powered by Apple Intelligence, which translates languages into your ear in real time as the other person speaks. This allows real-time conversational translation and marks a step forward both for AI and for how we understand inter-language communication. Deliveries begin on September 19; however, Apple users in the EU will be unable to access this feature due to a combination of EU regulations that place restrictions on AI usage in the region.
The EU has not commented on the new feature; rather, it is Apple that has decided that users with an Apple account based in the EU will not be able to access live translation. The company considers compliance with the region's strict rules too difficult and has instead opted to withhold the feature altogether. The core of the issue comes down to two EU laws: the Digital Markets Act (DMA) and the EU AI Act.
Article 6 of the DMA requires Apple to provide developers with interoperability with its hardware and software, to mitigate the possibility of a monopoly forming over certain technological advances. For the new live translation feature, this means Apple would have to open up the basis of how it works to other developers; rather than doing so, Apple has forgone the feature entirely. The EU AI Act also contains data protection provisions that may clash with the data processing required for live translation to work.
These frictions with new Apple products raise questions about how governments can effectively regulate new technological innovations, nurturing growth and investment without compromising safety or human well-being. The EU is a leading player in global AI regulation, and the EU AI Act was the first comprehensive legal framework of its kind in the world. It marks an important step forward globally to “foster trustworthy AI in Europe”. The EU does not want the Act to be viewed as an anti-AI agenda; rather, it is a plan to “guarantee safety, fundamental rights and human-centric AI” in order to “strengthen uptake [and] investment … in AI across Europe”. One Union student, a resident of the EU, said that she knows that “EU regulations are often stricter than in the US”, but that there is usually a “good reason for this”.
There is a delicate balancing act between promoting technological advances and protecting citizens from their potential negative impacts on jobs, well-being, and human safety. The EU has begun to draw the lines, and other countries such as the UK and the US are following, each with its own attitude towards the role AI should play in our modern world. It is difficult to say whether governments will protect economic growth and efficiency even at the cost of human well-being, whether they will prioritise jobs and regulation over innovation, or whether they will take a view similar to the EU's and attempt to tie the two together, embracing the possibility that AI innovation can support and uphold human well-being so that the two work alongside each other. This is a dilemma that will dominate the next few years of technology policy, and what it means for a market economy that demands continual growth remains to be seen.