The use of artificial intelligence (AI) in tax compliance does not fall outside the usual canons and principles of legal application and interpretation. Legality, proportionality and equality remain the foundations of every area of the law.
Taxpayers' privacy rights are as important as the state's right to pursue sources of revenue. AI has the power to tip the balance between the taxpayer's interest in shielding information and the government's interest in identifying it, in the name of fairness and equality, in order to levy taxes. The role of AI should be to provide a level playing field, leaving the legislature and the courts to determine where the balance lies in individual cases.
Tax secrecy is complicated, fluid and fact-driven, all at the same time. It concerns the disclosure of tax-related information by taxpayers to the state, and permitted access to the origin and destination of taxable transactions. Tax administrations must recognise the taxpayer's right to withhold information. They must also handle the information that is supplied without disseminating it more widely, and without demanding excessive and onerous disclosure from taxpayers.
Tax authorities often claim a legitimate right to ignore the tenets of privacy when investigating taxpayers and third parties acting on their behalf. AI is a powerful tool enabling authorities to access the increasing amounts of information that are generated by the equally increasing complexity of financial transactions. Government revenue departments may also wish to withhold knowledge of the use of machine learning and algorithms in addition to more conventional knowledge-based methods. Institutional secrecy is inextricably interlinked with tax enforcement, and an administration must be vigilant that one does not encroach on the other. The role of AI in preserving the tenets of tax secrecy is part of a wider consideration of the relationship between state and personal rights and freedoms. The onus is on an administration to make taxpayers aware of the involvement of algorithmic AI processes in increasingly data-driven court settings, particularly in developed jurisdictions.
The first successful attempt to overcome this challenge came in 2020, when the District Court of The Hague halted the use of the Systeem Risico Indicatie, or SyRI, a machine-learning risk-scoring system designed for the Dutch administration to predict and prevent cases of non-compliance and fraud, particularly in regard to social security. The plaintiffs argued that the use of the system contravened article 8 of the European Convention on Human Rights, which guarantees the right to private life. The court accepted the argument, holding that the balance between transparency and verifiability favoured the taxpayers.

The need for the proper use of AI systems by tax administrations has been emphasised not only by national courts but also by supranational tribunals. In Ligue des droits humains, the EU Court of Justice (CJEU) took just such an approach. The case concerned a directive requiring airlines to disclose passenger name records through automated systems. Various national administrations relied on the need for public security and safety. The CJEU nevertheless upheld the right to privacy under article 8 as well as the right to good administration under article 41 of the EU Charter of Fundamental Rights (EUCFR). The court held that there was a duty to disclose the legal grounds on which automated processing, and the consequent secrecy, was employed. It further held that the use of such technology may otherwise deprive data subjects of their right to an effective judicial remedy and a fair trial as set out in article 47 of the charter.
Reliance on similar arguments of state security, safety and effective governance in tax-enforcement cases seems to be gradually receding in the light of these decisions. Courts must acknowledge technological advancement and the fact that tax administrations will use AI to save time and money. Where tax authorities require inputs to be analysed by algorithms, machine learning and AI, this must be done in appropriate, proportionate and transparent ways. Courts must also recognise risk indicators and require administrations to appreciate and respect individual rights, proceeding on a case-by-case basis.
Mukesh Butani is the founder and managing partner, and Pranoy Goswami is a research associate at BMR Legal.