
Asade Pourmand
Senior Lawyer
Stockholm
Newsletter
by Asade Pourmand and Michael Decker
Queen Mary University of London periodically conducts empirical surveys of the arbitration community, providing valuable insight into trends in international arbitration. The school's most recent international arbitration survey (the "Survey")[1], conducted, like the previous survey released in 2021, with the support of White & Case, indicates that the arbitration community is largely positive about the efficiency gains of AI, yet wary of the confidentiality, data-integrity, and due-process risks of a technology that has only entered practice in the past few years. The Survey also helps separate the tasks for which AI appears to be welcome, from research to document review, from those for which it is less so: decision-making and award drafting.
The Survey results indicate that usage is set to boom. The most common current application is factual and legal research, with 91% of the Survey respondents expecting to use AI for research and data analytics over the next five years. Document review and data analytics were next in line, with growing reliance predicted as the available tools mature. Respondents cited speed in handling large datasets, trend-spotting, and even practical gains in arbitrator due diligence as benefits, yet urged caution with open-source models where data handling cannot be controlled.[2]
Three drivers for using AI dominate according to the Survey:
Fully 21% of the Survey respondents also expect AI to enhance predictability in arbitration, and some see its potential to level the playing field for parties with more limited resources. In practice, many of the Survey respondents already deploy AI to generate chronologies, summarize witness statements and depositions, and manage document sets.[3]
Based on the Survey, it appears that the arbitration community distinguishes between procedural assistance and adjudicative judgment. A strong majority consider it appropriate for arbitrators to use AI to calculate damages/costs (77%), summarize submissions/evidence (66%), and help with procedural drafting (60%). However, there is strong resistance to drafting reasons (only 23% approval) and to AI assessing the merits or accuracy of submissions/evidence (31%).[4]
As to barriers and risk perception, the main obstacles appear to be the risk of undetected AI errors and bias (51%), confidentiality and data-breach risk (47%), lack of experience (44%), and regulatory gaps (38%). The Survey respondents also worry about challenges to awards where tribunals have used AI without adequate transparency (28%).[5]
These findings resonate with academic commentary. Jennifer Kirby has highlighted that AI can usefully improve the written phase of proceedings, where arbitrators often face submissions that are too long to digest efficiently. She points to AI’s potential to generate draft chronologies, consolidate issue lists, or even prepare mock pleadings, tools that would allow tribunals to focus on the truly dispositive points.[6]
Ana Fernández Araluce has similarly argued that AI, properly deployed, can sharpen case preparation, arbitrator selection and case management. She sees opportunities to reduce network-based biases in appointments and to push routine tasks (for example document review, transcript handling, translation) away from lawyers, leaving strategy, advocacy and judgment at the centre. At the same time, she warns of digital asymmetries if advanced tools are unequally available, and urges clear disclosure obligations: "[n]ot all parties can afford the high costs of AI systems, which may ultimately result in significant power imbalances and asymmetries where one party makes use of AI throughout the proceedings, and the other one does not."[7]
By contrast, Maxi Scherer has explored the deeper implications of AI for arbitral decision-making itself. She cautions that predictive models depend on large data sets that arbitration, with its largely confidential awards, does not generate in sufficient volume, that AI risks entrenching hidden biases, and that probabilistic inferences cannot deliver reasoned decisions of the kind required for legitimacy. For Scherer, reliance on AI to decide disputes would represent not only a technical but also a paradigmatic shift, making outcomes hinge on pattern-based likelihoods rather than reasons that can be articulated and justified.[8]
The conclusions of the Survey, and the views of the thought leaders discussed above, also harmonize with Schjødt's own practices, internal guidelines, and experience with AI tools. Schjødt has been a leader in the use of AI among law firms, testing and deploying several AI tools in various contexts, including factual and legal research, data analytics, and document review, among them various forms of due diligence. Our experience at Schjødt is that, provided AI tools are used with due consideration for confidential information, and with their propensity for serious flaws in their outputs kept in mind,[9] they can save significant time and money for our clients and, to an extent, also improve the quality of our work product.
The cautious consensus emerging from both the Survey and scholarship, as well as from our law firm, is clear: AI is welcomed as an efficiency tool, but not as a substitute for human arbitrators (or counsel). The community appears ready to embrace the faster research, streamlined drafting, and more consistent case management that AI makes available, but insists on preserving human reasoning, legitimacy and accountability at the core of decision-making and advising.
[1] Available at: White-Case-QMUL-2025-International-Arbitration-Survey-report.pdf
[2] Survey, pp. 26–29.
[3] Survey, pp. 28–29.
[4] Survey, pp. 30–31.
[5] Survey, p. 32.
[6] Kirby, "International Arbitration and Artificial Intelligence: Ideas to Improve the Written Phase of Arbitral Proceedings", in Maxi Scherer (ed), Journal of International Arbitration, pp. 657–666.
[7] Fernández Araluce, "AI in International Arbitration: Unveiling the Layers of Promise and Peril", in David Arias Lozano and Luis Capiel (eds), Iurgium, pp. 35–46.
[8] Scherer, "Artificial Intelligence in Arbitral Decision-Making: The New Enlightenment?", in Cavinder Bull, Loretta Malintoppi, et al. (eds), ICCA Congress Series No. 21 (Edinburgh 2022), pp. 683–694.
[9] Schjødt's internal guidelines for the use of AI (available at: Retningslinjer for bruk av kunstig intelligens.pdf) were published by the Norwegian "Advokatbladet" on 16 June 2025, in connection with an interview with Schjødt-partner Eva Jarbekk. The article is available at: Schjødt deler egne KI-retningslinjer: – Et så sterkt verktøy krever klare rammer.