Workshops
CLAIRvoyant: ConventicLE on Artificial Intelligence Regulation
In the first quarter of 2024, the EU Parliament enacted the EU Artificial Intelligence Act (AI Act), a landmark piece of legislation regulating AI systems deployed in the European Union. In less than two years, therefore, it will be necessary to put compliance systems in place, despite the still-generic nature of the guidelines for achieving compliance.
The use of Artificial Intelligence (AI) systems in almost every field has become both ubiquitous and inevitable. The recent AI Act establishes guidelines for the use of AI in different fields and mandates certification of compliance to enter the EU market. The AI Act is purposely structured to regulate the use of AI systems, describing requirements, limits, and obligations. It does not refer to any single AI technology; instead, it takes a risk-based approach to AI regulation, providing a scale of risk associated with the use of AI in a given field. The higher the risk, the stricter the rules: the more an AI system has the potential or the capacity to cause harm to society, the more requirements it will have to comply with.
This framework brings to light many issues, not only about the creation and implementation of AI systems themselves, but also about the legal and ethical aspects of their use, especially in the so-called high-risk sectors (e.g. healthcare or justice). Is it possible to use AI in these fields, and if so, to what extent? What are the limits of its use in each field? What are the strengths and weaknesses of using it? Are these systems transparent, accurate, and trustworthy, and if so, under which assumptions?
The European Union was one of the first major jurisdictions to introduce comprehensive legislation for AI, and it is expected that many more jurisdictions will follow the EU's lead.
The IT market is global and transnational: a system may be developed in one country, deployed in another, and accessed from a third. In this trans-jurisdictional context, the AI Act will have an impact not only on the European market but also on the worldwide market. Other countries will regulate AI systems as well, so harmonization of these norms will be desirable for the market and for enterprises in the field, and the first strongly restrictive regulations adopted by some countries will inevitably influence subsequent regulations and the global economy.
Given the complexity of modern AI systems, it is likely that the solutions used to certify the compliance of AI tools will themselves be AI systems. Moreover, with the rapid growth of AI solutions, more AI tools will be used in the law and legal professions, where they can prove beneficial. The judicial and legal domains, however, are deemed sensitive and high-risk. Accordingly, which AI tools are beneficial to the legal domain? In which areas of the legal domain can AI tools be successful, and if they need to be certified against AI regulation, how can we certify these systems?
More information and the Call for Papers are available on the Workshop webpage.
AI4Legs-III: 3rd Workshop on AI for Legislation
Can the Law be written by GPT-4? Can members of parliament use ChatGPT to improve their knowledge of society's needs? Can the Law be converted into programming code using AI/ML and logic formulae without losing legal theory principles, legal linguistic expressivity, and constitutional principles? Can the digital format of Law be equally valid and considered a legitimate legal source, and under which conditions? Can a whole legal system, including its diachronic dimension, be managed digitally using knowledge graphs, Semantic Web techniques, legal ontologies, and logic theory? Can such a translation be made automatically executable using smart contracts and immediately enforceable? How can “Law as Code” be rendered to the common citizen in a simple, yet transparent and accountable manner? Can an explicit normative statement be expressed natively in code or in non-linguistic signs (e.g., icons)? Which principles are necessary so as not to curtail the Rule of Law and democratic principles? What new legal theory is needed for a deep digital transformation of the legislative process that produces a digital format of Law with an innovative generative and constitutive modality, instead of converting text (logos) into code? How can Law be produced in non-textual norms while preserving normativity? How can the legislative process be improved using AI/ML for better regulation?
This workshop aims to discuss these challenging questions with interdisciplinary instruments from the philosophy of law, constitutional law, legal informatics (including AI & Law), computational linguistics, computer science, HCI, and legal design. We also intend to discuss the state of the art of the most advanced applications of AI in support of better regulation and the law-making process, with the aim of finding answers to these questions.
More information and the Call for Papers will be available on the Workshop webpage.
AI4A2J: AI for Access to Justice
(AI4A2J ALLOWS ONLINE PARTICIPATION - PLEASE REGISTER VIA SUFFOLK UNIVERSITY'S ZOOM)
Over the last 2 years, we have seen an explosion of interest in applying artificial intelligence to help solve the biggest challenges in law. Yet many of the best tools remain out of reach of the practitioners on the front line of civil legal needs: legal aid workers and unrepresented litigants.
We invite legal technologists, researchers, and practitioners to join us in Brno, Czechia on December 11th for a full-day, hybrid workshop on innovations in AI for helping close the access to justice gap: the majority of legal problems that go unsolved around the world because potential litigants lack the time, money, or ability to participate in court processes to solve their problems.
Contributors should submit either:
- short papers (5-10 pages), or
- proposals for demos or interactive workshop exercises
We welcome works in progress, although, depending on interest, we will give preference to complete ideas that can be evaluated, shared, and discussed.
Submissions should focus on AI tools, datasets, and approaches, whether large language models, traditional machine learning, or rules-based systems, that solve the real-world problems of unrepresented litigants or legal aid programs. Papers discussing the ethical implications, limits, and policy considerations of AI in law are also welcome.
More information and the Call for Papers are available on the Workshop webpage.