In late March, the Future of Life Institute published a much-discussed open letter calling for a six-month pause in the development of advanced AI systems. The pause is meant to create time for legal and ethical frameworks that can steer AI advances, which currently seem difficult to control, onto more manageable paths. The letter attracted more than 25,000 signatories as well as a broad response from the interested public. Just a week earlier, the U.S. company OpenAI had unveiled GPT-4, a model whose output closely resembles human cognitive and creative performance, further fueling the public debate about the merits and drawbacks of AI.
The members of the Bavarian AI Council have also addressed the Future of Life Institute’s open letter. Two of its co-signers are Prof. Ute Schmid, Chair of Cognitive Systems at the University of Bamberg, and Prof. Eric Hilgendorf, Chair of Criminal Law and Legal Informatics at the Julius Maximilian University of Würzburg.
Both Council members see the letter as an important initiative to highlight the risks associated with the use of large language models (LLMs), ranging from uncertainty about erroneous information to fears of job losses. Calls for AI regulation are growing louder. “But it is also clear that the dangers can never be completely eliminated […] as with all technical systems, there will always be a residual risk,” Ute Schmid said.
So far, only the Italian government has responded with regulation at the European level. At the end of March 2023, it ordered OpenAI to restrict the processing of Italian users’ data until further notice, on the grounds that the data was not sufficiently protected against misuse. OpenAI has until April 30 of this year to bring the AI system into line with the Italian government’s requirements.
In addition to the risks that led the Italian government to its de facto ban on ChatGPT, however, LLMs also offer significant benefits. As legal scholar Eric Hilgendorf points out, “One should (…) also not overlook the many positive opportunities that a program like ChatGPT offers, from the service sector to geriatric care to general and technical education.” Examples include medication management (e.g., reminders to take medication or information about drug interactions) and novel learning methods (e.g., help with writer’s block or the simplification of complex texts). A comprehensive assessment of these opportunities and challenges in professional and public discourse is, however, still lacking.
Important but little considered in AI discourse: resources, human dignity, value systems, legal certainty.
Schmid and Hilgendorf see at least four aspects of AI development as insufficiently discussed so far:
A look at the key innovations in the field of LLMs shows that the USA and China dominate in international comparison. Owing to cultural proximity to the U.S., American value systems are currently also being transferred to Europe. Whether value-neutral results can be expected under these conditions is questionable. Here, Europe could take the decisive step toward qualified, factual and explainable AI, Hilgendorf and Schmid believe. The key: a practice-oriented linking of ethical and legal debate with the development of AI technologies. Such a process could even become a model worldwide, since it would make biased AI decisions less likely overall.
The Free State of Bavaria could also take on a pioneering role here: with well over 800 AI scientists at Bavarian universities, research institutions and companies, the baiosphere forms a nationally and internationally unique knowledge and implementation network for artificial intelligence. Leveraging this ecosystem could be the first step toward the trustworthy AI developments of the future.