A break for artificial intelligence

In late March, the Future of Life Institute published a much-discussed open letter calling for a six-month pause in the development of advanced AI systems. The pause is meant to create time for legal and ethical frameworks that could steer AI development, which currently appears difficult to control, onto more manageable paths. Beyond its more than 25,000 signatories, the letter also drew a broad response from the interested public. Just a week earlier, the U.S. company OpenAI had unveiled GPT-4, an advanced model that comes close to human cognitive and creative performance, further fueling public debate about the merits and drawbacks of AI.

Stable Diffusion: Robot takes a break in a pasture (AI generated)

The members of the Bavarian AI Council have also taken up the Future of Life Institute's open letter. Among the co-signers are Prof. Ute Schmid, Chair of Cognitive Systems at the University of Bamberg, and Prof. Eric Hilgendorf, Chair of Criminal Law and Legal Informatics at the Julius Maximilian University of Würzburg.
Both Council members see the letter as an important initiative for highlighting the risks associated with the use of large language models (LLMs). These range from uncertainty about erroneous information to fears of job losses. Calls for AI regulation are growing louder, “but it is also clear that the dangers can never be completely eliminated […] as with all technical systems, there will always be a residual risk,” Ute Schmid said.

So far, only the Italian government has responded with regulation at the European level. At the end of March 2023, OpenAI was ordered to restrict the processing of Italian users’ data until further notice, on the grounds that the data was not sufficiently protected against misuse. OpenAI now has until April 30 of this year to bring the AI system into line with the Italian government’s requirements.

Yet alongside the risks that led the Italian government to its de facto ban on ChatGPT, LLMs also offer significant benefits. As legal scholar Eric Hilgendorf points out, “One should (…) also not overlook the many positive opportunities that a program like ChatGPT offers, from the service sector to geriatric care to general and technical education.” Examples include medication management (e.g., reminders to take medication or information about drug interactions) and novel learning methods (e.g., help with writer’s block or the simplification of complex texts). A comprehensive weighing of these opportunities and challenges, however, has not yet taken place in professional and public discourse.

Important but little considered in the AI discourse: resources, human dignity, value systems, legal certainty.
Schmid and Hilgendorf see at least four aspects of current AI developments as insufficiently discussed so far:

  1. High energy consumption from large-scale training and use of AI systems. Given the scarcity of natural resources, Ute Schmid fears that in the future only global players will be able to finance new developments. Smaller organizations and companies would then quickly face tough cost-benefit decisions that could stifle innovation.
  2. Poorly paid human labor involved in training AI systems. One example: in January, Time magazine published research revealing low pay and psychological stress among employees of a Kenya-based service provider whose task was to instill ethical standards in ChatGPT. Schmid criticizes: “This is hardly ever talked about, and instead the impression is always given that models built with machine learning are built purely from the data collected.”
  3. The concentration of pioneering technical innovation in the USA and China. The resulting dependence on U.S. near-monopoly suppliers, especially in Europe, effectively makes U.S. values binding for and in Europe through the back door.
  4. Unresolved legal questions, “(…) such as an appropriate liability framework for damages resulting from erroneous statements by AI language programs,” which, according to Eric Hilgendorf, have not yet been clarified.

Bringing these and other criteria into a democratic exchange among science, business, and the general public will be a task for policy makers in the near future.

A European AI Language Model – Made in Bavaria?

A look at the key innovations in the field of LLMs shows that the USA and China dominate in international comparison. Owing to cultural proximity, U.S. value systems are currently being transferred to Europe along with the technology. Whether value-neutral results can be expected under these conditions is questionable. Here, Hilgendorf and Schmid believe, Europe could take the decisive step toward qualified, factual, and explainable AI. The key: a practice-oriented linking of ethical and legal debates with the development of AI technologies. Such a process could even become a model worldwide, as it would make biased AI decisions less likely overall.

Here, the Free State of Bavaria could also take on a pioneering role: with well over 800 AI scientists at Bavarian universities, research institutions, and companies, the baiosphere forms a nationally and internationally unique knowledge and implementation network for artificial intelligence. Leveraging this ecosystem could be the first step on the path toward the trustworthy AI developments of the future.