
Welcome Address @ re shape forum 2025

Maren Schmohl during welcome address



A weird member of the team

Welcome Address to the re shape Forum by Maren Schmohl, Rector of Hochschule für Gestaltung Schwäbisch Gmünd

In her thought-provoking welcome address, Maren Schmohl set the stage for the re shape Forum 2025 by reflecting on the curious and sometimes unsettling relationship we have with AI today. From navigating the post-hype reality of generative tools to questioning human exceptionalism, she invited us to see AI not as a perfect machine or a looming threat, but as a “weird member of the team”: brilliant, unpredictable, and increasingly present at our creative tables.

Schmohl draws parallels between AI and other humbling historical shifts, from Copernicus to Freud, reminding us that losing control may not be a failure, but a moment of intellectual honesty. In an age of accelerating complexity, she urges design schools and universities to embrace not answers, but uncertainty—to teach not just skills, but the humility and imagination needed to ask better questions.

This opening set the tone for two days of critical, creative, and courageous dialogue on how we reshape AI’s role in art, design, and society.


“Good morning, and welcome, intelligent entities:

Students, colleagues, guests and clever machines – welcome to the AI Reshape Forum, the second forum of the KITeGG project at HfG Schwäbisch Gmünd and the seventh iteration of this format overall.

As most of you know, KITeGG is a five-year research project in which several German schools of design collaborate to explore ways of integrating AI into art and design education and curricula.

The forum brings together the KITeGG research team as well as esteemed international guests to explore AI’s evolving role in creativity, interaction, and society. The forum’s aim is to question dominant AI narratives, rethink design beyond human-centered approaches, and imagine regenerative futures where technology coexists with a more-than-human world. So we have an ambitious and exciting agenda and a lot to look forward to today and tomorrow.

My name is Maren Schmohl. I am the rector of HfG, and I am very happy to have been given the opportunity to greet you and share some of my thoughts while you finish your coffee and mentally gear up for the expert talks that will start in a couple of minutes.

I feel that we are at a curious moment right now concerning our experience with AI.

It seems that the first stunned, then feverish excitement surrounding the release of ChatGPT in late 2022 has subsided a little, and the hype cycle of public imagination may even have passed a first peak.

AI as a novelty has lost a little of its otherworldly glow and sparkle: we don’t perk up in excitement (or fear) every time a new development is announced. And we don’t quite expect AI to solve all of humanity’s problems within the next few hours (or bring about the end of the world), depending on whether you are more on the apocalyptic or the integrated end of the spectrum, to use Umberto Eco’s famous dichotomy.

And so, while we have gotten a little used to and habituated to AI, I for one still have the feeling that we are standing in the middle of an eight-lane autobahn with objects zipping in all directions at 200 mph while we are trying to steer a bicycle: we haven’t been hit by a big truck yet, but it may happen any minute. A feeling of ominous uncertainty is still there.

Many have said that AI presents a further “insult” to humanity in that it throws into question some dearly held beliefs about ourselves. I agree. The three big insults that are referred to in this context are of course:

A cosmological one, when Copernicus and Galileo recognized that the earth is not the center of the universe but only one of a handful of planets that orbit the sun – not the other way around. We experienced a biological demotion when Darwin discovered that humans are not designed by God as the crown of creation but are descended from animals like all other species. And thirdly, a psychological insult, when Freud theorized that our rational mind, our free will, is in battle with, or even determined by, the subconscious and other psychic forces over which we don’t have much control.

We as modern people shrug this off; we laugh a little at the silly people of prior ages who were shocked by what to us seem like banalities. But now we are faced with an identity crisis of our own: a technological one that once again throws the idea of what humans are for a loop and deals a serious blow to the notion – still deep-seated – that we are different, special, better than other species or forces with which we share the earth. What is AI, what are we vis-à-vis AI, what is AI vis-à-vis us: these questions face us as students, teachers and researchers, but also as humans.

For a long time we have found it easy to set ourselves apart from, and in opposition to, ever more sophisticated machines and technology. We defined areas of exclusive expertise or natural traits that surely only we as humans had: machines are dumb, we said, we are “intelligent”. Machines are cold, we are empathetic. Machines are just crunching numbers or auto-completing sentences, we are creative. And finally: if machines are perfect – well, then we are fallible, imperfect, thus lovable, thus human. We have moved the goalposts to make sure we’re always winning.

If we play the game like this, we’ve run out of space. These little islands of exceptionalism dissolve underneath our feet like sand, and I don’t believe in any of them anymore. Machines, certainly AI, can beat us in all these categories. Is that a problem? Maybe not. But it is certainly profoundly unsettling.

New analogies arise. Dealing with AI is like being the driver of a self-driving car: we simply let AI do the tedious work of navigating traffic while we have better things to do. Yet we still must learn how to drive, so that we can take over should it be required.

Or: when using AI tools in our process we are like the conductor of an orchestra – someone who combines and ‘orchestrates’ the specific properties and talents of a host of tools and devices and decides what to use when and how.

In his typically understated way, my colleague Prof. Hartmut Bohnacker recently said that he sees the challenge of working with AI as “finding interesting jobs for it”. This deceptively simple statement really sums up a lot of experience with AI. In this image, AI has become what I would call “a weird member of the team”. Brilliant in one second, dumb as a brick in the next, but a team member nonetheless – it sits at the table with us; it’s not just a machine in the corner.

All of these images grapple with the idea of control. Who is in charge, who decides, who is the boss – or at least the author? They circle the question of what kind of skills are needed and what qualities we need to develop in order to work successfully with AI. This, of course, is the perennial question for us as universities. We are talking about skills of application and skills of curation, certainly, but also good old-fashioned expertise, standards, and aesthetic and even moral values by which to judge whether an outcome is helpful, innovative and fulfils our demands.

A conductor produces wonderful music not because she can play every instrument herself, but because she knows how the music should sound. She knows exactly what she wants to achieve and bases her decisions about when to silence the violins or when to speed up the drums on her expertise and vision.

This is a nice image – even if I don’t fully believe it.

I don’t believe that we are fully in control concerning AI. And I admit that I do feel a bit shocked and insulted by this notion when I let it hit me. But then: loss of control and supremacy does seem to be the defining headline of our age. We cannot control nature. Or the outcomes of capitalism. Or the predicted march of humankind towards progress and enlightenment. We are learning to be humble vis-à-vis forces that we thought we had conquered and well under control. AI may be just a weird, imperfect member of the team – but so are we.

To be humble, to have been humbled, is not a comfortable place to be, but a fitting one for us as teachers, students, explorers and experimenters, because it’s a good place to ask questions from.

So despite the feeling that during the last years we have gained firmer footing in handling and working with AI, and despite the enormous pressure that is put upon us to understand, control, put to use and make money with AI, let’s not presume that our questions have been answered or will be answered, or that this is even the point. Let’s not set up another rigged competition. As always, I say our job as schools and universities is not to produce certainty but to embrace uncertainty. Not to perpetuate the illusion of control, but to develop the skills to deal with question marks, openness and standing firm on an ever-shifting ground of sand.

I want to thank Benedict Groß, Jordi Tost, Felix Sewing, Moritz Hartstang, Rahel Flechtner, Christopher Pietsch and the whole team here at HfG and our partners for organizing this wonderful event and I wish you all a great forum.

Thank you!”


Picture: Stefan Eigner

