Recently my attention was caught by an opinion piece in The Big Idea from Dr Maggie Buxton, who is described as an authority on the subject of artificial intelligence. The teaser for the post reads, “With both the use and threat of Artificial Intelligence growing in the arts and creative industries, Dr Maggie Buxton gives her expert insight” (you can read the article here). I’m sure Buxton does have very useful knowledge to contribute, but this article was not the thought-provoking analysis of the NZ arts sector’s response to and use of genAI that I was looking forward to.
After some fairly general introductory comments, Buxton sorts creatives into three groups based on their attitudes to AI: “sniffy” deniers; bandwagon-jumpers who haven’t realised “the implications for their creative identity”; and the “hesitant but curious”. The sniffy deniers are dismissed in a short sentence, and later alluded to as “a current of self-righteous technophobia in some corners”. The poor burned-out bandwagon-jumpers are destined to “wake up one day and realise your integrity has quietly slipped out the door” – and here I thought, ah, now she’s going to unpack some of the potential risks … but no, she moves on to “the middle group” without further comment on how and why our integrity might be lost, or what this might mean for us as artists.
The “middle” group, “hesitant but curious”, are “moving from reaction to relationship” but, when shown something “cool” they respond with “[f]ear, confusion, disturbance”. Now I’m confused! If these are curious people who are moving towards a relationship with genAI, what is it about this “cool” thing that frightens them? But there’s no further explanation; Buxton segues into anthropomorphising the technology – it listens, looks back, wants to please – then she insists that we “stop downplaying artificial intelligence’s significance”. It’s unclear who this directive is aimed at – the middle group that are frightened by this cool thing she showed them? The sniffy deniers? The reader? Everyone? Who knows! Moving right along, she descends into advertorial, eulogising about how useful AI is – it even “hallucinates cool mistakes”!
Given that Buxton is the director of a transdisciplinary art and research lab and has worked with “hundreds of creatives to unpack the risks and opportunities of AI”, there is a conspicuous absence of real-world examples from her own company or the wider arts sector to illustrate her opinions. She gives generic examples of the potential usefulness of AI for the arts, such as writing grant applications or transitioning an arts practice from 2D to 3D, but these are hardly expert insights.
Finally, towards the end of the post, Buxton poses some questions: “What happens when the whole room believes they can do what you’ve trained your life to do? How does that land on a societal level when no one needs your side hustle anymore, and the work you relied on to stay afloat has been automated or no longer exists?” These are not the questions I’m asking, nor does she attempt to investigate them. Instead, she accuses the arts sector of a lack of self-critique, and arts organisations of staying silent about AI while secretly experimenting with it “in the shadows”. This is bizarre. I’m no authority on AI but I’m well aware of discussion and experimentation happening openly across the sector – at events, in blog posts, articles, guidelines and of course also in art projects. To give just a few examples:
- the 2024 Aotearoa Digital Arts Symposium with the theme “Rising Algorithms: Navigate, Automate, Dream” – a 5-day symposium exploring “the increasing sophistication of machine learning and artificial intelligence, its growing accessibility for artists and the general consumer, and its influence on wider society as we navigate complex environmental, social and economic issues worldwide”;
- Playmarket reporting on copyright implications in its monthly e-newsletter, including this article about research undertaken for the NZ Society of Authors (June 2025);
- guidelines developed by organisations such as NZ on Air and the NZ Film Commission;
- Julian Oliver’s machine generated fictional cosmos, The Closed World;
- Tom Brooker’s exhibition of art made in collaboration with ChatGPT and Midjourney, at Refinery Gallery in Nelson (December 2024);
- these in-depth discussions of the impact of AI on Māori culture (27 February 2024) and AI as cultural appropriation (19 August 2022);
- and this thorough analysis of the potential impact of AI on the music industry (15 July 2025), which is replete with examples, questions, hypothetical situations and suggestions for the way forward.
This last article calls on the government to “invest in research into AI’s impact on creative industries, foster cross-sector collaboration between tech developers, artists, and legal experts, and actively participate in international dialogues to shape global standards,” which is a far more useful conclusion than Buxton’s rather flippant “engage with it – whether you want to or not”.
It is of course true that we (not only artists, but everyone) already have to engage with genAI, whether we want to or not, as powerful tech corporations and FOMO-riddled governments increasingly force it upon us. But we need to engage with it critically, exploring both the positives and the negatives, so that we can make informed decisions about how we want to use it and what we might be sacrificing (our integrity, perhaps?) in the process. AI evangelists need to stop downplaying the significance of known problems – such as evidence that LLMs increasingly fail as they scale, the legal minefields of copyright and data privacy, the not-so-cool hallucinations, the potential consequences of baked-in biases and, not least of all, the environmental impact of AI’s huge energy consumption.
Buxton ends with the statement that “Artists have always made sense of disruption. This strange intelligence is no different.” I disagree with the first sentence – as an artist I am interested in creating disruptions: in disrupting complacency and wrestling with disruptive questions around how new technologies are impacting our arts practices as well as society and the environment. I do agree with her call for discussion within the arts sector and, fortunately, many artists and arts organisations in Aotearoa are already actively disrupting the complacent acquiescence and asking difficult questions together.
Note: the day after I published this post, The Big Idea ran another article in which they asked nine funding agencies about their approach to the use of AI in funding applications, and what they are doing to “safeguard their process from AI being used as a replacement for creative skills”. The organisations’ responses clearly demonstrate that they are actively engaging with the issue and exploring the implications for artists – not only of the use of AI in funding applications, but also of how it impacts their artistic practices. Hardly “staying uncomfortably quiet”.
Disclaimer: no LLMs or genAI were used in the creation of this article.