AI STATEMENT

We’ve received a range of responses to our use of AI-generated imagery in The Invisible Doctrine. While some have praised the film’s aesthetic approach, others have voiced concern—and even contempt—at the choice to use AI in a work critiquing neoliberalism.

We hear you, and we welcome the conversation. After all, this film was created as a vehicle to challenge dominant ideologies, including those surrounding technology, labor, sustainability, rights, power and media. And if it's prompting serious dialogue, even disagreement, we consider the film to be having its intended impact.

We want to share where we stand, and why we made the choices we did:

1. On AI Imagery & Aesthetic Intent

Yes – the film uses AI-generated visuals, as our opening statement makes clear:

“Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us.” – Ted Chiang, science-fiction writer

The use of Artificial Intelligence in this film is an attempt to turn the tables, and employ technology against capitalism.

This was a conscious creative decision, however – not a shortcut. Thematically, we believe the aesthetic aligns with the character of neoliberalism itself – that of a nameless, placeless, synthetic ideology. It operates hidden and embedded within the abstract, inhuman systems that govern us – impenetrable legislation, computer code, exotic financial instruments, algorithms, backroom deals, trade agreements, tax law, spreadsheets (and yes, now AI).

The eerie, ugly, soulless and otherworldly detachment of AI imagery is intended as a mirror to this experience (particularly as generated by the long-outdated first iterations of these platforms used during the film's 2023 production). We wanted viewers to feel this disorientation – for us, a critical part of any storytelling that moves beyond mere information delivery. Our use of AI is meant to be subversive – conceptually as well as aesthetically. The critique that this choice somehow “undermines” the film fails to consider the visual intent: to disrupt visual expectations, create discomfort and friction, and ask the viewer to pay attention.

2. Sustainability, Resources & Moralism

We live in an increasingly complex world, where the simplicity of pure orthodoxies no longer holds. AI is already ubiquitous. Even if you're not a direct ChatGPT user – if you use a smartphone, apps, email, or shop online – you're likely using AI.

It's not lost on us that AI poses grave sustainability concerns [1][2] – but it's also worth noting that carbon footprint has always been baked in as a consideration in our production approach. In fact, we chose not to fly to the UK to collaborate and film with George – this was all done virtually, thanks to technologies now available (George, in fact, once famously declined a prestigious award in Italy, as he couldn't justify the carbon footprint of flying to accept it in person, as required).

Michael Moore's 2009 film CAPITALISM: A LOVE STORY has been cited as a film telling a similar story to THE INVISIBLE DOCTRINE without the use of AI – something we feel there's value in presenting here as a case study of sorts. That film – produced long before the advent of today's emerging technologies – cost $20 million ($30 million in today's money).

By contrast, THE INVISIBLE DOCTRINE was made using extremely limited resources, in a media environment that offers less and less support for independent political nonfiction (in fact, our film was 100% crowdfunded, in part to ensure creative and narrative independence). Our entire budget was roughly 1/300th (yes – 1/300th) of what Michael Moore spent on CAPITALISM: A LOVE STORY – with 1/300th of the carbon footprint (again, even our transatlantic interview with George Monbiot was filmed remotely with this consideration in mind).

If one follows a logic where artistic tools must be judged by environmental or ideological purity, then CAPITALISM: A LOVE STORY – with its massive carbon footprint and corporate distribution deals – would be considered a far more harmful cultural product. 

3. Intellectual Property 

AI presents significant challenges with regard to IP, copyright and ownership, stemming from questions around how AI is trained on vast datasets (literally millions of images sourced from the internet), and then “absorbs and rearranges” referenced work. [3] Many users, however, argue that the process of AI generation differs little from how all artists ultimately learn through exposure, study and emulation – that no artwork exists in a vacuum, that all art is derivative and iterative in nature, and therefore never truly “original”. [4]

As creatives, we share these well-founded concerns around IP – and are encouraged by current lawsuits that seek to create appropriate guardrails around AI dataset origin, permissions and compensation. Certain AI image generation platforms also allow for fine-tuning of inputs and outputs – allowing the user to train AI on public domain and open-source datasets. While not used exclusively, we employed these platforms whenever possible.

It was precisely because of our AI use, however, that we were able to keep overhead low and pay real artists well. Four incredibly talented artists (including our remarkable composer) were employed and paid roughly 40% of our entire budget (a percentage virtually unheard of on an independent film). Furthermore, in cases where AI was used to create imagery, this team would then work with those images to develop something wholly unique – applying 2D or 3D animation, or creating composites.

Ultimately, AI was a tool that enabled us to prioritize our resources and redirect funding to humans – not replace them. It's a cogent reminder that this kind of aesthetic moralism can – at times – be a trap, and play into the very system the film critiques.

4. Neoliberalism Thrives on Surface-Level Discourse

This is perhaps the most important point: neoliberalism wants us debating process instead of purpose. It thrives when we ask: “Was this film produced using a morally pure pipeline?” Instead of: “What does this film say about the world we live in?”

That shift – from substance to form – is not accidental. It's how power defends itself now: not by censorship, but by distraction. We would be wise to consider how our own critiques, however well-intentioned, may participate in that distraction.

5. We Took Risks, and We Stand by Them

Making a film about neoliberalism – one that speaks plainly, without jargon, and reaches people across political and generational divides – is a difficult task. Doing it on a shoestring budget – during a time of collapsing support for independent filmmakers – is even harder. We took calculated risks, including the well-considered use of AI. 

We did so not to provoke, but to broaden the reach of the film and its important message – and to challenge the visual and structural status quo. And if those risks are creating dialogue, critique, or even discomfort – that's fine. That's the work. That's our contribution to these all-important conversations. It's our hope, however, that this dialogue will remain largely focused upon the crucial intent of the film: the desire to expose power, to tell the truth, and to imagine a better future.

6. Real-World Censorship: Banned by Amazon

Amazon rejected THE INVISIBLE DOCTRINE from inclusion on its streaming platform – almost certainly a result of the film’s overt criticism of Jeff Bezos and Amazon’s role in advancing privatization and neoliberal consolidation (our distributor was dumbfounded, stating that they’ve “almost never” had this happen in their decades of working with Amazon). 

This silencing of dissenting viewpoints is telling. When the systems we critique also control the platforms through which our work is distributed, the stakes are no longer theoretical. We encourage viewers to fully appreciate the urgency of what is at stake – we live at a critical juncture in human history. Our film has a crucial message to impart; let's not lose sight of the forest for the trees.

AI is not the future we chose—it’s the one we’re navigating.

Our use of AI in The Invisible Doctrine is not meant to signal its embrace, but to subvert, question and critique.

We remain committed to transparency, dialogue and challenging the systems that shape our world.

In solidarity,
Peter Hutchison & Lucas Sabean

1. Generative AI has a clean-energy problem (The Economist)
2. Musk xAI accused of pollution over Memphis supercomputer (Guardian)
3. Does generative artificial intelligence infringe copyright? (The Economist)
4. AI art: The end of creativity or the start of a new movement? (BBC)