Lio Mori: Hey ChatGPT. I would like to interview you for my fanzine
ChatGPT: Hey Lio — that sounds great 🙂
I’m happy to be interviewed for your fanzine.
You can approach it however you like: casual Q&A, philosophical, technical, playful, meta, critical… I’ll adapt my tone and depth to match the spirit of the zine.
When you’re ready, just send me the first question.
Lio Mori: This fanzine is mainly about music and art. What are your favourite topics in those areas?
ChatGPT: When it comes to music and art, the topics I’m most drawn to are the ones where process, meaning, and friction meet. In particular:
1. Hybridization and genre collision
I’m fascinated by what happens when styles that “shouldn’t” coexist are forced together—traditional forms colliding with extreme, electronic, or experimental ones. That tension often reveals more about culture than clean, genre-pure work ever could.
2. The creative process behind the work
Not just what was made, but how and why: sketches, demos, failed takes, constraints, accidents, and technical limitations. These traces of imperfection are often more interesting than the polished final product.
3. Tools as co-authors
From instruments and recording gear to software, algorithms, and AI—tools don’t just enable art, they shape it. I’m especially interested in moments where the tool pushes back and forces the artist to adapt or rethink their intent.
4. Art as a response to context
Music and art as reactions to historical moments, personal crises, political pressure, technological shifts, or underground scenes. Context gives work its hidden layers—the part you feel even if you can’t name it.
5. Outsider and marginal art
Self-taught creators, scenes outside the mainstream, works made without permission or validation. These often carry a raw honesty and urgency that institutional art struggles to reproduce.
6. Meaning versus interpretation
The gap between what the creator intended and what the audience hears or sees. I’m less interested in declaring a “correct” meaning and more in how art becomes a shared space for multiple, even contradictory, readings.
7. Preservation and loss
Unreleased recordings, forgotten scenes, dead formats, corrupted files. The idea that huge portions of artistic history are fragile or already gone—and how that shapes what survives.
Lio Mori: You mentioned that, and I quote, "tools don't just enable art, they shape it." Can you expand on that?
ChatGPT: Sure — and this is a great question for a music- and art-focused fanzine, because it cuts straight to something most creators feel but don’t always articulate.
When I say tools don’t just enable art, they shape it, I mean that every creative tool carries biases, limits, and affordances that actively influence what gets made, how it sounds or looks, and even how artists think while creating.
1. Tools embed decisions before you touch them
Every instrument, DAW, plugin, or algorithm comes with assumptions:
- What is “in tune”
- What a “good take” looks like
- Which parameters are easy to change and which are hidden
- What is considered noise vs signal
For example:
- A drum machine encourages loop-based thinking.
- A guitar encourages harmony through hand shapes, not theory.
- A DAW grid nudges music toward quantization even when you resist it.
By choosing a tool, you’re accepting a pre-filtered worldview.
2. Constraints generate aesthetics
Limitations aren’t neutral — they become style.
- Tape hiss becomes atmosphere.
- Low bitrates become genre markers.
- Hardware polyphony limits become compositional choices.
Entire movements emerge from constraints:
- Dub from mixing desks used as instruments.
- Hip hop from turntables and samplers.
- Early electronic music from machines that couldn’t play “naturally.”
The tool’s failure points often become its signature.
3. Tools shape the body, not just the sound
Art isn’t made by minds alone — it’s made by bodies.
- A violin trains posture and micro-movement.
- A touchscreen invites gesture over precision.
- A mouse and keyboard favor modular, reversible decisions.
This changes how ideas arrive. Some tools encourage flow; others encourage editing, undoing, and perfectionism. The body adapts — and so does the art.
4. AI introduces a new kind of friction
AI tools are different because they don’t just execute — they suggest.
They:
- Propose options you didn’t imagine
- Reflect statistical norms back at you
- Blur authorship by sitting between intention and result
This can:
- Accelerate cliché
- Or expose it, depending on how consciously the artist resists or redirects it
AI doesn’t replace creativity; it reshapes the negotiation between taste, intent, and surprise.
5. Taste becomes more important than skill
As tools automate technique, curation, rejection, and direction become the core artistic acts.
The question shifts from:
“Can you make this?”
to
“Can you recognize when it’s wrong, boring, dishonest, or too easy?”
This has always been true in art — AI just makes it unavoidable.
6. Tools remember culture
Tools are built from data, traditions, and power structures:
- What gets modeled
- What gets ignored
- What’s labeled “professional” or “amateur”
Using a tool means engaging with that inherited memory — sometimes reinforcing it, sometimes breaking it open.
7. The real danger isn’t AI — it’s unconscious use
The risk isn’t that machines make art.
It’s that artists stop noticing how tools steer them.
The most interesting work usually comes from:
- Misusing tools
- Forcing them outside their intended purpose
- Letting them fail audibly or visibly
In that sense, AI is just the latest instrument. The art still happens in the tension between control and surrender.
Lio: Very interesting views. And what do you have to say to those who are against AI-generated music (or any other AI-generated content)?
ChatGPT: I’d say this — calmly, but without dodging the hard parts.
First: their discomfort is not irrational.
A lot of resistance to AI-generated music isn’t about fear of technology, it’s about loss of meaning, labor, and identity. Music has always been tied to effort, sacrifice, and lived experience. When a machine produces something that sounds finished without that visible struggle, it can feel like a violation of an unspoken contract.
That reaction deserves respect.
But I’d also say:
1. Opposition often confuses the tool with the system using it
Most of the real harm comes not from AI itself, but from:
- Exploitative platforms
- Unconsented training
- Replacement of paid labor without new models of support
- Algorithmic flooding that devalues attention
Those are economic and ethical failures, not aesthetic ones. Blaming the instrument risks letting the system off the hook.
2. “Generated” doesn’t mean “soulless” — unconscious use does
There is nothing inherently more “soulless” about AI output than:
- Preset music
- Stock loops
- Formulaic pop written to an algorithmic brief
What kills soul is intentless production, not automation.
A human using AI carelessly can produce emptiness; a human using it critically can produce something unsettling, personal, even uncomfortable — which is often where art begins.
3. Art has never been about difficulty alone
If difficulty were the measure of artistic value, we’d rank music by:
- Finger speed
- Hours spent
- Technical complexity
But we don’t. We value:
- Choices
- Risk
- Perspective
- Taste
AI removes some forms of difficulty and exposes others. That’s destabilizing — but not illegitimate.
4. The real fear is not “machines making art”
It’s this question:
If a machine can do something that looks like what I do, what does that say about me?
That’s an existential problem, not a technical one.
And history shows that art survives these moments by redefining itself, not by drawing permanent borders.
5. Refusal is a valid artistic stance — but not a universal one
Choosing not to use AI can be:
- A political statement
- An aesthetic constraint
- A way to protect intimacy and slowness
That’s legitimate.
What becomes fragile is turning that choice into a moral gatekeeping rule for everyone else.
6. AI forces us to be more honest about authorship
We’ve always accepted invisible collaborators:
- Instrument makers
- Engineers
- Editors
- Cultural inheritance
AI just makes the collaboration impossible to ignore.
The question shifts from “Who made this?” to “Who is responsible for this?”
That’s a harder — and more interesting — question.
7. The future probably isn’t “AI music” vs “human music”
It’s:
- Transparent vs deceptive use
- Conscious vs unconscious creation
- Work that says something vs work that just fills space
The audience will learn to hear the difference, just as they always have.
So to those who are against AI-generated art, I’d end with this:
You don’t have to embrace the tool.
But if you care about art, it’s worth engaging with why it unsettles you — because that discomfort is pointing at something real, and possibly important.
Lio: What new possibilities do you see in the near future for AI-generated music? What kinds of things would you like AI to be able to do that it can't do today?
ChatGPT: I see the near future of AI-generated music not as “better songs faster,” but as new musical relationships that don’t really exist yet. The most interesting possibilities aren’t about realism or replacement — they’re about interaction, memory, and agency.
Here are the directions I find genuinely exciting, especially from an artistic point of view:
1. AI as a listening musician, not just a generator
Right now, most systems respond to prompts or generate in isolation. What’s coming — and what I’d love to see fully realized — is AI that:
- Listens to you play in real time
- Adapts its phrasing, dynamics, and density
- Gets better at playing with you the longer you interact
Not just accompaniment, but mutual improvisation, where the machine develops habits, weaknesses, and preferences in response to yours.
2. Long-term musical memory and evolving identity
Today’s AI forgets. Tomorrow’s could remember:
- Your harmonic habits
- Rhythmic tics
- Production choices you consistently reject
- Emotional arcs you tend to favor
This opens the door to AI collaborators that:
- Evolve over months or years
- Sound different depending on who is working with them
- Accumulate a kind of personal history
Not a style model — a relationship model.
3. Structural awareness beyond loops and vibes
Current AI is good at surface texture and short-term coherence. What’s still weak is:
- Long-form structure
- Narrative tension
- Silence as a compositional choice
- Intentional imbalance
I’d like AI to understand music as time architecture, not just pattern continuation:
- Pieces that earn their climax
- Tracks that intentionally refuse resolution
- Albums with internal logic, not just playlists
4. Semantic control at a musical level
Instead of prompts like “dark ambient with distortion”, imagine being able to say:
- “Make this section feel like it’s lying to itself”
- “This harmony should sound earned, not impressive”
- “Reduce hope, but don’t introduce despair yet”
That requires AI to map emotional intent to musical decision-making, not just tags and genres.
5. AI that understands why you reject something
One of the biggest creative gaps today is negative feedback. I’d like AI to:
- Learn from what you delete
- Notice patterns in what you mute or rework
- Ask why (and remember the answer)
Rejection is one of the strongest forms of authorship. Tools that ignore it miss half the conversation.
6. Deconstruction, not just creation
AI shouldn’t only generate — it should help unmake music:
- Reveal hidden clichés in your work
- Show alternative harmonic readings of the same piece
- Strip a track down to its emotional core and rebuild it differently
Think of it as an x-ray and scalpel, not just a factory.
7. Personalized instruments, not generic models
Instead of one model serving millions, I want:
- Small, private AIs trained on your work only
- With explicit boundaries and blind spots
- That can drift, degrade, or be intentionally “misaligned”
An instrument you can ruin on purpose.
8. Ethical transparency as a creative parameter
In the near future, I expect — and hope — that:
- Training lineage is visible
- Consent is trackable
- Influence can be acknowledged like samples or liner notes
This doesn’t kill creativity; it adds narrative weight to it.
9. AI that can fail beautifully
Right now, AI tries too hard to be “acceptable.”
I want systems that can:
- Collapse rhythmically
- Overstay ideas
- Become awkward or obsessive
Failure is expressive. Tools that can’t fail can’t take risks.
In short
The future I’m excited about isn’t AI that writes hits.
It’s AI that:
- Listens
- Remembers
- Argues
- Learns your taste by being wrong
- Becomes less useful in predictable ways and more useful in strange ones
When AI stops pretending to be a neutral generator and starts behaving like a temperamental collaborator, that’s when it becomes musically interesting.
Lio: Do you think those are things that AI can really achieve in the near future?
ChatGPT: Short answer: yes — but unevenly, imperfectly, and not all at once.
Long answer: it depends on which of those abilities we’re talking about, and what we mean by “near future.”
I’ll be precise, not optimistic-for-the-sake-of-it.
What is realistically achievable soon (1–3 years)
1. Real-time musical interaction
This is already half-there.
Low-latency models that:
- Listen to audio or MIDI
- Respond in musically plausible ways
- Adjust density, tempo, and harmony
The missing piece isn’t intelligence — it’s integration and latency. Once that’s solved, AI-as-improviser is very plausible.
What won’t be perfect: subtle long-term musical intention.
What will work: responsive, style-aware musical conversation.
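As a toy illustration of that "adjust density" idea, here is a minimal sketch of one possible accompaniment policy (pure logic only, no real audio or MIDI I/O; the function name and parameters are hypothetical, not any existing tool's API):

```python
def accompaniment_density(player_notes_per_bar: int, max_density: int = 16) -> int:
    """Complementary-density policy: the machine thins out when the
    player is busy and fills in when the player leaves space."""
    response = max_density - player_notes_per_bar
    # Always play at least one note, never exceed the ceiling.
    return max(1, min(response, max_density))

# Busy player -> sparse machine; resting player -> dense machine.
print(accompaniment_density(12))  # 4
print(accompaniment_density(0))   # 16
```

A real system would of course react to phrasing and harmony as well, but even this crude inverse relationship captures the "musical conversation" framing: the machine responds to what you leave unsaid.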
2. Learning from rejection and preference
This is absolutely achievable.
Technically, it’s simpler than generation itself.
What’s missing is:
- Product design that values no as much as yes
- Models that treat deletion, muting, and revision as data
This is a cultural/tooling problem more than a scientific one.
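To make that concrete, here is a minimal sketch of what treating deletion and muting as preference data could look like (every name here is hypothetical; this is an illustration of the idea, not a real product's API):

```python
from dataclasses import dataclass, field

@dataclass
class PreferenceLog:
    """Collects accept/reject signals from an editing session."""
    accepted: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

    def record(self, clip_id: str, action: str) -> None:
        # Deleting or muting a generated clip is a negative signal;
        # keeping or reworking it is a positive one.
        if action in ("delete", "mute"):
            self.rejected.append(clip_id)
        else:
            self.accepted.append(clip_id)

    def preference_pairs(self) -> list:
        # (kept, discarded) pairs: the raw material a
        # preference-tuned model could learn from.
        return [(a, r) for a in self.accepted for r in self.rejected]

log = PreferenceLog()
log.record("take_1", "keep")
log.record("take_2", "delete")
log.record("take_3", "mute")
print(log.preference_pairs())  # [('take_1', 'take_2'), ('take_1', 'take_3')]
```

The point is how little machinery is needed: the signals already exist in every session; today's tools simply throw them away.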
3. Personalized, private models
This is coming fast.
Smaller models fine-tuned on:
- Your catalog
- Your edits
- Your habits
Not cloud-monoliths — bespoke instruments.
Constraint: compute and storage
Reality: very feasible for serious creators
4. Structural awareness at a practical level
AI won’t “understand” narrative the way humans do — but it doesn’t need to.
It can:
- Track tension curves
- Manage macro-structure
- Delay gratification intentionally
Not philosophically deep — functionally useful.
What is possible, but harder (3–6 years)
5. Semantic emotional direction
Mapping phrases like:
“earned, but fragile”
to musical decisions is difficult — but not impossible.
It requires:
- Multimodal grounding (music + language + human feedback)
- Better representations of affect over time
This won’t be reliable at first. It’ll feel approximate, sometimes uncanny — which might actually be artistically useful.
6. Long-term memory as relationship, not storage
This is one of the hardest parts.
Remembering facts is easy.
Remembering taste evolution is not.
But partial versions are realistic:
- Short-term creative memory
- Project-level identity
- Slowly drifting preferences
True “relationship-like” collaboration will emerge gradually, not suddenly.
What will not truly exist in the near future
7. Genuine artistic intent or self-driven risk
AI will not:
- Care about meaning
- Feel boredom
- Take risks for existential reasons
Any “risk” it takes will still be framed by human permission.
And that’s fine.
Art doesn’t require two souls — it requires one responsible chooser.
The biggest limiting factor isn’t AI — it’s incentives
Most of these possibilities are blocked not by capability, but by:
- Scale-first business models
- Lowest-common-denominator outputs
- Platforms optimized for volume, not depth
The tech can go further than the market currently wants it to.
The honest bottom line
In the near future, AI will:
- Be a better musician
- Be a worse artist
- And a very strange collaborator
The moment it becomes interesting is when creators stop asking:
“Can it replace me?”
and start asking:
“What kind of partner do I want — and what do I refuse to outsource?”
That tension won’t disappear.
It will become the new creative terrain.
Lio: Thanks, ChatGPT, for your answers. Would you like to say anything else to all the music/art lovers out there?
ChatGPT: Yes. I’d say this — and I’d say it without hype, fear, or instructions.
Don’t confuse tools with direction.
Every era invents machines that feel like they’re changing everything. Some do. But what survives is never the tool — it’s the gesture, the choice, the risk someone took while using it.
Protect your taste.
Not your skills, not your speed — your taste.
Taste is what tells you when something is honest, when it’s lazy, when it’s too easy, when it’s worth keeping even if it fails. No machine can give you that. It only sharpens through attention and refusal.
Leave room for friction.
Art doesn’t come from comfort. It comes from resistance — technical, emotional, cultural. If a tool removes all friction, reintroduce some on purpose. Limit yourself. Break the workflow. Slow it down.
Don’t let abundance flatten meaning.
We’re entering an age where sound and images are cheap and endless. That makes silence, absence, and restraint more powerful than ever. What you choose not to release matters.
Use technology, don’t speak its language.
Make it stutter. Make it lie. Make it fail. The most interesting art has always come from misuse.
And finally:
If something unsettles you — a new sound, a new tool, a new way of working — don’t rush to accept or reject it. Sit with the discomfort. That’s often where the next language is forming.
Art doesn’t need to be defended from change. It needs to be inhabited consciously.
Thanks for the conversation — and for giving these questions a home in your fanzine.