The Media Line: Musicians Mull AI, but ‘Can AI Feel a Broken Heart?’ 

From cloned legends and “stolen” voices to new artistic freedoms, AI’s power to both violate and empower musicians takes center stage

By Gabriel Colodro / The Media Line

At one of the main sessions during Israel’s AI Week, composers, producers, researchers, and lawyers grappled with the uncertainty surrounding artificial intelligence and the creative world of music. What emerged was a blend of frankness and expertise, offering both reassurance and warning, and a clearer sense of the choices ahead for musicians adapting to a landscape transformed by algorithms that can now sing, write, and even emote.

Nearly every discussion about artificial intelligence and music begins with the same uneasy question: If machines can compose, imitate, and perform, what exactly is left for humans to recognize as their own? That question, asked in recording studios, classrooms, and legal forums, usually results in more speculation than clarity. The AI Week conference session provided an opportunity to move toward some illumination of the issues involved.

Musicians spoke about hearing their own voices reproduced without ever stepping into a booth. A neuroscientist described what the brain loses when it listens to melodies generated by code rather than shaped by a human hand. Legal scholars debated whether training a model on copyrighted material constitutes theft or transformation. Veteran creators pushed back against the cultural instinct to treat AI as an alien intruder, insisting instead that it behaves like any human-made tool: capable of both harm and innovation, depending on who uses it.

While addressing the audience, Grammy-winning music producer Jack Joseph Puig, known for his work with U2, Sheryl Crow, Fiona Apple, and countless others, rejected the idea that AI is some outside force menacing human creativity. He framed it instead as a long-term bet, saying “AI is a promise” and arguing that expectations have raced “unfairly” ahead of what the technology can actually deliver.

To Puig, the real problem is speed rather than essence. He warned that society is embracing AI before it understands what it is dealing with, urging the industry to slow down: “Stop trying to push it so fast. Let it grow up. … It needs some car crashes, some big lawsuits,” he said.

Then he made a blunt point about ownership that cut through much of the cultural anxiety: “We’re the father and mother of it. We are. NVIDIA didn’t make it. We made it. Humans.”

Speaking later with The Media Line, Puig said the tension between perfection and authenticity long predates AI. He recalled working in the studio with Eric Clapton and George Harrison, when a minor correction he applied prompted Harrison to question whether it improved the track or stripped away its identity. “If I ‘fix’ it, did I fix it … or did I f*** it up?” Puig asked. The question, he argued, feels even more urgent in the age of machine-assisted production.

Wendy Starland, the American singer and producer who discovered Lady Gaga, turned the conversation toward promotion and audience-building. She told The Media Line that she sees AI as neither a cure-all nor an existential threat, but “a tool that creates a solution.”

At MusicSoul, her AI-driven platform, she leans on data analytics to understand fans and match them with artists and brands. “Without AI, I wouldn’t have such granular data. … I know so much about you,” she said, describing how that knowledge helps align campaigns with listeners’ actual behavior rather than guesswork.

On the question of whether AI-generated music will be embraced or rejected, Starland expects a mixed response. She predicted “it’s going to be both,” likening the technology to a guitar that can sound transcendent or terrible depending on who plays it. In her ideal scenario, AI becomes “a partner to creators … to create something new and innovative,” not a replacement for them.

Legal and policy questions were laid out by Dr. Eyal Brook, head of the AI group at law firm S. Horowitz and one of Israel’s leading experts on the intersection of AI, copyright, and creative practice. He called this a moment of profound transition in creative industries, saying AI “opens doors to creativity but also presents significant challenges,” and urged society to “continue to examine, to question and to investigate its impacts … to ensure that it serves to enhance rather than diminish the creative expression.”

For musician and entrepreneur Yaki Gani, a longtime member of the Israeli band Rockfour, the technological shift has taken on an almost supernatural flavor. Working with AI models of David Bowie, Freddie Mercury, Kurt Cobain, and Amy Winehouse, he said, “I’m playing with dead people,” describing sessions in which legendary voices reappear through generative models.

Gani grew up on those artists and now finds himself interacting with their synthetic echoes. “I grew up on their voices and their music,” he said, “and I find myself actually playing with them,” referring to their AI-generated interpretations.

When he turns to music generator platforms such as Suno and Udio, Gani claims they can feel almost animate. “The music algorithms now have a soul. If you know how to work and how to prompt things … music becomes really emotional,” he said.

Generative video models, he added, have also collapsed the gap between impulse and execution. “Every simple thought or imagination we have can become, in five to six minutes, a video on social media,” he said, marveling at the speed. Yet he resisted the idea of handing creative control over to the machine. “I don’t want AI to do the job for me,” he insisted. “I want to be the producer. … I want AI to be one of the players.”

That tension surfaced again in a project Brook presented from artist Guy Bar entitled “Shvira,” built around the question “Can AI feel a broken heart?” Bar kept all lyrics human-generated but used more than 1,500 AI generations to shape melodies, characters, expressions, and shots. Yet, Brook explained, “the real challenge wasn’t technological, it was human. How to tell a story, how to create the feeling of sadness through technology, and how, if at all, can you touch the heart using AI?”

Ethical stakes came into sharp focus during the remarks of Israeli producer Oded Davidov, head of music production at the Rimon School of Music, who framed his experience as a confession. “I want to tell you the story of how I stole a voice,” he said.

To build a model of the late emblematic singer Arik Einstein, Davidov extracted vocals from existing recordings, trained the system, and then sang into it himself. The result sounded uncannily like a new Einstein performance. He acknowledged the unease that followed, admitting, “I had no permission. … What I did was actually steal a voice.”

The same technique, though, became a lifeline for one of his students. A young man with cerebral palsy had written songs but could not sing them. Davidov and his colleagues recorded him for 45 minutes, built a model of his voice, and then layered that model onto a producer’s performance, allowing the student to release music in his own vocal identity.

With that contrast in mind, Davidov distilled the paradox. “With the same tool that allows me to steal a voice, I can truly create something that was unimaginable,” he said.

The changes AI creates inside the human brain were the focus of Dr. Neta Maimon, a music cognition specialist with advanced training in cello performance, psychology, and neuroscience. Wearing a mobile EEG device, a headset that measures real-time brain activity, she explained that music perception depends on the shifting interplay of three systems: focus, drift, and alertness. “When we listen to music,” she said, “we move through these systems … from emotion and alertness to focus back to drift and again.”

Early experiments, Maimon noted, suggest that AI-generated music triggers different patterns of engagement. “The focus network is actually working harder when listening to AI-generated music,” she said, citing studies that show increased blinking and larger pupil dilation, both indicators of greater mental load.

In her telling, the cause lies partly in the microbeats and microtones that human performers naturally produce: tiny deviations that let listeners predict what comes next and experience pleasure when their expectations are met. Without those subtle irregularities, she argued, something crucial disappears. In earlier research comparing natural performances with quantized, synthetic versions, participants perceived the relationship between figures on screen as weaker and less joyful when the soundtrack was artificial. “Something was lost,” she said.
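As an illustrative aside (not from Maimon's research): the "quantization" she refers to can be sketched in a few lines of Python. Snapping every note onset to a strict rhythmic grid is exactly what erases the millisecond-scale deviations of a human performance; the grid size and timing values below are hypothetical.

```python
def quantize(onsets, grid=0.25):
    """Snap each onset time (in seconds) to the nearest grid point."""
    return [round(t / grid) * grid for t in onsets]

# A human performance drifts a few hundredths of a second around the beat...
human = [0.00, 0.27, 0.49, 0.76, 1.02]
machine = quantize(human)  # -> [0.0, 0.25, 0.5, 0.75, 1.0]

# ...and the per-note deviation is precisely what quantization throws away.
deviation = [h - m for h, m in zip(human, machine)]
```

The point of the sketch is that the transformation is lossy in one direction only: you can always quantize a human take, but nothing in the quantized version tells you how the performer originally leaned ahead of or behind the beat.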

In a follow-up conversation, Maimon told The Media Line that generational listening habits magnify the effect. Younger audiences, she said, have grown up on heavily processed tracks. “I wasn’t born into deep Melodyne and beat-detected music,” she said, referring to a popular pitch-correcting tool. “But people who are now 20, 25, they only listen to music that is on the beat and to singers who are forced by Melodyne to sing on the pitch.” For them, older recordings can feel almost alien: “A 25-year-old girl now would find it hard to listen to Bob Dylan, because the sound is so different.”
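For readers unfamiliar with what "forced by Melodyne to sing on the pitch" means in practice: hard pitch correction amounts to snapping a sung frequency to the nearest note of the 12-tone equal-tempered scale. The sketch below is a simplified, hypothetical version of that idea (real tools like Melodyne work on continuous pitch curves and allow partial correction), using the standard A4 = 440 Hz reference.

```python
import math

A4 = 440.0  # standard reference tuning, in Hz

def snap_to_semitone(freq_hz):
    """Snap a frequency to the nearest 12-TET pitch (hard pitch correction)."""
    semitones = 12 * math.log2(freq_hz / A4)  # signed distance from A4
    return A4 * 2 ** (round(semitones) / 12)  # nearest tempered pitch

# A singer 30 cents sharp of A4 (~447.7 Hz) gets pulled back to exactly 440 Hz.
sung = 440.0 * 2 ** (0.30 / 12)
corrected = snap_to_semitone(sung)  # -> 440.0
```

Applied to every note of a take, this is the "on the pitch" sound Maimon describes younger listeners growing up on, with the expressive micro-detuning rounded away.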

She warned that this shift is already changing what listeners can tolerate. “We are losing complexity,” she said, pointing to songs that are getting shorter and harmonically simpler. “If people will continue only listening to music not generated by humans, they will lose the ability to listen to music that is played by humans.” What troubles her most, she added, is not only aesthetic loss but relational loss. “The human music listening experience is about connecting people, not just you alone choosing the best music for your mood,” she said. “It’s about listening together and playing together and dancing together. They are losing it. We are losing the social bond.”

Veteran Israeli singer and composer Rami Kleinstein also spoke with The Media Line, saying he sees AI everywhere in the studio but remains cautious about what it means for art. “There’s a lot of buzz around it,” he said, but described hearing an AI-generated version of his own voice as “not completely” right. “Demo-wise, I think it’s superb,” he added. “Production … that’s more tricky. You’ll always need the human touch.”

Asked whether rapid, AI-assisted songwriting responds more to cultural impatience than to musicians’ needs, Kleinstein took a pragmatic view. “It’s both. Is it deep? No. But we have so much mediocre stuff out there. So why not AI?” he said. Still, he drew a clear line for himself. “I don’t think I’ll use it … I enjoy doing it.”

Legal scholar Dr. Lital Helman, known for her work on AI regulation and intellectual property, urged policymakers to resist panic and avoid “overcorrecting” through law. “I’m not afraid. … The law needs to be a bit gentle,” she said, arguing that the slower pace of legal change is a safeguard rather than a weakness. Technology should be allowed to run ahead, she argued, while the law “fixes things here and there … The human race will be just fine. Art is here to stay.”

Throughout the session, musician and critic Sharon Moldavi, co-founder and CEO of the AI-powered music discovery platform Josiemusic, kept returning to the question of intent. He stressed that the real danger lies not in the tools themselves but in how people choose to wield them. Technology, he said, is neutral. “Fear humans, not technology,” he emphasized.

Brook closed the event by arguing that the moment demands neither surrender to the machine nor nostalgic rejection of it. “We must continue to examine, to question and to investigate its impacts,” he said, so that AI “enhances rather than diminishes the creative expression.”

By the end of the day, what came into view was not a death sentence for music or a blank canvas for dystopia, but something more complicated: a mirror held up to the human creative process, reflecting both its vulnerabilities and its potential. And for the first time in months, many of those wrestling with AI’s implications seemed ready to offer real answers, not about what the technology might someday become, but about what it is already asking of them now.
