Left Without Answers: Amy’s Experience of Paywalled Content Theft
Case Study Summary
Amy’s* experience of image abuse did not involve a partner or someone she knew personally. Instead, it began with a message from a friend and the sudden realisation that private content shared behind a paywall had escaped into the open internet.
What followed was not just the loss of control over her images, but a prolonged struggle to understand how it happened, who was responsible, and what, if anything, she could do next.
*Names and identifying details have been changed to protect anonymity.

Content Shared with Clear Boundaries
Amy shared intimate images on a subscription website, where access was limited to paying subscribers. She was explicit about her boundaries: her content was not to be screenshotted, shared, or redistributed in any form.
She never consented to her images being available for free or hosted on external websites. Like many creators, Amy relied on the expectation that content behind a paywall would remain there. That expectation was broken without warning.
The Moment of Discovery
Amy first became aware of the leak when a friend, also a sex worker, reached out to warn her that her images were appearing across multiple websites. What she found was overwhelming: large volumes of her private content circulating freely, often under slightly altered site names.
What made the discovery especially frightening was its scale. Amy didn’t know which site had first posted the images, how they had been taken, or how widely they had already spread. As she later described, the moment of finding out was marked by panic and confusion, a sudden need to act without knowing where to turn.

Exposure Beyond Consent
The harm wasn’t limited to the images themselves. Amy discovered that the leaked content had been linked not only to her subscription site username, but also to her separate model name and her full legal name.
She described this as “terrifying”, a permanent connection between her sex work and her legal identity that she had never agreed to and could not undo. This loss of control over how and where her identity appeared online compounded the distress and made the leak feel impossible to contain.
Trying to Get Help and Hitting Walls
Amy sought support where she could. Within the sex work community, she found understanding and shared experience. But formal routes offered little help. When she contacted a hotline for image abuse, she was told her case “did not qualify as revenge porn” because she had initially shared the images on the subscription site. The only option offered was copyright enforcement.
Amy subscribed to a takedown service that issued DMCA requests on her behalf. Some websites complied. Many did not. In several cases, the site owners were untraceable. Over time, Amy cancelled the service. The cost, combined with the realisation that the content could continue to reappear indefinitely, made the process feel futile.

Living Without Information
What Amy wanted most were answers. She wanted to know how the images were taken, which site posted them first, and who was responsible. As she explained: “I would have absolutely liked to know how it came to be. I don’t know where it first started or which website was the first one to have it.” Without that information, there was no clear path to justice, accountability, or even closure.
The Emotional Impact
Beyond the practical challenges, the experience took a significant emotional toll. Amy described feeling violated, exposed, and blamed, particularly by people outside her immediate support network. She also described the moment of discovery as the point where support was most needed: “It’s that immediate panic of ‘what do I do?’ … feeling very lost and grasping at straws.” The lack of clear guidance or tools at that moment intensified the distress.

Where Image Angel Could Have Made a Difference
Amy was clear that no technology could undo what had already happened. But she also identified a critical gap: accountability.
If Image Angel had been integrated into the platform where her content was hosted, each time an image was viewed, it could have carried a unique, invisible identifier. That would have allowed leaked content to be traced back to a specific access session, IP address or device.
Amy described this kind of traceability as “a game changer”, both for identifying who was responsible and for deterring misuse in the first place. She also explained that being able to identify the person who leaked her content would have given her a sense of power back: “Having that power back in my hands where it was taken out of my hands would feel… really important.”
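The traceability described above can be illustrated with a toy example. Image Angel’s actual technique is not described in this document, so the sketch below is purely hypothetical: it hides a per-session identifier in the least significant bits of pixel values (a simple form of steganographic watermarking), so that a leaked copy could, in principle, be matched back to the access session that viewed it. The function names and the flat 8-bit grayscale pixel representation are assumptions for illustration only.

```python
def embed_session_id(pixels, session_id: bytes):
    """Hide session_id (prefixed with a 2-byte length) in pixel LSBs.

    `pixels` is a flat sequence of 8-bit grayscale values; a new list
    is returned and the original is left untouched.
    """
    payload = len(session_id).to_bytes(2, "big") + session_id
    # Flatten the payload into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        # Clear the lowest bit of the pixel, then set it to the payload bit.
        out[i] = (out[i] & ~1) | bit
    return out


def extract_session_id(pixels) -> bytes:
    """Recover the identifier embedded by embed_session_id."""
    bits = [p & 1 for p in pixels]

    def read_bytes(start, n):
        value = bytearray()
        for b in range(n):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | bits[start + b * 8 + i]
            value.append(byte)
        return bytes(value)

    length = int.from_bytes(read_bytes(0, 2), "big")
    return read_bytes(16, length)


# Hypothetical usage: mark one viewing session's copy of an image.
pixels = [120] * 4096            # stand-in for real image data
marked = embed_session_id(pixels, b"session-8f3a")
extract_session_id(marked)       # → b'session-8f3a'
```

A production system would need a watermark robust to screenshots, cropping, and re-encoding, which naive LSB embedding is not; the sketch only conveys the core idea of tying each served copy to a unique, invisible identifier.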
Trust, Deterrence, and Platform Choice
For Amy, Image Angel wasn’t just about tracing harm after it occurred. It was also about trust. She explained that a platform adopting Image Angel would signal that it takes creator safety seriously and that this would directly influence where she chose to host her work: “I would be 1,000% more likely to use a platform as a creator if it had Image Angel integrated.” That commitment to prevention and accountability matters before harm occurs, not just after.
Closing
Amy’s story is one of many. It shows how quickly control can be lost, and how difficult it is to regain without evidence or support.
For Image Angel, listening to experiences like Amy’s helps shape a system focused not on undoing harm, but on preventing it, responding earlier, and ensuring those who misuse intimate images can no longer act without consequence.

Disclaimer
The examples, quotations, and case studies referenced in this document are drawn from voluntary potential user interviews conducted for product development, safety design, and contextual understanding.
These interviews are not academic research, nor are they designed or presented as formal studies, surveys, or statistical analysis. They do not aim to produce representative samples, generalisable findings, or population-level conclusions. The purpose of these interviews is to understand how image-based abuse and content misuse are experienced in practice, in order to inform the design, deployment, and evaluation of protective technology and operational safeguards.
Individual accounts are presented as contextual illustrations of real-world use cases, not as evidence of prevalence, causality, or typical outcomes across all users or platforms. Participation was voluntary and based on informed consent. Identifying details have been removed or altered where necessary to protect privacy, safety, and personal autonomy.