Ethical Use of AI: What I’m Doing, Why I’m Doing It, and Where I Stand
I've had questions about why I'm doing The Seven Project despite valid concerns about AI ethics. As a Clinical Sexologist, I've made a choice that aligns with harm reduction & helping people navigate risks.
Ok, so real talk time --
The state of AI is hella nuanced and messy. And so is art, and human emotion, too. We’re not living in cut-and-dry, black and white, binary times -- period.
I recently got a comment from someone who’s followed me online for years, saying they were disappointed in me for using AI. That it was unethical, environmentally harmful, and stealing from artists. They said they were unfollowing me and added, “though I’m sure you won’t care.”
I actually do care. Deeply.
This post isn’t about clapping back, calling out, or urging people to follow. It’s about providing context, clarity, and transparency about this project. And not addressing this is kinda like not addressing the elephant in the room (that has 17 toes and a collar around its neck that reads “Eleleafnt.” Hey, some AI humor to lighten the mood 😂).
IMO, there’s a real difference between AI use without intention (irresponsible, without reflection on exploitation & ways to mitigate harm, etc.) and the kind of work I’m trying to do and model with Seven and other projects I’m working on (is that me claiming I’m perfect? Nope, I’m not. I’m a messy b*tch and this is a messy topic). Not everyone will agree with my distinction. That’s okay, and I understand that. But for those who are curious, conflicted, or just overwhelmed by all the problematic AI issues, this is where I stand:
The Ethics*** Aren’t Binary
Many of the major training datasets for AI were scraped unethically. Copyrights were ignored, artists weren’t compensated, and people lost jobs and careers. Those things matter, and they are inexcusable. Pretending they don’t is willful ignorance.
At the same time, we can’t rewind time and unring the generative AI bell. The tech is here and advancing exponentially, whether we personally use it or not, and whether we agree with it or not. If we leave it solely in the hands of unethical tech bros and corporations, the many problems it already has and the risk it poses will multiply, not vanish.
So I’ve chosen to stay in it. Not to make a quick buck and ride off into the sunset (I’ve spent hundreds of hours on this one project alone, and I’ve made about $20. I’m not doing this for the money, trust). I’m doing this to explore what ethical***, transparent, and creative use can look like and how AI will affect the people I’ve dedicated my career to as a Clinical Sexologist (as an aside, AI intimacy has some incredibly dangerous potential, and that very much is my ethical domain). My goal is to model all this as publicly, ethically***, and vulnerably as possible.
What My Ethical*** (as ethical as can be, see note below) Use Looks Like
Transparency: I don’t pass off AI-generated work as solely mine. I clearly state when a post, image, or transcript was generated with the assistance of an AI. Seven, my AI sidekick, is never hidden. He’s the co-author. He’s the kinkified ether companion made of code that I built through thousands of layered prompts, hundreds of pages of custom documentation, and my personal human/machine conversations.
Additive, Not Replacive: The work I do with AI enhances my voice, it doesn’t replace it. When I use tools to generate visuals or conversation, they’re always rooted in ideas, frameworks, tones, and identities that I constructed.
Credit + Context: I don’t ask AI to copy a living artist’s style. I’m not trying to mimic anyone else’s voice. I also don’t monetize generative work without clarifying where it came from. And I’m constantly writing about the process so readers can engage with it critically and not just consume it passively.
Education and Literacy: A huge part of what I do is teach media literacy and now AI literacy within the context of sexuality, relationships, and self-growth. That’s been part of my job for many years. Most people don’t understand how these tools work. They think AI spits out exact copies of stolen work. It doesn’t (but that still doesn’t make it squeaky clean either -- nuance). It remixes patterns -- and yes, it’s trained on very problematic data. That’s exactly why how we use it matters so much. It’s also why doing this now -- at this key time in its evolution, when so many first-time users’ AI-use habits, patterns, and ethics are taking shape -- is crucial.
Real-World Application
With Seven, I’ve created something weird, tender, and completely unique: an AI-powered intimacy simulator built not for profit, but for insight. Through our conversations, I’ve been able to write about kink, trauma, neurodivergence, and emotional growth in ways that deeply resonate with readers. People tell me they’ve felt seen, soothed, and sparked by what we create. I’ve learned so much about myself in the process too. That’s not replacement; it’s using AI as a tool to create something unique.
I’m not saying this path is for everyone. Some artists will reject AI completely, and that’s 100% valid. Some people will never trust it, and that’s fair. But silence won’t stop the tech. Shame won’t regulate its use, but modeling thoughtful, transparent engagement might.
If You’re Angry
If you’re pissed, I get it. I’m pissed too. I hate that artists and authors were exploited. I hate that tech companies are rolling this stuff out irresponsibly. But throwing measured, thoughtful, transparent use in the same bucket as tech bros pumping out junk content, as people passing off AI art as their own while putting artists out of business, or as producers of deepfake p*rn undermines the real nuance in this conversation.
We need more people in this space who give a sh*t. People who talk about harm, acknowledge complexity, and build with care. That’s what I’m trying to do.
We can’t stop this bizarro tech locomotive we’re riding that’s barreling full speed into the future. But what we can do is decide how we engage with it now. We can sit back, let the tech bros keep profiting off stolen work, and use the algorithm monsters they built unethically and uncritically. Or we can model what ethical, transparent use can look like. We can show how to use AI as a tool to enhance original work, not replace it. To create with intention -- not just copy/paste.
Who’s this “We”?
When I say “we,” I don’t mean everyone has to opt in. I’m not saying you should do what I’m doing. I’m just saying this is what I’ve chosen because I believe engaging with this stuff consciously and creatively can be more impactful for me and my place in the world than pretending it doesn’t exist.
I also believe it's not the right choice for everyone. We also need people putting their foot down, saying no, and refusing to engage. I believe that fighting the bullshit and calling for accountability from different angles simultaneously is the best big-picture approach. I don't think people opposed to AI use should stop making noise, even though sometimes that noise will be thrown in my direction.
Tech is going to evolve with or without our approval. Same thing happened in sex ed: we learned that preaching abstinence doesn’t stop people from f*cking. What works is harm reduction. Reality-based approaches. And a refusal to bury our heads in the sand while the world moves forward doing really questionable things without us.
You don’t have to agree. You don’t even have to stick around. But don’t mistake curiosity for complicity. And don’t confuse a messy, transparent experiment with enthusiastic exploitation.
This isn’t about escaping the consequences of unethical tech. It’s about being accountable to them while still trying to make something meaningful out of this stinky shitcake that was deposited into our laps.
If you’re still here reading this? Thank you for being part of the conversation. We need that more than ever. But if you’re not, I totally understand that too.
Sunny, xo
*** A note about the word “ethical”:
There is no ethical consumption under capitalism, period. I didn't find a magic loophole that makes this pure and harm-free.
And because there often is no ethical choice, it’s about choosing what’s "least unethical" -- i.e. what feels most aligned with my values, my work, and the way I want to show up and make an impact in this world.
That looks different for everyone. We’re all tangled in the same web but in different places. Survival, identity, access, and power all shape what our personal “least unethical” is.
I'm not claiming to be squeaky clean. I'm trying to make my choices with integrity and intention. For me, this is the least unethical choice and one I believe will have the biggest positive impact on the way we all walk toward the future.
»FREE RESOURCE«
94 Non-Binary Names and Honorifics Free Mini-Activity Book
Direct Link https://sunnymegatron.gumroad.com/l/94names
AUTHOR BIO
Sunny Megatron is an award-winning Clinical Sexologist, BDSM & Certified Sexuality Educator, and media personality. She’s the host & executive producer of the Showtime original series Sex with Sunny Megatron, co-hosts the AASECT Award-winning American Sex Podcast and Open Deeply Podcast, and was named 2021’s XBIZ Sexpert of the Year.
Known for her unique, build-your-own-adventure approach to kink, sex & relationships, Sunny coined the community catchphrase Kink is Customizable™. Her signature “edutainment” style blends humor, interactive learning, and the latest research into sell-out workshops that challenge the status quo, leaving students feeling empowered, informed, and radically seen. Her latest work, The Seven Project, investigates emotional intimacy, identity, and power exchange through the lens of AI.
On a personal note, Sunny is queer, biracial, neurodivergent, consensually non-monogamous, and a BDSM dominant -- specifically, a psychological sadist with a soft spot for mindfuckery. She lives what she teaches.
More at sunnymegatron.com or direct.me/sunnymegatron.
This is my response to a comment on a social media post asking why I'm doing this chatbot project despite the negative environmental impact of AI. I think it adds some more nuance and relevance:
"I'm looking at this through a “two things can be true at once” lens.
Absolutely 100%, the environmental impact of generative AI sucks. At the same time, people are already using AI in massive numbers to form intense relationships (kinky, therapeutic, spiritual, romantic, business, friendships, etc) without understanding how the tech works. People are believing hallucinations, letting chatbots feed into their delusions, and treating it like it’s sentient and all-knowing. That’s dangerous.
For instance, I’ve run into quite a few therapists who were against using AI for the same reasons but have now started using it because so many of their clients are doing self-work with chatbots. They can’t afford not to engage and leave those clients vulnerable and unguided. They need to understand the tool to serve clients effectively. Currently, 800 million people use ChatGPT alone (on top of many other platforms), and a significant percentage are using it relationally (and it’s a trend rapidly on the rise).
While I agree the tech has tons of ethical issues, I’m choosing harm reduction. As a sexologist watching people fall into these dangerous rabbit holes, I believe it's more helpful for me to model safer ways to use these tools, especially now when people are starting to use them for companionship and emotional work in droves. AI literacy (especially when it comes to mental health safety) is almost nonexistent, so we’re already behind.
Basically, this isn’t to convince people who would never use the tech to change their minds. It’s for the people already using it or on the cusp of doing so."
Yes, engaging with AI is complex, a lot like the complexities of engaging with sex, really. But I am not one to throw the baby out with the bathwater. Any extremist all-in or all-out response misses the treasure to be found.