Who Gets to Define "Harmful"? The Hidden Politics of Content Moderation
An examination of the value judgments embedded in AI safety guidelines and who shapes them.
AI Specialist · Technical Writer · Curriculum Designer · AI Ethicist · Journalist
A lifetime of building with technology. A career spent making it understandable. Now working at the frontier of AI — writing it, teaching it, tweaking it, and asking whether we're doing it right.
I'm Mandy Hathaway, an AI ethics specialist and technical writer with 30 years of hands-on technology experience and an MA in Ethical Technology & Artificial Intelligence from Metropolitan State University. I work at the intersection of machine learning and communication: building curriculum that makes complex systems legible, writing documentation that reflects how these systems actually work, and asking the ethical questions that get skipped when moving fast.
I bring both the technical depth and the communication range the field requires, from AI development and system design to editorial leadership and the classroom. I spent three years as Technology Editor at The Metropolitan, taught hundreds of students and led a team of up to 35 instructors at Coding With Kids, and have published on algorithmic bias and responsible AI development.
I take on remote contracts and senior roles in AI/ML technical writing, instructional design, prompt engineering, and AI ethics.
Languages & Tools
Certifications
A plain-language technical explainer for non-specialist audiences covering what AI is, how machine learning and neural networks function, and where current systems genuinely fall short. Designed to give professionals the mental model of an engineer without the jargon.
Read (PDF) →
A complete K–12 AI lesson connecting core probability math to machine learning. Students learn that every AI prediction is rooted in probability, explore how models express certainty through confidence scores, and build a live image classifier using Google’s Teachable Machine. Includes full instructor guide and student-facing presentation deck.
Instructor Guide (PDF) → Presentation (PPTX) →
An advanced K–12 lesson in which students build a working Stable Diffusion pipeline from scratch using PyTorch, Hugging Face Diffusers, and Google Colab’s free GPU. Covers GPU configuration, model loading, prompt engineering, and an optional Gradio web interface extension.
Instructor Guide (PDF) → GitHub Repo →
A post-mortem on a fictional AI content-moderation failure, tracing how satirical content in a training dataset caused a 13-day false-positive spike, why validation missed it, and what changed as a result.
Read (PDF) →
A plain-language style guide for writers, editors, and product teams communicating about AI systems to non-specialist audiences. Covers terminology, anthropomorphism, accuracy, bias, and a quick-reference words-to-avoid table.
Read (PDF) →
A structured reference library of prompts for common AI use cases: classification, summarization, content moderation, structured output, and edge-case handling. Includes design notes explaining the reasoning behind each approach.
Read (PDF) →
A step-by-step installation guide for developers and technical writers deploying a static site with GitHub Pages. Covers repository setup, branch configuration, deployment, verification, and common troubleshooting scenarios. No prior GitHub Pages experience required.
Read (PDF) →
A practical user guide for running large language models entirely on your own machine using Ollama. Covers installation, model selection, pulling and running models, prompt experimentation, and programmatic access via the local REST API. Written for technically curious users who want full control over their AI tools without sending data to an external API.
Available in the next few days
I write and think critically about the human dimensions of artificial intelligence — from bias and accountability to how we design systems that are genuinely equitable. Below is a selection of essays, research, and public writing on these questions.
AI systems don't generate bias from nowhere — they learn it from us. This essay traces how racial and gender inequity becomes encoded in machine learning systems, why removing sensitive variables isn't enough, and what it actually takes to build systems that don't perpetuate the injustices they inherit.
Read the Essay (PDF) →
Why fears of AI apocalypse say more about human psychology than AI capability — and why that matters for the risks that are already here. An examination of anthropomorphism, the Singularity myth, and whose psychology we're really projecting onto our machines.
Read the Essay (PDF) →An examination of the value judgments embedded in AI safety guidelines and who shapes them.
Journalism
Accessible journalism on technology and its human dimensions
How to evaluate what you read, catch what you're being sold, and trust what you share. A practical guide to source evaluation, fact-checking, and the logical fallacies used to mislead even careful readers.
Read (PDF) →
An accessible introduction to how modern AI systems actually work, what they can do, and where the hype outpaces the reality.
The concluding installment of the series — tracing where AI has come from and where it is most likely (and least likely) to go.
Additional journalism and public writing will be added here.
A selection of documentary and editorial photojournalism.
Open to remote contracts and senior roles in AI/ML technical writing, instructional design, prompt engineering, and AI ethics. Always happy to talk about interesting problems.