
I did not treat research as a formality or a slide for investors. New features were shaped through direct conversations with users before release, using feedback to pressure-test clarity, value, and trust. That kept the product grounded in real behavior instead of assumptions, and it reduced the risk of building features people did not need or understand. The name of the platform itself was the result of user feedback, chosen so it would not feel intimidating to everyday people who are not a part of the AI craze.
For Storytailor, child safety had to be built into the system from the start. I helped shape an interaction model that prioritized safe outputs, clear boundaries, and age-appropriate experiences, rather than trying to bolt protection on later. That included guiding how the AI behaved, how the experience was structured, and how trust showed up in the product itself.
Representation was not a campaign or a surface-level brand choice. It was a core product decision. Storytailor was built so children could genuinely see themselves in stories across identity, culture, ability, family structure, and imagination. That approach shaped product logic, content design, character systems, and business strategy because for Storytailor, diversity is not optional. It is fundamental to relevance, trust, and long-term viability.
A broad idea is easy to pitch and hard to use. I turned Storytailor into a product with clear entry points, understandable user flows, and an experience that reduced friction for parents, caregivers, and institutional users. The goal was to make the value obvious quickly and make the system easy to navigate from first use.
Early-stage products often get crushed by ambition. I kept the product narrow enough to ship while preserving the larger vision. That meant making hard decisions about what needed to be solved now, what could wait, and how to build a foundation that could grow without confusing users.