Building AI Products in Public: Why We’re Writing
The best engineering happens when knowledge moves freely between teams, companies, and practitioners.
We’re the engineering team at Stan, and that belief is why we’re launching this Substack.
If you’re not familiar with Stan, we’re building an all-in-one creator platform that helps entrepreneurs turn their social following into income—link-in-bio storefronts, digital products, bookings, memberships, email marketing, the works. But we’re also building something more ambitious: Stanley, our AI coach that helps creators actually grow their audience and succeed on platforms like LinkedIn and Instagram. Not just content scheduling—real coaching on strategy, voice, performance analysis, and growth.
That means we’re dealing with full-stack AI product engineering at scale. We’re a team of 14 engineers working alongside 3 designers and a 6-person (and growing) data team, building products that need to feel simple and fast for creators while handling the complexity of personalized AI coaching, content generation, monetization infrastructure, and analytics under the hood.
We’re here to share what we’re learning as we build—the decisions that worked, the ones that didn’t, and the architectural patterns we’re evolving in real time. If you’ve ever wished you could see inside another engineering org’s decision-making process, or wanted honest retrospectives on what breaks when AI moves to production, you’re in the right place.
What kind of community are we building?
This isn’t a marketing channel or a highlight reel. We’re building a space for practitioners who value transparency over polish.
You’ll find us writing about:
Architecture decisions and the tradeoffs we’re navigating as AI tooling matures faster than best practices can keep pace: how do you build coaching that feels personalized when you’re serving thousands of creators? How do you instrument AI systems so you can actually debug them?
Incident retrospectives that show what actually breaks and how we responded (not just what we wish had happened)
Team practices and philosophy that shape how we work, collaborate cross-functionally, and make technical decisions when the “right answer” isn’t obvious
Engineering challenges we’re facing in real time, including the messy middle parts where the path forward isn’t clear yet
We’re sprinkling in teaching and enablement content when patterns emerge that feel worth documenting. Think of this as a mix of thought leadership and ground truth—we’re figuring things out alongside you.
Who is this for?
We’re writing for:
Fellow engineers building AI products or dealing with similar full-stack complexity
Data teams navigating the handoff between models and production systems
Design teams collaborating with engineering on products that involve AI/ML
CTOs and engineering leaders making architectural bets and building team culture
Our future selves who will inevitably need to remember why we made certain calls
If you care about how things actually get built—not just the polished case studies—this is for you.
What to expect (and when)
We’re aiming for a post every two weeks, with additional pieces when we hit something worth sharing in real time. This isn’t a content treadmill; we’re only publishing when we have something genuine to say.
Some posts will be long-form deep dives. Others will be shorter reflections or quick retrospectives. We’ll let the content dictate the format rather than forcing a template.
Why now?
Two reasons: building in public and recruiting visibility.
Building in public keeps us honest. When you know other engineers will read your architectural decisions, you think harder about them. When you commit to explaining your incident response, you’re more likely to actually learn from what broke.
And yes, visibility matters. We’re growing our engineering, data, and design teams, and we want to work with people who share our values around transparency, craftsmanship, and curiosity. If reading our thinking makes you want to build alongside us, we should talk.
What’s coming first
Our first few posts will cover:
How we’re thinking about observability for AI products (spoiler: traditional APM doesn’t cut it when you need to instrument LLM behavior; see the sketch after this list for a taste)
A retrospective on a recent incident that taught us something surprising about our architecture
The team practices that help us move fast without breaking trust across engineering, data, and design
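To give a flavor of that first point before the full post lands, here’s a deliberately minimal sketch of what instrumenting LLM behavior can look like. This isn’t our actual stack, and `call_model` is a hypothetical stub standing in for any LLM client; the point is that the signals worth capturing (the prompt, the model version, token usage, latency) are exactly the ones a traditional request trace never sees.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("llm_trace")


def call_model(prompt: str) -> dict:
    # Hypothetical stub standing in for any real LLM client call.
    # A real client would return generated text plus usage metadata.
    return {"text": "stubbed response", "prompt_tokens": 12, "completion_tokens": 5}


def traced_llm_call(prompt: str, model: str) -> str:
    """Wrap an LLM call with the metadata generic APM tends to miss:
    the prompt itself, the model version, token counts, and latency."""
    trace_id = str(uuid.uuid4())
    start = time.perf_counter()
    response = call_model(prompt)
    latency_ms = (time.perf_counter() - start) * 1000

    # Emit one structured record per call so it can be searched and joined later.
    log.info(json.dumps({
        "trace_id": trace_id,
        "model": model,
        "prompt": prompt,  # or a hash, if prompts are sensitive
        "prompt_tokens": response["prompt_tokens"],
        "completion_tokens": response["completion_tokens"],
        "latency_ms": round(latency_ms, 1),
    }))
    return response["text"]


if __name__ == "__main__":
    traced_llm_call(
        "Suggest three LinkedIn post hooks for a fitness coach.",
        model="example-model",
    )
```

In a real system you’d ship these records to a trace store and join them against user-facing outcomes, but even this much makes LLM calls debuggable in a way generic request spans aren’t. More on this in the full post.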
We’re excited to build this with you. Hit subscribe if you want to follow along, and feel free to reply to any post—we’re here for the conversation, not the broadcast.
Let’s build.
—
The Stan Engineering Team