It's all about leaving the echo chamber and accepting that your assumptions might be wrong. Probably wrong.

We’re collecting feedback relentlessly: watching session replays, noting when our assumptions fall flat, and constantly, maybe a bit annoyingly, asking users what’s working and what's not.

All to carve out an experience that works for them. When a clear signal emerges, it goes straight into our backlog. We brutally prioritise feedback and look for more data to validate it. When the data confirms an issue is real, we work on it, and once a solution is ready, it goes into our deployable bits list. From there we follow a release train model, bundling changes into planned cycles. We believe it keeps momentum up while reducing the risk of sudden disruptions for our beta users.
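If it helps to picture the flow, here's a minimal sketch of that triage-and-bundle loop. To be clear, this is not our actual tooling; the types, names, and thresholds are made up purely to illustrate how a signal might move from raw feedback to a planned release.

```ts
// Hypothetical illustration of the triage flow described above.
// None of these names or thresholds come from our real stack.

type Source = "session-replay" | "interview" | "support-ticket";

interface FeedbackSignal {
  id: string;
  summary: string;
  source: Source;
  corroborations: number; // independent data points backing it up
  severity: 1 | 2 | 3;    // 3 = blocks users outright
}

interface BacklogItem {
  signal: FeedbackSignal;
  status: "validating" | "in-progress" | "deployable";
}

// Promote a signal to the backlog only once enough data validates it.
function triage(signal: FeedbackSignal, minCorroborations = 3): BacklogItem | null {
  if (signal.corroborations < minCorroborations && signal.severity < 3) {
    return null; // keep collecting data before committing to a fix
  }
  return { signal, status: "validating" };
}

// Bundle everything that's ready into the next release train,
// rather than shipping each fix the moment it lands.
function nextTrain(backlog: BacklogItem[]): BacklogItem[] {
  return backlog.filter((item) => item.status === "deployable");
}
```

The point of batching into trains rather than deploying every fix immediately is exactly the trade-off above: steady momentum for us, fewer surprise changes for beta users.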

To us, the details matter. Honestly, it's more of an itch than a commercial decision. And examples are already piling up. We’ve watched users click every checkbox on the dashboard, thinking they had to “complete” them. That’s on us. We’ve seen them get stuck on the connection screen because there’s no progress bar. Our blind spot. We’ve watched them rage-click metrics we assumed were self-explanatory. Of course, they weren’t. We’ve been staring at this product for months.

We're fixing all of it. Even the small stuff. Because simplicity doesn’t mean skipping the details. We want to stay lean and focused, without ever leaving users confused or frustrated.

We're also doing our homework to scale this approach for open beta. If we manage to do that, keeping feedback flowing into our backlog without letting it go stale, we'll have a product that solves real needs.