Software Synthesis analyses the evolution of software companies in the age of AI - from how they're built and scaled, to how they go to market and create enduring value. You can reach me at akash@earlybird.com.
London AI Breakfast Series
Last week we hosted a great discussion on the MCP ecosystem, with participants from ElevenLabs, Pydantic, and ACI.dev. Takeaways here.
Next up:
July 16th: Computer Use and Browser Agents
July 23rd: AI Evals
July 30th: Future Founders
With all the debate around ‘vibe revenues’ and ‘experimental runrate revenue’, one potential solution I’ve been thinking about a lot is onboarding.
Companies have studied product-led onboarding since PLG became table stakes for prosumer SaaS in the 2010s, but I think it deserves much more attention in the AI age. To a large extent, retention comes down to the customer journey from sign-up to activation, yet too few companies seem to obsess over that entire journey through to activation and expansion.
I sat down with Kate Syuma to dive into best practices for product-led onboarding in the AI age.
Kate is a Growth Advisor and the Founder of Growthmates. She led Growth at Miro for 6+ years, seeing the company grow from a young startup of 50 people to a Series C unicorn with 2,000+ people. Her mission at Growthmates is to share knowledge that helps others grow meaningful products that delight users. She has collaborated with 15+ B2B and B2C companies like Grammarly (who just acquired Superhuman) and Manychat, and created educational content with brands like Amplitude, Appcues, UserPilot, and more.
One of the biggest areas of focus in AI today is quality of revenue - companies are investing heavily in activating their users so that retention improves. This then amplifies the importance of the onboarding process. How should AI founders approach onboarding differently to avoid ‘experimental revenue’?
I've been documenting some thoughts on this, and there's a report I produced dedicated to onboarding trends in AI tools. We analysed 10 of the most popular AI tools and walked through their onboarding experiences. Surprisingly, the same patterns appear almost everywhere.
What's interesting is that six out of ten products offer something like a pre-signup experience: you can touch the product and start getting value before actually signing up. The trend I see is shortening time to value. Instead of asking lots of questions and putting many steps before the first outcome, they're trying to deliver it earlier.
But the problem is the outcome and its quality, especially for Gen AI products. I've been recently looking for a Gen AI product to change the style of images, and I tried 10 products probably in one hour because there are so many of them. The outcome is usually of poor quality.
Shortening time to value will not help if the outcome is not good enough. We need to combine both — first, shortening time to value because now we can do that with AI products and prompts, but then also ensuring good enough quality of this first outcome. Otherwise, if this quality is very poor, people will not stay.
The worst thing I've seen is when you're waiting for this first outcome to be generated — the model is processing the request — the product is asking you to upgrade immediately or giving you pop-ups over and over to upgrade. This is a very dark pattern that I've seen a couple of times that we definitely need to avoid.
High-growth AI tools often go viral, yet 30-day retention can fall off a cliff. When you advise founders, what signals do you watch to distinguish “tourists” from users who are likely to stay and expand usage — and how do you surface those signals inside the onboarding flow?
I was thinking about what you can do in the onboarding process to identify these segments earlier. You definitely need to collect profiling information. The more relevant and correct profiling information you collect, the better you know your users from the first session.
There are a couple of examples, like ElevenLabs. Here’s the type of information they're collecting: What type of creator are you? What would you like to do with ElevenLabs?
Typical profiling questions. You'll see an example of Lovable also collecting quite a bit of profiling data.
This data is not always the most accurate because people sometimes speed through and click randomly. But it's one slice of data you can use for attribution to identify corporate users, especially for B2B companies.
In Claude, they're also asking: do you want to use it as an individual or with a team? Especially if this is a corporate Gmail or corporate domain account, and they identified they want to use it with the team, maybe they selected some super relevant ICP type of intent. This is data to slice this segment and distinguish it from so-called tourists.
From this data, you can even identify your PQLs — high-quality product qualified leads. For example, if you identified someone from Apple registered in your product with a corporate domain, you can reach out to them, activate your sales-led strategy, and start nurturing these accounts more.
But definitely, in my experience, only by asking good enough profiling information can we know these customers. There are also tools used for this attribution, like Clearbit, that can give you extra data about accounts and help distinguish them from tourists.
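The PQL flagging Kate describes can be sketched in a few lines. This is a minimal illustration, not any company's actual logic: the profiling field names (`use_with_team`, `creator_type`), the tier names, and the free-email-domain list are all hypothetical, and a real setup would enrich accounts with a tool like Clearbit rather than rely on the email domain alone.

```python
# Minimal sketch: flag product-qualified leads (PQLs) from signup
# profiling answers plus email domain. All field names and tiers
# are hypothetical, for illustration only.

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "icloud.com"}

def score_signup(signup: dict) -> str:
    """Classify a signup as 'pql', 'warm', or 'tourist'."""
    domain = signup["email"].split("@")[-1].lower()
    corporate = domain not in FREE_EMAIL_DOMAINS
    wants_team = signup.get("use_with_team", False)   # profiling answer
    has_intent = signup.get("creator_type") is not None

    if corporate and wants_team:
        return "pql"       # corporate domain + team intent: route to sales
    if corporate or has_intent:
        return "warm"      # nurture with tailored onboarding
    return "tourist"       # likely just browsing

signups = [
    {"email": "ana@apple.com", "use_with_team": True, "creator_type": "podcaster"},
    {"email": "bob@gmail.com"},
]
print([score_signup(s) for s in signups])  # ['pql', 'tourist']
```

The point of the sketch is the routing, not the rules: once a signup lands in the `pql` bucket, it can trigger the sales-led outreach Kate mentions, while `warm` and `tourist` stay in product-led flows.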
There's a great study you published about how TheyDo doubled their activation once they made "users reaching the opportunity matrix" their activation metric. How do you look at complex workflows, isolate the single interaction that is most predictive of long-term value, and then design the onboarding so that as many people as possible get to that gate?
TheyDo found the opportunity matrix to be the most valuable use case. From my experience, identifying that use case is not easy and requires a lot of user research.
One of the best methods is something like diary studies — not necessarily pure diary studies, which are complex and expensive, but you need to observe the real experience of some very relevant accounts from day one for, let's say, 40 days. You really need to see how they're adopting the product and what use case sticks most to them in real time while they're exploring the product.
It might be a complex study to do, but also by just doing typical user research, user tests, and user interviews over months and quarters — especially if you're an early founder — you will see the pattern. One use case is typically far more dominant than others. This comes from qualitative data.
Quantitative data comes in the form of NRR analysis and cohort/account-level trends. If you have accounts that have been with your product for longer and renewed every year, you can see what they have in common and what they're using most often.
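The cohort slice of that quantitative analysis can be sketched in plain Python. This is a simplified illustration, assuming activity events already exported from an analytics store as `(account_id, signup_month, active_month)` tuples with months as integers; real pipelines would work off timestamps and a warehouse query.

```python
# Minimal sketch: month-N retention by signup cohort.
# Input shape is hypothetical: (account_id, signup_month, active_month),
# with months encoded as integers for simplicity.
from collections import defaultdict

def cohort_retention(events, n):
    """Fraction of each signup cohort still active n months after signup."""
    cohort_accounts = defaultdict(set)   # signup_month -> all accounts
    retained = defaultdict(set)          # signup_month -> accounts active at month +n
    for account, signup_month, active_month in events:
        cohort_accounts[signup_month].add(account)
        if active_month - signup_month == n:
            retained[signup_month].add(account)
    return {m: len(retained[m]) / len(accs)
            for m, accs in cohort_accounts.items()}

events = [
    ("a", 1, 1), ("a", 1, 2), ("b", 1, 1),   # cohort 1: only "a" is back at month +1
    ("c", 2, 2), ("c", 2, 3), ("d", 2, 3),   # cohort 2: both active at month +1
]
print(cohort_retention(events, 1))  # {1: 0.5, 2: 1.0}
```

Comparing these fractions across cohorts, and cross-referencing the long-lived accounts with the profiling data above, is one way to surface what renewed accounts have in common.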
When you're thinking about creating an onboarding path, I would say this is quite an exceptional example. TheyDo really identified this one use case and led the majority of customers to this one. In more diverse or horizontal B2B products, it's impossible to lead everyone in one path.
At Miro, for example, we identified the top four or five use cases — not 10, 12, 11, just the top four or five. Then we asked about them at the end of the onboarding flow, something like what we've seen with ElevenLabs today. For each of these top four use cases, we had preselected templates — a short number of templates that we showed for each use case, not just one particular use case, but several templates attributed to one of these four use cases. Then the onboarding flow was tailored towards one of these things.
When we did that back in the day, it was very hard to scale because AI wasn't there yet. It was very hard to scale that personalisation. But today, with AI avatars, it's far easier to scale it for several use cases just by adjusting it in the builder.
There's a guide I attached about how to build this particular AI avatar using Pine AI for your demo. Basically, you just need to create several scripts for several use cases, then it will work for not just one use case.
A blank canvas can feel intimidating — what heuristics help founders decide how much guidance (e.g., step-by-step AI tutor) versus pure self-serve exploration is optimal at different stages of company maturity?
When the product is very young, it's probably very hard to talk about any type of personalization because it's pretty complex to do any personalization. However, if we're thinking that the product can be used by both tech-savvy and non-tech-savvy users, one of the most useful questions for personalization criteria is experience level.
For example, this is a Figma screenshot, but there are plenty of others doing that: "Have you used Figma products before?" Lovable could ask something like "Have you ever coded before?" Is it something familiar to you?
Based on that, if an engineer is registering in a product like Lovable or Replit and they have experience before, we don't need to overwhelm them with checklists, tutorials, guides, etc. But if I were registering and I don't have hands-on experience building code or even creating prompts that would help me generate these outcomes, I would be lost without this guidance.
Even before adding any real personalisation, you can just start asking this question and learn about your ICP or the people at the top of your funnel. You may see, perhaps unexpectedly, that a lot of people who have never coded before and are not tech-savvy are registering in your product, and that they leave more often than those with some experience.
At a later stage, the situation may be different. Maybe 90% of your top of funnel are tech-savvy people because your marketing is doing a great job, and you don't need to personalise for them. But if there's a 50-50 split, of course, give more tailored guidance to the non-tech-savvy people. Lovable is not doing this yet, but they said they're going to.
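Mechanically, the branching Kate describes is a simple routing decision on one profiling answer. A minimal sketch, with hypothetical track names and onboarding steps, might look like this:

```python
# Minimal sketch: branch the onboarding flow on a single profiling
# question ("Have you coded before?"). Track and step names are
# hypothetical placeholders, not any product's real flow.

ONBOARDING_TRACKS = {
    "experienced": ["blank_project"],   # stay out of the user's way
    "novice": ["guided_tour", "prompt_tutorial", "template_gallery"],
}

def pick_track(has_coded_before: bool) -> list[str]:
    """Return the ordered onboarding steps for this user."""
    key = "experienced" if has_coded_before else "novice"
    return ONBOARDING_TRACKS[key]

print(pick_track(True))   # ['blank_project']
print(pick_track(False))  # ['guided_tour', 'prompt_tutorial', 'template_gallery']
```

Even this one-question branch gives you the funnel data Kate mentions: logging which track each signup lands in tells you whether your top of funnel is mostly tech-savvy or split 50-50.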
What are some emerging onboarding patterns that are very exciting for you? And what are some patterns you would definitely encourage founders to avoid as they think about activation?
One of the exciting patterns, which is probably counterintuitive, is that I think there's an opportunity to scale human onboarding that mimics a real personality. This is an example from TheyDo — they replicated their CEO who is guiding every user inside the product, especially for enterprise-level products where customers are more used to real demo sessions with real humans. This is an amazing opportunity to make the experience more human, not just a "next, next, next" type of tour guide.
But there are risks as well, related to trust. There's still some behavioural change happening in consumer minds regarding AI avatars. There's a proportion of people who are already more savvy with AI avatars, but there are people who don't trust them. We need to be selective and very careful when we experiment with that concept. I'm really curious to see where it all goes.
An anti-pattern I mentioned was around the creation process. With AI tools, we are still creators. We're not delegating everything to AI. As a user, you really need to be involved. I think that's another issue — some AI tools are trying to automate everything, so the user is not actively involved anymore in the creation process, which limits them from creating a habit and stickiness to the product. We need to give them space to create still.
For early-stage companies that don't have dedicated functions for growth, success, or design, what are the rituals you would recommend for closing the feedback loop between customer feedback and tickets and then improving their onboarding on a good cadence?
One of my favorites is session recordings. If these companies already have tools like Amplitude, Hotjar, FullStory in place, they all usually have session recording functionality that's enabled when your user is registering. It hides personal data for sure, but then you can watch these recordings.
Some people even have rituals: Friday sessions when they watch several recordings like a movie and have live brainstorming, live discussion. It's unbelievable how many insights you can see when people are struggling and you really see their core issue, how they're not sure what to choose, for example. It's not just a heat map where you already see everything — it's really interactive.
I would suggest booking a one-hour session every Friday to watch these screen recordings, and enabling session recording in whatever tool you're using.
One tried and true pattern, but definitely key for AI companies, is community-led growth and user-generated content templates. What are some best practices for incorporating community in onboarding to accelerate time to value?
When I was working at Miro, Miro was one of the first UGC community-led B2B companies. After that, Figma started doing community, and Notion. We were at the beginning of that.
What I've seen is that UGC is usually too complex and too niche to integrate into onboarding as a template: people get lost. For example, if somebody started at Miro with a Jake Knapp design sprint board, which is enormous, it was very hard for them to understand the tool; there are so many complexities there.
But it's good to use it for acquisition purposes, exactly how Lovable is using that — showing it on the website for building trust, but also as an acquisition channel. And for deepening adoption for niche use cases. When your user already tried the product and already knows the basics, showing them more community examples really helps to deepen their adoption.
I'm using the Figma community often. Every time I need a specific element, I search in the Figma community. Same thing — if a person wants to design a website, they go to the Lovable community to see what similar things can be done. So it's an example reference, but not necessarily a lever for activation. People will struggle with first-time content like UGC — it's very specific and quite complex, usually.
Data
Security software is pulling away from other categories.
Whilst data & infrastructure software is being rewarded for its foundational role in enabling AI.
Palantir’s surge is largely responsible for the high growth software basket’s sharp rise in recent months.
Quantum computing is expected to be a $22bn market by 2030.
Reads
Data Rules Everything Around Me: The Future Of Enterprise Applications
There Are No New Ideas in AI… Only New Datasets
MCPs: Value Creation, Capture, and Destruction—Lessons from the API Era
Training AI is Fair Use, Product Protection Versus LLM Liability, Piracy and Competition
Have any feedback? Email me at akash@earlybird.com.