Designing AI-Native Software
With SPACING
Software Synthesis analyses the evolution of software companies in the age of AI - from how they're built and scaled, to how they go to market and create enduring value. You can reach me at akash@earlybird.com.
Gradient Descending Roundtables in London
November 26th: Open Source AI with Alibaba Qwen
Last week we hosted Jonathan and Amelie, cofounders of design venture studio SPACING, to unpack how to approach design for AI-Native startups.
Moving Beyond Chat
Most startups SPACING worked with requested a chat interface, treating it as the default pattern for AI interaction. This led to:
Homogeneous user experiences across products
Home screens with generic prompts like “What do you want to achieve today?”
Users struggling to understand capabilities and how to articulate requests
Suboptimal workflows for specific use cases
The Chat Interface Paradox
Historical Context:
Command-line interfaces → Graphical User Interfaces (GUIs) → back to text-based chat
The key difference: Natural language processing enables conversational interaction
But this doesn’t mean chat is always optimal
When Chat Works:
Cursor - Engineers working with code and text (natural medium)
Text-heavy workflows
Open-ended exploration
Specific questions requiring detailed responses
When Chat Fails:
Tasks requiring visual precision (e.g., changing button colors)
Workflows needing immediate visual feedback
Actions better suited to direct manipulation
Users without clear prompting knowledge
Direct Manipulation vs Chat
Changing a button colour through chat:
Requires writing: “Change the colour to red”
May not get the exact shade desired
Requires specifying hex codes for precision
Lacks immediate visual feedback
Multiple iterations needed
vs. Direct manipulation:
Click colour picker
See all colour options instantly
Immediate visual feedback
Faster and more accurate
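To make the contrast concrete, a hypothetical sketch of the two paths: the chat route has to infer a hex value from free text via a model round trip, while the picker receives the exact value and applies it instantly. All names here are illustrative.

```typescript
// Chat path: stand-in for an LLM mapping "change the colour to red" → hex.
async function applyViaChat(instruction: string): Promise<string> {
  return instruction.toLowerCase().includes("red") ? "#ff0000" : "#000000";
}

// Direct-manipulation path: exact value, immediate visual feedback.
function applyViaPicker(hex: string): string {
  return hex;
}
```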
Four Ideas Beyond Chat
1. AI as a Collaborative Partner
Philosophy: AI as a companion rather than a replacement, working alongside users to enhance capabilities.
Implementation Strategies:
a) Contextual Commenting System
Integrate AI into existing UI patterns (commenting features)
Tag AI in comments on canvas elements instead of switching to separate chat
AI responds in context, maintaining workflow continuity
Example: Comment on a design element → tag AI → AI suggests variations in place
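As a rough illustration of the pattern, here is what routing a tagged comment to a model might look like. Everything below (handleComment, generateReply, the element shape) is a hypothetical sketch, not SPACING's implementation.

```typescript
// Detect an @ai mention in a canvas comment, pass the element's context
// to a model, and reply in the same thread.
interface CanvasElement {
  id: string;
  type: "frame" | "text" | "button";
  properties: Record<string, string>;
}

interface Comment {
  threadId: string;
  body: string;
}

// Stand-in for a real model call.
async function generateReply(prompt: string): Promise<string> {
  return `Suggested variations for: ${prompt}`;
}

async function handleComment(comment: Comment, element: CanvasElement) {
  if (!comment.body.includes("@ai")) return; // only react when tagged

  // The element's context rides along, so the AI answers in place instead
  // of forcing the user into a separate chat window.
  const prompt = [
    `Element type: ${element.type}`,
    `Properties: ${JSON.stringify(element.properties)}`,
    `Request: ${comment.body.replace("@ai", "").trim()}`,
  ].join("\n");

  const reply = await generateReply(prompt);
  console.log(`[thread ${comment.threadId}] AI: ${reply}`);
}
```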
b) Project-Aware AI Companion
AI “lives” within design files with persistent context
Receives master instructions about project goals, brand guidelines, tone of voice
Eliminates repeated context-sharing (unlike ChatGPT workflow)
Press button to request copywriting assistance
AI works in background while designer continues other tasks
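A minimal sketch of the persistent-context idea, assuming a project-level "master instructions" record that gets prepended to every request (field names are illustrative):

```typescript
// Project context is stored once and rides along with every AI request,
// so the user never has to re-share it.
interface ProjectContext {
  goals: string;
  brandGuidelines: string;
  toneOfVoice: string;
}

function buildRequest(ctx: ProjectContext, task: string): string {
  return [
    `Project goals: ${ctx.goals}`,
    `Brand guidelines: ${ctx.brandGuidelines}`,
    `Tone of voice: ${ctx.toneOfVoice}`,
    `Task: ${task}`,
  ].join("\n");
}
```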
c) Exploratory AI Features
“Surprise button” next to colour picker
Generates random colour suggestions
Press multiple times for variety
Sparks inspiration
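The mechanic itself is tiny; a sketch of a surprise handler might be nothing more than:

```typescript
// Illustrative surprise handler: each press yields a random hex colour.
function surpriseColour(): string {
  const channel = () =>
    Math.floor(Math.random() * 256).toString(16).padStart(2, "0");
  return `#${channel()}${channel()}${channel()}`;
}
```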
d) Icon Generation
Quick icon generation on demand
No need for external icon libraries
Integrated into workflow
e) AI-Generated First Drafts
Addresses “blank canvas” problem
AI creates initial structure (3 screens, basic flow, components)
Goal: Not accuracy, but a starting point for iteration
Leverages a human strength: critiquing existing work vs. creating from nothing
AI generates an initial automation flow so users can modify it rather than build from scratch
Key Insight: People excel at criticising and improving existing work but struggle starting from nothing. AI provides that initial draft.
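A sketch of the first-draft shape: a rough three-screen structure the user critiques and edits. A real product would ask a model for this; the stub below only shows the shape of the output.

```typescript
interface Screen {
  name: string;
  components: string[];
}

// Stand-in for a model-generated first draft: deliberately generic,
// because the goal is a starting point, not a finished design.
function firstDraft(idea: string): Screen[] {
  return [
    { name: `${idea}: Home`, components: ["header", "hero", "primary CTA"] },
    { name: `${idea}: Detail`, components: ["header", "content", "actions"] },
    { name: `${idea}: Settings`, components: ["header", "form"] },
  ];
}
```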
2. Proactive AI
Philosophy: AI anticipates next actions and surfaces suggestions contextually.
Implementation Examples:
a) Contextual Suggestions
User selects table cells
AI surfaces most obvious next action
Keyboard shortcut to execute
Example: Select cells → AI suggests rewriting content → Press shortcut to execute
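Sketched in code, the pattern might look like this; Selection, Suggestion, and the shortcut wiring are all assumptions:

```typescript
// Surface the most likely next action for the current selection and
// bind it to a shortcut.
type Selection = { kind: "table-cells" | "text" | "image"; ids: string[] };
type Suggestion = { label: string; run: () => void };

function suggestNextAction(sel: Selection): Suggestion {
  switch (sel.kind) {
    case "table-cells":
      return { label: "Rewrite content", run: () => console.log("rewriting", sel.ids) };
    case "text":
      return { label: "Summarise", run: () => console.log("summarising", sel.ids) };
    default:
      return { label: "Describe", run: () => console.log("describing", sel.ids) };
  }
}

const suggestion = suggestNextAction({ kind: "table-cells", ids: ["A1", "B2"] });
console.log(`Press the shortcut to: ${suggestion.label}`);
```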
b) Smart Filtering
100+ job applicants to review
AI pre-scores applicants
Surfaces top 3 candidates
Provides strength/weakness summaries
Generates interview questions
User focuses on high-value activities (interviewing) rather than screening
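A minimal sketch of the pre-scoring flow: rank the pool, surface the top three with summaries. scoreApplicant stands in for a model-backed scorer; the length-based score is a placeholder.

```typescript
interface Applicant {
  name: string;
  cv: string;
}
interface Scored extends Applicant {
  score: number;
  summary: string;
}

function scoreApplicant(a: Applicant): Scored {
  const score = a.cv.length % 101; // placeholder for a model-derived score
  return { ...a, score, summary: `Strengths and weaknesses for ${a.name}` };
}

// Surface only the top k candidates so the user spends time interviewing,
// not screening.
function topCandidates(pool: Applicant[], k = 3): Scored[] {
  return pool
    .map(scoreApplicant)
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```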
c) Autonomous Content Creation
AI learns brand identity and competitor landscape
Generates new content pieces daily without prompting
User wakes up to 5 new AI-generated video outputs
Proactive value delivery while user sleeps
Key Challenge: Onboarding & Data Collection
Initial approach: Comprehensive onboarding collecting information manually
Problem: Onboarding grew too long, but insufficient data meant poor results
Solution: Balanced approach
Collect minimum data for first value
Continuously gather data during usage
Feedback mechanisms: like/dislike, quality ratings, script preferences
Long-term: Close the loop with analytics showing ad performance
Critical Balance: Minimise user friction while maximising AI intelligence
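One way this balance might look in code, with an assumed minimal profile enriched by usage signals over time (field names are illustrative):

```typescript
// Minimum viable profile at onboarding, enriched continuously afterwards.
interface UserProfile {
  websiteUrl: string;        // collected once, at onboarding
  likedStyles: string[];     // gathered during normal usage
  dislikedOutputs: string[];
}

function enrichProfile(
  profile: UserProfile,
  signal: { kind: "like" | "dislike"; value: string }
): UserProfile {
  // Each like/dislike refines the profile without adding onboarding friction.
  return signal.kind === "like"
    ? { ...profile, likedStyles: [...profile.likedStyles, signal.value] }
    : { ...profile, dislikedOutputs: [...profile.dislikedOutputs, signal.value] };
}
```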
3. Task-Driven Workflows
Phase 1: Chat-First Approach
Assumption: Give users conversational interface for maximum flexibility
Reality: Users didn’t understand what to type
Problem: Too much freedom without guidance
Especially challenging for users without marketing expertise
Result: Poor user experience, confusion, abandonment
Phase 2: Guided Workflow
Broke down required information into structured steps
Used AI to enhance each input field
Clear progression: Step 1 → Step 2 → Output
Result:
Shorter time to value
Better outcomes
Higher user satisfaction
More consistent quality
Key Lesson
“We started with uncontrolled freedom...and the result was no one could figure out what to do. Then we moved to a guided approach...and the overall experience was just better.”
Supporting Features:
a) Prompt Enhancement
User types short, simple prompt
“Enhance prompt” button
AI expands to sophisticated prompt automatically
Better results without prompt engineering knowledge
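A sketch of the button's handler: the short input is expanded before generation. enhance() stands in for a real LLM call; the template only shows the shape of the transformation.

```typescript
async function enhance(shortPrompt: string): Promise<string> {
  // A real implementation would send an expansion instruction to a model.
  return (
    `Create a polished, on-brand result for: "${shortPrompt}". ` +
    `Specify layout, tone of voice, and target audience explicitly.`
  );
}
```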
b) Example Templates
Pre-filled examples to start from
Users click, see result, then iterate
Educates users on capabilities through interaction
c) Smart Defaults
Pre-populated based on user type (designer vs. founder)
User only edits exceptions rather than filling blank forms
Reduces cognitive load
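Sketched as a defaults function keyed on user type, so users edit exceptions rather than fill blank forms (field names are illustrative):

```typescript
type UserType = "designer" | "founder";

function defaultsFor(userType: UserType) {
  return userType === "designer"
    ? { showAdvancedTools: true, autoLayout: true, aiAutonomy: "low" }
    : { showAdvancedTools: false, autoLayout: false, aiAutonomy: "high" };
}
```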
4. Personalised Software
Philosophy: Software adapts to individual users and teams rather than one-size-fits-all.
The Home Analogy
“Your home should be arranged to fit your needs. You shouldn’t move into a house and have to live with someone else’s furniture arrangement.”
Personalisation Dimensions:
a) Custom Feature Development
Users build features on top of base software
Example: Calendly + Stripe integration
Many users want to charge for meeting slots
Calendly doesn’t offer this
Vision: Users prompt the product to build the Stripe integration themselves
Enables non-technical users to extend software
b) User-Type Adaptation
Onboarding asks: Designer or Founder?
Designer view: Shows all customisation tools, auto-layout options, advanced features
Founder view: Hides overwhelming technical details, surfaces agentic AI features
Software adapts to expertise level
c) Usage Pattern Learning
Keyboard shortcut user: Hide tooltips, maximise screen space
Visual navigation user: Show labels, icons, clear UI elements
AI observes behaviour and adjusts interface accordingly
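A sketch of how observed behaviour might drive interface preferences; the 70% shortcut threshold is an assumption, not a recommendation:

```typescript
interface UsageStats {
  shortcutActions: number;
  menuActions: number;
}

function uiPreferences(stats: UsageStats) {
  const total = Math.max(1, stats.shortcutActions + stats.menuActions);
  const shortcutRatio = stats.shortcutActions / total;
  return shortcutRatio > 0.7
    ? { showTooltips: false, compactToolbar: true }  // power user
    : { showTooltips: true, compactToolbar: false }; // visual navigator
}
```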
d) Team-Level Customisation
Company-specific workflows
Example: Custom performance review process in Linear/Asana
Not in standard software
Team prompts tool to build their unique workflow
Critical Challenges Discussed:
Challenge 1: Consistency Across Users
Problem: If everyone’s UI looks different, how do colleagues help each other?
Zoom calls: “Click top left” doesn’t work when it’s in a different position for a colleague
Potential solution: Team-level personalisation rather than individual
Open question: Balance between personalisation and shared understanding
Challenge 2: Guardrails
What can AI adjust vs. what must remain stable?
Need clear boundaries for personalisation scope
Risk: AI moving critical elements unexpectedly
Challenge 3: Progressive Disclosure
Different users see different features
How to maintain product coherence?
When does personalisation become fragmentation?
Q&A Discussion Highlights
On Data Collection & AI Quality
Question: How much onboarding data is needed for proactive AI?
Super Scale Experience:
Started with website scraping
Added competitor analysis for niche understanding
Iteratively removed unnecessary fields
Balance: Long onboarding vs. quick value
Solution: Minimum viable onboarding + continuous learning
Feedback loops: Scoring systems, like/dislike, style preferences
Future: Analytics closing the loop (ad performance → agent improvement)
On User Friction & Context
Challenge: Getting users to provide context without disrupting work
Strategies:
Automatic data collection where possible
Example: Connect accounts and scrape data directly instead of asking for information manually
Reduce manual input through intelligent defaults
Key Principle: Users won’t write perfect prompts or provide extensive context. Design must accommodate this reality.
On Getting Users Started with Chat
The “Empty Chat Box” Problem: Users don’t know what to type when facing a blank chat interface.
Solutions Implemented:
Prompt Templates Below Chat
Pre-written use cases users can click
Shows capabilities through examples
Especially important for users unfamiliar with AI tools (e.g., lawyers)
Structured Input Forms
Guide users through required information
AI enhances each field
Clear progression to output
Visual Examples/Galleries
Show what others have created
User-generated content inspiration
Similar to Notion templates, V0 examples
Calibrates expectations and demonstrates value
Starting Point Generation
User clicks example
Gets initial result (may not be perfect)
Iterates from there
Educates on capabilities through interaction
On Feedback Mechanisms
Question: How do you collect meaningful AI feedback beyond thumbs up/down?
Challenges:
Users rarely provide feedback
Thumbs up/down insufficient
Need both quantitative and qualitative insights
Approaches:
Quantitative Tracking:
Time to export/completion
Number of regenerations needed
Whether users edit prompts after submission (indicates dissatisfaction)
Usage of manual editing tools (signals AI output inadequacy)
Qualitative Feedback:
Optional detailed feedback forms
“Why don’t you like this?” prompts
Script/style preferences
Problem: Low user engagement with feedback
Indirect Signals:
If user goes to manual editor after AI generation = AI missed the mark
Multiple regenerations = AI not understanding requirements
Quick export = successful generation
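These rules are simple enough to sketch; the signal names and the regeneration threshold below are assumptions, not the speakers' implementation:

```typescript
type Signal = "manual_edit" | "regenerate" | "export";

function inferVerdict(signals: Signal[]): "hit" | "miss" | "unclear" {
  const regens = signals.filter((s) => s === "regenerate").length;
  if (signals.includes("manual_edit")) return "miss"; // AI missed the mark
  if (regens >= 3) return "miss"; // AI not understanding requirements
  if (signals.includes("export") && regens <= 1) return "hit"; // quick export
  return "unclear";
}
```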
Challenge: People are “annoyed” by feedback requests (like ChatGPT’s constant questions about response quality)
On Voice Interfaces
Context: Discussion of voice as alternative to chat
When Voice Works:
Thinking out loud / brainstorming
Therapeutic “dumping” of thoughts to ChatGPT
Quick input capture
Natural expression of ideas
When Voice Fails:
Open office environments (privacy, noise)
Latency issues
Can’t easily review or edit what was said
Reading is faster than listening to responses
Forces real-time engagement (can’t pause and think)
Key Insight: “You talk quicker than you type, but you read quicker than you speak” - voice works for input but text is better for output consumption.
On Model Comparison & Iteration
Cursor Feature Highlight:
Run multiple models in parallel
Compare outputs side-by-side
Some models more conservative, others more creative
Users choose best result
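The fan-out itself is straightforward; a sketch with callModel standing in for real provider clients:

```typescript
// Send the same prompt to several models in parallel and return all
// outputs for the user to compare side-by-side.
async function callModel(model: string, prompt: string): Promise<string> {
  return `[${model}] draft for: ${prompt}`; // stand-in for an API call
}

async function compareModels(prompt: string, models: string[]) {
  const outputs = await Promise.all(models.map((m) => callModel(m, prompt)));
  return models.map((m, i) => ({ model: m, output: outputs[i] }));
}
```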
Inline Commenting Approach:
Highlight specific parts of AI output
Comment on what needs changing
Regenerate specific sections rather than entire output
Reduces back-and-forth iterations
On Progressive Disclosure & User Types
Question: Who decides user type - the person or AI?
Answer: AI decides based on:
Onboarding questionnaire responses
Observed usage patterns
Keyboard shortcut usage vs. visual navigation
Feature utilisation over time
Evolving understanding of user sophistication
Guardrails Discussion:
Need rules for what AI can/cannot adjust
Some elements must remain stable
Balance adaptability with predictability
Case Study: From Chat to Guided Workflow
Initial Vision:
Chat interface for video ad creation
Assumption: Natural language = better UX
“Create this ad” → AI generates video
Problems Encountered:
Users didn’t understand capabilities
No marketing expertise to craft effective prompts
Too much freedom = paralysis
Prompts too vague for quality output
Long wait times followed by disappointment
The Pivot: Moved to structured, guided flow:
Step 1: Brand information (URL, identity)
Step 2: Voice and tone
Step 3: Niche analysis (automatic)
Step 4: Output preferences
AI enhances each input field
Clear progression to output
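The structured flow maps naturally onto a typed step sequence where each step collects one concern; a sketch, with the brief shape assumed from the steps above:

```typescript
interface AdBrief {
  brand?: { url: string; identity: string };        // Step 1
  voice?: string;                                    // Step 2
  niche?: string;                                    // Step 3, derived automatically
  outputPrefs?: { format: string; length: number };  // Step 4
}

const steps = ["brand", "voice", "niche", "outputPrefs"] as const;

// The UI always knows which single step to present next.
function nextStep(brief: AdBrief): (typeof steps)[number] | "done" {
  return steps.find((s) => brief[s] === undefined) ?? "done";
}
```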
Results:
Dramatically shorter time to value
Higher quality outputs
Better user satisfaction
More predictable results
Users understand capabilities through structure
Signals
What I’m Reading
From Context Engineering to Context Platforms
Earnings Commentary
Over time, the best enterprises will have seamless data access across many of their data lakes. Whether it’s their observability data lake, their security data lake, their IT data lake. Because eventually, you want agents to go and go figure out what’s going on across multiple data lakes and solve your problem. And sometimes problems cross across multiple data lakes, right? If something is down in application, maybe the firewall shut it down, so firewall is in security data lake. So if you want this agentic capability across data lakes, all we’re trying to do is we’re trying to build the enterprise fabric with our customers.
Nikesh Arora, Palo Alto Networks Q1 2026 Earnings Call
We now have over 2,450 customers on Elastic Cloud using us for Gen AI use cases with over 370 of these amongst our cohort of customers spending $100,000 or more with us annually, representing nearly 1/4 of our greater than $100,000 ACV customer cohort leveraging Elastic for GenAI use cases.
Ashutosh Kulkarni, Elastic Q2 2026 Earnings Call
Have any feedback? Email me at akash@earlybird.com.