[#76] From zero to live (Part 2): Testing Emergent’s “AI-built backend” promise (and hitting the limits)
Vibe coding with front-end tools is just another word for fancy prototyping - setting up backend services is still very much a challenge, and we may be overestimating the vibe coding TAM in general
About a month ago, I decided to test out the so-called vibe-coding revolution. You know the pitch: “just talk to the AI and it builds your full-stack product.” So I set myself a real challenge: not just a shiny landing page or some fancy buttons pretending to be a product, but an actual full-stack app.
And when I say full-stack, I mean the real deal:
✅ Backend with a database
✅ Auth and sign-in flows (looking at you Google OAuth)
✅ A proper database logging real data (Supabase, MongoDB)
✅ AI recommendations plugged in via OpenAI APIs
Because I’ve seen way too many examples of people talking about how they built a full product through vibe coding, and when you look under the hood, it’s basically a glorified clickable prototype.
My past experience with Cursor was a fail - duct-tape dev for non-coders
So I decided to build a fitness tracker: one where you log your workouts and it spits out AI-powered training recommendations. Simple in theory.
My first attempt? Cursor - but that went sideways almost immediately. I didn’t “develop” anything: I duct-tape-engineered my way through prayers, vibes, and increasingly frantic copy-pasting between Cursor and ChatGPT. Cursor would generate files, I would nod intelligently, and then promptly feed them to ChatGPT.
At one point Cursor just lost the plot. And this is probably because I didn’t really have the plot to begin with. It started spinning up new HTML files like it was possessed, multiplying them in some kind of haunted debugging ritual. Endless loops, mysterious errors, and I had no clue what was happening.
I was able to use Lovable + Supabase + Google Cloud Console to set up a working frontend + backend website
From there I switched to Lovable, which thankfully with its pre-integration with Supabase allowed me to set up the backend, auth, and a real database seamlessly. I still had to wrestle with Client IDs, secrets, and API keys (Google for OAuth so people can log in with email, OpenAI for the recommendation engine), but Lovable made the whole thing way less painful.
The annoying part didn’t disappear. You still have to visit Google, generate an OAuth client ID/secret, go to OpenAI, create an API key, and so on. The difference is that Cursor wanted me to hop between consoles, paste keys into three different places, and perform ritualistic incantations. With Lovable, while I did have to manually add the secrets to Supabase to configure them, I just pasted the keys into the chat prompt, and it mapped them into Supabase for me and handled the backend wiring. I still created the keys, but Lovable saved me the constant context switching and manual clicking.
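To make that “backend wiring” a bit more concrete: on the Lovable + Supabase side, those keys typically end up as project secrets that a server-side function reads at runtime, rather than values hard-coded into the app. A minimal sketch of what that looks like in a Supabase-style Edge Function - the variable names here are my own illustration, not what Lovable actually generated for me:

```typescript
// Hypothetical Supabase Edge Function sketch: read the secrets that were
// added to the project (names are illustrative) instead of hard-coding them.
Deno.serve(async (_req: Request) => {
  // Secrets configured on the project, pulled from the environment at runtime
  const openaiKey = Deno.env.get("OPENAI_API_KEY");
  const googleClientId = Deno.env.get("GOOGLE_OAUTH_CLIENT_ID");

  if (!openaiKey || !googleClientId) {
    // If the keys were never mapped in, the backend can't do its job
    return new Response("Missing configuration", { status: 500 });
  }

  // ...use the keys to call OpenAI / validate the Google login...
  return new Response(JSON.stringify({ ok: true }), {
    headers: { "Content-Type": "application/json" },
  });
});
```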
Long story short: Lovable got me to a working site far faster than Cursor ever did. I wrote a detailed step-by-step piece about the whole experience, and you can check it out below.
[#73] From zero to live: Building an AI powered app using Lovable, Cursor, Supabase, Google OAuth, & OpenAI (as a non developer)
Building an AI-powered app from scratch can sound intimidating, especially if you have zero coding experience. Over two days, I dove headfirst into creating FitTracker, a fitness tracker that learns from you, using tools like Lovable, Cursor, Supabase, and OpenAI. This is my journey through the chaos, breakthroughs, and hard earned lessons of bringing a…
Emergent is a new “vibe coding” app that raised a $23M Series A in 2025 and claims to handle both the frontend and the backend - exactly the issues I faced with Cursor + Lovable. So naturally, I was excited.
This was recommended to me by
, as I had complained a lot about the backend issues with Lovable and Cursor, and Emergent claims to solve exactly those backend woes. So, to get a reasonable comparison, I decided to build exactly the same app I built with Lovable, with the same functionality - AI-powered recommendations and a Google OAuth log-in. To work, then.
Thursday, 9:05 pm
Dinner done, brain recharged, and somehow I’ve convinced myself I’m ready for Round 3 (Round 1 being Cursor, and Round 2 being Lovable). Despite the emotional scars from Cursor and Lovable, I’ve been itching to try Emergent - it’s been sitting on my “one day when I have time/mental stability” list.
And tonight I have both. (Or maybe it’s just delusion. TBD.)
Plus, Emergent just announced a cool $23M raise in their Series A, so the hype is real. Time to see if it’s deserved, or if I’m about to add another chapter to my therapy journal titled “AI tools that made me question life choices.”
Thursday, 9:10 pm
One learning I’ve taken from last time (duh): don’t add requirements later. Better to have a vision for the app and put all the requirements down at the start. Cool. I’m a veteran (or so I think), so I put in a prompt that covers:
What I want to name the app
Its core functionality
The backend services I want to set up, clearly called out
The prompt in question below:
“Build me an AI powered fitness tracker, complete with log in through Google OAuth, and using OpenAI APIs, to allow me to log-in using my email, add my target muscle groups, activity duration (in time), and type of training (ex: cardio, strength, hybrid) and get an AI powered dynamic workout generated. Call it FitTracker. It should also have a database at the backend where it is logging all the log-ins at user level, and storing what the request was. I want a complete front end and backend enabled website”
Almost immediately, I see that Emergent lets me choose the LLM I want, whereas Lovable manages this automatically behind the scenes.
Snapshot of Emergent:
I can’t do that with Lovable - but that doesn’t really matter. As the end user, what do I care? I just want my prompt to build something.
Snapshot of Lovable entry screen:
The big draw with Emergent is pretty simple: it handles the painful parts. It has a fully managed layer for LLM APIs and Google OAuth. That means two magical things:
Universal LLM API key: I don’t need to go beg OpenAI (or any other model provider) for API keys and billing setups. Emergent handles it. Curious to see how this will be paid for though - I’m assuming some sort of prefunding will have to be done on Emergent.
Managed Google OAuth: no wading through Google Cloud Console trying to decode what a “redirect URL” is while questioning my life choices.
Already a win. Also worth noting: Emergent uses MongoDB for the backend, whereas Lovable plugs into Supabase. Translation: I don’t have to touch any external backend service. No third-party dashboards. No “wait where do I paste this again?” I can just tell Emergent what I want, and it spins it up.
Thursday, 9:12 pm
Emergent spins up a shiny new website for me, and politely asks me to try it. Great. I’m optimistic. We’ve raised $23M worth of expectations here: surely, this time, it’ll Just Work.
I click “Continue with Google”, and the familiar page to select the account I want to log in with appears.
A point to note here: since the OAuth is Emergent-managed, the project name that shows is Emergent’s. You can see it below, where I’m choosing the account to proceed to “Emergent agent”.
Compare this to Lovable - since we’re using my own project setup there, the “skonlyzh…supabase” is the name of the project I set up in Supabase and Google Cloud Console.
I select my account, hold my breath and wait. And then….right back to the login page like nothing ever happened.
No token saved. No “Welcome back.” I feel like I’ve been gaslit. Cursor PTSD unlocked. At this point, I’m staring at the browser with a sense of foreboding. I feel like I already know how this ends.
And just to add to the vibes, while I can see different tabs in the UI, they’re basically non-functional. I can’t click anything to generate an AI-powered workout. I assume that magic unlocks only after login which currently refuses to acknowledge my existence. *Deep sigh*.
I open the Emergent chat window, type out the problem, and then whisper the sacred incantation I’ve now recited across multiple platforms: “please fix.”
At this point it’s less a command and more a prayer.
Also, an observation: Lovable and Emergent seem to have different default designs. Lovable seems to favour a lighter colour palette, while Emergent spins out a black-and-orange sort of default look and feel.
Nothing wrong with either - I generally prefer a lighter colour palette since it seems cleaner - but it was interesting to see Emergent’s “personality”.
Thursday, 9:22 pm
It’s been six minutes since I typed “please fix.” The Emergent agent is still thinking. I also have a lot of thoughts. I keep them to myself for now.
I also can’t add any other prompts until it finishes its existential crisis. Sure, there’s a Pause Agent button, but I don’t want to interrupt whatever ritual it’s performing to appease the debugging gods. So I do what any rational person does when they’re being held hostage by AI build tools. I go get ice-cream.
Thursday, 9:26 pm
I finish my ice cream and, with some apprehension, I open the Emergent tab again. Hallelujah. It claims it has “tested” and “fixed all bugs.” Apparently, the culprit was CORS.
And yes - I actually know what that means. I feel insufferably smart for exactly three seconds.
Quick explainer (because Past Me would’ve appreciated this): CORS is a browser security rule that blocks requests coming from another origin (domain). So if your frontend lives on one URL and tries to talk to a backend sitting somewhere else, the browser says: “um, who are you?” If you don’t whitelist the frontend’s domain in both your backend service (Supabase, in my last project) and your OAuth settings (Google Cloud Console), the browser just blocks everything, because otherwise any random site could get access to sensitive data.
So essentially (and this is something I did when setting up the Lovable + Supabase configuration in my previous experiment), I had to add the domains the requests would originate from in both Google Cloud Console (for OAuth) and Supabase, so that the requesting domain would be recognized and the requests allowed.
I’m guessing it was the same situation here. The app’s domain wasn’t whitelisted somewhere in the stack. Maybe Mongo, maybe wherever Emergent keeps its magical universal Google keys. Point is: something didn’t recognise something else, and CORS did what CORS does best: slam the door shut.
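For the curious, the “whitelisting” usually boils down to a few lines of server config listing the origins that are allowed to call the backend. A rough, purely illustrative sketch in Express terms - I have no visibility into how Emergent configures this internally, and the domain below is made up:

```typescript
import express from "express";
import cors from "cors";

const app = express();

// Only requests from these origins get the CORS headers that let the
// browser accept the response; anything else gets blocked client-side.
app.use(
  cors({
    origin: ["https://fittracker.example.com"], // hypothetical frontend domain
    credentials: true, // needed if the auth cookie/token rides along
  })
);

app.get("/api/workouts", (_req, res) => {
  res.json({ status: "ok" });
});

app.listen(3000);
```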
You can read more about how I avoided this issue in the Lovable + Supabase combo in my previous piece on building with vibe coding tools; I’ve linked the specific sections below. TL;DR: I had to set this up manually in my previous Lovable experiment; Emergent has debugged and fixed it automatically (or so it says).
Side note: 15 minutes of “testing” with no visibility into what’s happening? If I wasn’t stubbornly committed to finishing this experiment (and honestly, this story arc), I’d have rage-quit and tried another tool by now.
But fine. It’s “fixed.” Emergent proudly presents a little report of what it updated, and I’ll admit, I do appreciate that touch. Makes me feel like something actually happened behind the curtain, not just vibes and prayer.
It tells me I have a fully working app, with all the features I requested. I’m euphoric. Last time it took me two days. It can’t be this easy, I think, as I test it again. And I’m right. It’s not. I still can’t log in.
It’s the same issue. Log in → redirect → dumped back on the login screen like an unwanted guest. And of course, the workout generator still refuses to wake up. Hello darkness, my old friend.
At this point, the irritation is real. With Lovable and Cursor, fine: I expected a bit of chaos because I was stitching third-party services together myself. I was the duct-tape engineer in that scenario. But Emergent’s whole value prop is “we own the stack, we handle auth, we test it internally, you just vibe.” So why am I deja-vuing into the same problem? Still, I persist. I’m committed to this bit.
Thursday, 9:34 pm
So, obviously, I open up the Emergent chat again to tell it, hello sir, I still cannot log in.
Except I’ve now run out of free credits. Which means I need to pay. Again. So here I am, signing up for yet another subscription. At this point I feel there is a real opportunity for a start-up to provide cashbacks and discounts for AI dev tools: “Build 9 glitchy apps, get the 10th one sort of working for free.”
What was easier about the payment process is that Emergent is India-based, so I could just pay using UPI. No signing up for yet another recurring card charge and then having to unsubscribe by logging in, or digging through my email to figure out how to cancel from the welcome mail. Unlike Lovable and Cursor, which I had to manually unsubscribe from, because apparently the future of software is just cancelling subscriptions at this point.
I set up the Autopay and cancel it immediately from my phone. Cost: INR 1,649 per month. My issue was that the Autopay mandate is for 10 years - October 2025 to October 2035. I almost didn’t see it because I just wanted to pay and get back to fighting with Emergent. Now, when I read the UPI Autopay docs for Razorpay and Juspay, they clearly mention that if there’s no defined expiry, the default can be 10-30 years. Meanwhile, the other tools I tried, which billed through recurring card setups, had a one-year mandate.
Sorry, but this feels borderline predatory. Easy to miss. And there was a recent piece in The Ken about dark patterns in UPI Autopay, where users unknowingly signed up for long-dated mandates and some apps made cancellation intentionally painful. While the goal is to make money, I think some responsibility needs to be taken here: payment models need to mirror the use case.
For these “vibe coding” platforms, honestly, a pay-as-you-go refill model would just make more sense. A ton of users only pay month one to test, and real retention only becomes visible around month three. Knowing that, I’m surprised there isn’t a credit top-up option, especially when the product is still scaling up. And I genuinely feel that in this day and age, when products are getting easier and easier to build, responsibility and ethical practices matter far more for customer loyalty than just building an easy-to-use product (ex: Zerodha / Ditto Insurance overall seem to have more positive customer reviews, and this is mostly due to their no-spam policies and responsible selling).
Anyway, I’m an expert at debugging. I inspect my plethora of solves and type “PLEASE FIX” in block capitals into the chat. I contemplate adding some expletives, but I reason that that will just eat into my credits, and I’m also a cheapskate.
Thursday, 9:37 pm
Emergent does a deeper debug and discovers there’s some date issue causing the failure. Cool. But also, why wasn’t this caught in the first place? And then, in a very Cursor-core moment (again, not generalizing about Cursor - this is based on my specific experience with it; more details in Part 1), Emergent decides to build a debugging tool to debug the thing it just built.
So instead of fixing the bug, we’re now building the thing that will (hopefully) identify the bug so that we can then fix the bug? A little odd, but I’m going along with it for now.
What Cursor used to do when authentication failed was spin up separate HTML files. I’d have to download them, run them locally, manually test whether each service was working, and then feed the results back into the Cursor chat.
Emergent just pulled the same trick. It generated a debugging link, told me to click it, follow steps, observe, copy outputs, paste them back into chat. And I’m sitting here thinking: If you’re spinning up the file, why can’t you just see the events directly?
Why am I relaying logs back to my own AI engineer? Am I suddenly the debugging middleware here? But fine. I copy. I paste. I sigh. I persist.
There’s also this fun UX surprise: clicking the debug link doesn’t open a new tab - it replaces the chat tab. So I have to reopen the chat every time I click. Much design. Much wow.
What would be magical is the agent testing its own flows - clicking, logging, verifying instead of outsourcing the final mile to me. We’re clearly not there yet, but you can feel we will be. Someday soon, hopefully, this won’t be a co-debugging experience. It’ll be a “you build, you test, let me know when it’s ready” one. And to be fair and not to be overly critical, it’s only a matter of time before Emergent gets there. It’s me also working through my PTSD here.
But until then, copy, paste, pray.
Thursday, 9:40 pm
The debugging tool in question is below: a separate link Emergent has politely chucked in my direction. Fine. We click. It opens a test page with two buttons: Start OAuth Flow and Process Session. Basically a “choose your own debugging adventure.”
I dutifully follow the steps: hit Start OAuth Flow, do the login dance, return, then click Process Session. At this point I feel like I’ve become the bridging API between Emergent and Google OAuth. I am the auth callback URL.
And sure, this is progress: the tool exists! But it’s very “AI throws a gadget at you and hopes you figure it out” energy. The tool’s here, but the tooling isn’t; we’re still in the “click things and report back” stage.
Again, something that I expect will get fixed as these tools evolve.
And then Step 3: check the activity log. The debugging tool dutifully spits out a 401 error. Now, I don’t fully know what crime I’ve committed here: 401 means “unauthorized,” which basically translates to “lol no.” Something somewhere still doesn’t think I exist, or refuses to acknowledge my right to sign into my own app. So yes, progress: I have moved from “nothing is working” to “nothing is working, but now with logged evidence.”
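For context, my understanding is that the “Process Session” step amounts to something like the sketch below: take whatever credential came back from Google, send it to the backend’s session endpoint, and see whether it’s accepted. This is my reconstruction, not Emergent’s actual code, and the endpoint URL is invented:

```typescript
// Hypothetical reconstruction of the "Process Session" check:
// send the session credential to the backend and report the verdict.
async function processSession(sessionToken: string): Promise<void> {
  const res = await fetch("https://fittracker.example.com/api/auth/session", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${sessionToken}`,
    },
    body: JSON.stringify({ token: sessionToken }),
  });

  if (res.status === 401) {
    // "Unauthorized": the backend doesn't recognise the credential,
    // which is exactly the log line I kept pasting back into the chat.
    console.error("401: backend refused the session", await res.text());
    return;
  }
  console.log("Session accepted:", await res.json());
}
```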
I copy the log and paste it into the Emergent chat. Emergent goes, “Excellent debugging.” I immediately feel smug. It turns out I am extremely susceptible to positive reinforcement from AI.
It thinks for three minutes and then spits out another link. Fine. I click it. Repeat the steps. Same 401 error. Again.
At this point, both my patience and my credits are draining in sync. What’s really getting my goat is the credits part: I am literally reporting the same error, on loop. Why am I paying to say, “Hi, still broken”? And I really don’t want to pay INR 1600 more to spin up this simple app.
Nevertheless, I decide to try one more time. Same thing. Open link. Follow steps. Hit 401 error. Copy. Paste. Hope. Despair. Repeat. Emergent keeps generating new debugging links.
Finally Emergent tells me: “The Issue Is In The Backend.” Thank you sir. Didn’t realize that one.
Thursday, 9:55 pm
Screw log-in, I decide. This is Cursor PTSD all over again: layer on layer on layer, praying the problem magically fixes itself, except it doesn’t. And I’m not equipped enough (yet!) to unravel this. I never solved it on Cursor either, so why am I doing this to myself?
Fine. Break time. Let me at least understand what’s happening under the hood. What does the backend look like? What data is being stored? How are events being logged? I open the chat and ask, and after some coaxing (and patience I didn’t plan to spend here), it finally coughs up what I need.
Emergent seems to be very chat-first: I can ask for database details directly in the chat, and it’ll give them to me:
Everything, and I mean everything, happens through that chat box. So instead of clicking around a dashboard or inspecting tables, I just have to literally ask the AI:
“Hey, what’s in my database?
What fields are you storing?
How are you logging events?”
And to its credit, it does answer. It spits out the collections, the schema, the stored fields, all neatly laid out in chat.
✅ The upside: Emergent does set everything up for you: backend, auth, database and you can literally chat with it to inspect your data. That’s cool. It feels magical. “Talk to your infra” energy.
❓ The downside: There’s zero visual database view. And when you can’t actually see your tables or data structure, you don’t know what you don’t know. You lose context. You can’t tweak fields, reorder schema, or sanity-check what’s happening under the hood.
So instead of clicking into a dashboard like you would with Supabase or Firebase, I’m just asking the AI questions and hoping it tells me the truth. Eventually, out of pure desperation, I keep begging it to show me something visual, and it finally spits out a link:
Link to FitTracker Database viewer
This doesn’t feel like the real database but I’ll take what I can get. That said, I definitely preferred the Supabase setup. Being able to open a real dashboard and poke around gave me way more confidence (and control) than just trusting whatever the AI says is happening behind the scenes.
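Just to show what I was missing: with a database you control, “poking around” is a few lines with the official MongoDB driver. This assumes you actually have the connection string (which Emergent doesn’t hand you, as far as I could tell), and the database and collection names below are guesses on my part:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical: connection string, database, and collection names are illustrative
const client = new MongoClient("mongodb://localhost:27017");

async function peekAtFitTrackerData(): Promise<void> {
  await client.connect();
  const db = client.db("fittracker");

  // List the collections the app created (e.g. users, workout_requests)
  const collections = await db.listCollections().toArray();
  console.log(collections.map((c) => c.name));

  // Peek at the five most recent workout requests
  const recent = await db
    .collection("workout_requests")
    .find()
    .sort({ createdAt: -1 })
    .limit(5)
    .toArray();
  console.log(recent);

  await client.close();
}

peekAtFitTrackerData().catch(console.error);
```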
Thursday, 10:10 pm
Anyway, I now sort of understand how the database works, and I finally have a visual to stare at, so that’s progress. I do not have the emotional strength to attempt log-in again - my nervous system has limits - so I tell Emergent, “Let’s just forget auth for a minute and focus only on the AI-powered workout recommendations.” Honestly, I’m pretty sure Emergent silently whispered “thank God” at that point.
The moment I ask it to ignore Google OAuth, it happily spits out yet another link, this one bypassing login entirely. Perfect. Exactly what we both needed.
When I click on quick log-in, it immediately redirects me to the workout planning part of the page.
Emergent spun out the below:
For comparison, Lovable looked like the below:
As I mentioned, there’s some difference in the default colour schemes - clearly Emergent is going for a darker vibe here - but nothing that can’t be tweaked with a prompt. Right now I’m more interested in seeing if the AI-powered workout recommendation works.
Thursday, 10:16 pm
I select the muscle groups, click “generate workout plan,” and immediately start praying loudly. At this point, after the failure with the log-in feature, I contemplate quickly running and lighting some agarbattis.
It takes a little time to think, but eventually, it spits out a workout plan. I quickly skim through it. The structure and tone feel very similar to what Lovable + my own OpenAI key produced, which makes sense, because it’s ultimately OpenAI under the hood here too. The difference isn’t the model, it’s the setup and platform experience around it.
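My guess at what happens on every “generate workout” click is a single chat-completion call with the form inputs templated into a prompt, something like the sketch below. The model name and prompt wording are assumptions on my part; the only real difference is whose API key gets used - mine in the Lovable build, Emergent’s universal key here:

```typescript
import OpenAI from "openai";

// Hypothetical sketch of the kind of call the platform likely makes;
// in my Lovable build this used my own key, here it's Emergent's universal key.
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function generateWorkout(
  muscleGroups: string[],
  durationMinutes: number,
  trainingType: "cardio" | "strength" | "hybrid"
): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumption: any small chat model would do
    messages: [
      {
        role: "system",
        content: "You are a fitness coach generating structured workout plans.",
      },
      {
        role: "user",
        content: `Create a ${durationMinutes}-minute ${trainingType} workout targeting: ${muscleGroups.join(", ")}.`,
      },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```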
An observation: when I bought a paid Emergent plan, I assumed the OpenAI API costs incurred through Emergent’s universal LLM key would be deducted from my credits, but it seems Emergent is absorbing those costs.
I expected to see my credits drop with every OpenAI API call, but they didn’t budge. So clearly, Emergent is eating that cost (up to some limit) just to let users test things. Much appreciated, genuinely.
I’m guessing they do the same with other third-party integrations too. For context, when I used Lovable + Supabase, I had to create my own Google OAuth credentials, and Google asked me to set up an INR 15,000 autopay mandate as part of that process. I wasn’t actually charged anything, but still, that’s a big “excuse me?” moment when you first see it.
Here, Emergent abstracts all of that away. And since an OpenAI call costs roughly a rupee (~0.5-1 INR, I assume) per hit, this is smart marketing on their part. Most people testing aren’t spinning up backend-heavy apps anyway, so cost-wise, it’s not a real burn for them.
So far: ✅ AI workout recommendations working.
Still unresolved: ❌ log-in.
I’d love to say I’m eager to debug that again, but at this point, my willpower and patience are pretty much done. One last thing to try, then I’m calling it.
Thursday, 10:31 pm
I don’t like this FitTracker colour scheme. The orange and black is really jarring. So I ask Emergent to change the look and feel to a lighter background, and just a brighter colour palette in general.
This already looks so much better. I test it a few times. It’s okay. Not perfectly smooth. The workout generator throws errors every now and then, so I keep refreshing like a maniac hoping it magically behaves on the next attempt.
Because there’s no real-time database view, every time something breaks, I’m basically blind. So I go back to the Emergent chat and ask it to pull logs and tell me what’s happening. It eventually comes back and informs me that the errors are because of OpenAI server issues.
But hey, it does work. Mostly. Reasonably consistently. At this point, that’s a win in my books. The app generates workouts, the UI looks decent, and nothing has combusted in the last five minutes. I’m taking the win.
And with that I shut the laptop, exhale deeply, and officially call it a day.
Conclusion: Emergent is a great “vibe prototype” tool, but the backend story isn’t there yet
Fast to spin up, but only because I already knew what I was doing: It took ~2 hours to get a working app, but I skipped login and already had context from earlier experiments. The painful learning had been done earlier, so this wasn’t a start-from-scratch experience.
Backend value prop didn’t hold up: The pitch for using Emergent over Lovable was: “Emergent will handle frontend + backend for you.” In practice:
Auth was broken
AI API calls were inconsistent
Workout generation failed intermittently
If the core backend isn’t stable yet, the value of “backend handled by AI” collapses pretty fast.
Debugging is a black box: You can’t see the DB or logs directly. Yes, you can ask the chat for logs, which is useful, but I still prefer a real-time database + logs I can inspect myself. But this is a personal preference, and not an Emergent problem to solve.
Right now, it’s great for prototyping, but not for scaling: You can absolutely build quick, usable demos. But would I ship a real product on this right now? Probably not. We’re not yet at a point where AI app builders can reliably replace structured backend infra.
And vs Lovable: not much difference: Functionally, Emergent feels like Lovable with a slightly different UX layer. And Lovable now has Lovable Cloud (auth + storage + DB, like Supabase - essentially solving the problem I talked about earlier of having to handle a third-party database), so differentiation will be tough unless Emergent closes the reliability gap fast.
Bottom line:
✅ Faster prototyping
❌ Not dependable enough for real world deployment
❌ Too early to hand backend over entirely to AI platforms
Bigger question: what’s the long-term moat for “vibe coding” platforms?
What I’ve been trying to understand is how valid this claim really is: “these tools will change how we actually build.” Let’s be honest: they absolutely change how we prototype. Fast idea-to-UI turnaround, great for internal demos, alignment, fast iteration. Massive time saved for PMs, founders, and designers.
But do they change how we build products that scale? I don’t see it. Would I trust a vibe-coded backend for real-world scale, infra control, compliance, or performance? No. Not today. Maybe never. At best, these tools are training wheels + demo engines. Great for: 0-to-1 mockups, feature pitches, founders testing hypotheses, and internal alignment + fast iteration. But once something has traction, you rip everything out and build on real infra. At most, this works in the early days; if I truly wanted to build a product out, my first priority would be moving to my own independent backend setup, capable of handling the scale I’m envisaging. So then I’d question the value of these tools even integrating with a backend - what’s the point, if they’ll only be used for prototyping?
Which leads to the obvious question: If the long-term path is migrating off these platforms, why bother with backend here at all? If these platforms are only for prototyping:
Backend integration feels overbuilt
“Managed services” here are nice-to-have, but not must-have
What’s the point of auth flows you’ll never trust in production?
The platform’s ambition (full-stack “we host your app”) doesn’t match real-world willingness to trust it. And once you accept that, they are frontend prototyping tools - not app platforms. And in that case, the focus should be UI/UX scaffolding, design-to-code pipelines, component libraries, and easy export to real frameworks. Backend feels like unnecessary complexity. And because they’re fundamentally LLM-dependent wrappers, there is limited moat.
And yes, the easy rebuttal here is that “everything is a wrapper.”
Perplexity wraps LLMs, LLMs wrap Nvidia chips, and so on. But there’s a difference: wrappers that sit on top of foundational technology and create new capability or distribution survive. Perplexity adds retrieval, ranking, a web-scale knowledge layer. Nvidia turns compute into intelligence. But these vibe-coding tools? They don’t unlock anything new; they simply consume LLM capability and put a UI around it. And the problem is that the LLMs themselves are already moving up the stack into exactly this territory: English to code, which then leads to running apps, debugging, infra setup, agentic coding. These platforms aren’t like Perplexity vs OpenAI; they are a thin UX layer on something the base model is actively absorbing as a native function. The issue is not that they’re a wrapper; the issue is that they’re building in a space where the core model is rapidly integrating across the stack and absorbing the value layers.
Which then brings me to the next point: if the next wave of apps is not going to be built on these platforms, then what exactly is the point of these platforms?
They’re fancy prototyping studios, not real build environments. Cursor and Claude Code help developers vibe-code; these no-code platforms help you prototype vibes. Tools that matter long-term will always be developer-first. Everything else is UI over GPT. And, at current valuations, it feels like we’re massively overestimating the TAM - or rather, not the TAM, but the willingness to pay.
These tools don’t replace expertise: They just remove the grunt work for people who already know what they’re doing. They make good engineers faster and sharper, not obsolete. If you lack fundamentals, these tools may actually accelerate you into bad outcomes. The real advantage isn’t in prompting; it’s in knowing what to ask, how to interpret the output, and - in my opinion, most important - when to push back because something feels off.
These tools are “English to code,” but only if you already have a clear, granular understanding of what you want to build. The skill that matters most now is clarity of thought - the ability to break down a problem, define specs precisely, and think structurally. If you can articulate the problem well, LLMs are pretty good at turning that into code. For students or early-career folks, learning how to use platforms like Lovable, Emergent, or Claude Code is useful, but only alongside developing core reasoning and problem-solving skills. The real edge is not knowing the tool, but knowing how to think. The value of critical thinking has never been higher.
Instead of pushing AI top-down across orgs and expecting every junior employee to suddenly “work smarter,” I’d actually argue the bigger unlock lies at senior levels, where there is already depth, context, and judgment. I’m not advocating for grunt work for the sake of it, but there is real value in repetition and time spent in the trenches. That’s how clarity is built. That’s how intuition is earned. The tension today is that junior employees are naturally more eager to adopt new tools, but often haven’t seen enough problems to guide an LLM through ambiguous situations. Meanwhile, leaders - the ones who can ask the right questions and frame the right problems - are often under-using these tools. The balance we need is not “AI everywhere,” but “AI where expertise exists,” paired with structured learning time for younger talent to build that expertise, not bypass it.
LLMs often lack deep system context, and that’s exactly where real engineering expertise comes in. In many ways, the bar for engineers has actually gone up. Traditional engineering meant understanding the problem, translating it into requirements, building, testing, and deploying. Today, only one part - the act of writing code - has become faster with LLMs. And realistically, that’s maybe 20% of the job. Everything else still requires judgment, architectural thinking, context, and rigor. In fact, because AI-generated code can introduce edge-case errors or invisible complexity, the time required for debugging and testing often increases. So yes, LLMs accelerate execution, but they also demand sharper engineering intuition, deeper problem-framing skills, and a stronger grasp of systems-level thinking. For people entering the industry, it’s important to understand that LLMs only assist with one slice of the work: turning requirements into code. Which means the real leverage now lies in getting the requirements right.
This doesn’t mean programming languages are going away, or that you can afford not to understand them. When high-level languages arrived, a lot of developers stopped worrying about what was happening under the hood, and they got away with it because the systems were deterministic (aka predictable and reliable). (And even then, the people who did understand the low-level stuff always got paid more.) With LLMs, you might think you can just give instructions in English and skip the technical depth, but these are non-deterministic systems - they’re unpredictable, and the same input may give several different outputs. You still need to understand what’s being generated, why it works, and when it doesn’t.
Individually, LLMs are the engines powering this next wave. Everything else layers on top of them. Tools like MongoDB, Supabase, or even Claude Code are genuinely useful building blocks on their own.
But platforms like Emergent, Lovable, and Bolt still feel fundamentally like prototyping environments, not places where real, scalable products will be built and operated long-term.
If a company actually wants to ship something meaningful, it will still pick the primitives: Supabase, Mongo, its own backend stack, and direct LLM integration. And LLMs are only going to get better. Over time, they will absorb more adjacent capabilities: UI generation, database scaffolding, backend logic, deployment flows. Which makes the valuations of these “vibe-coding” platforms feel stretched, given how exposed they are to LLM expansion risk.
Vibe coding - market snapshot
Lovable: Raised $200M; $1.8B valuation (2025); reported 30–50% drop in usage/MAU across public forums and anecdotal reports. In July 2025, according to TechCrunch, it reported an ARR of $50M.
Bolt: Raised ~$100M; estimated $700M valuation in 2025 (sources not fully verifiable, but directionally aligned with Lovable). Hit $40M ARR in 2025
Emergent: Raised ~$30M (Series A, 2025); valuation unclear, but some reports say $90M. It claimed $15M ARR in 90 days, but like all ARR stats, I’m a bit skeptical when you take a short window and annualize it. As I mentioned before, with vibe coding apps the true revenue and usage only shows up in M3/M4, when the users who are just paying to try it out (like me) churn and the actual base remains.
From a user standpoint, these tools are useful, but primarily for fast prototyping, experimentation, and internal alignment, not durable product development.
From an investor standpoint, it feels like the market is overestimating the TAM and underestimating the speed at which LLMs will cannibalize the middle layer. There is value here, but it’s narrow, and from a returns perspective, it may be shrinking fast.


