Getting Hands-On with Windsurf
Learn how to build a simple web app using Windsurf.
Building a functional web app in ten minutes might sound unrealistic, but in this lesson, we’ll show exactly how it works. Step by step, you’ll see how Windsurf and Cascade can help create, debug, and refine a basic to-do list app using just vanilla HTML, CSS, and JavaScript. No frameworks, no setup shortcuts—just clear, transparent development inside the IDE.
Everything in this example happened inside the editor we set up in the last lesson. The process is quick—just a few minutes end-to-end. The key is knowing how to communicate with Cascade effectively. Instead of vague or incomplete prompts, give it clear, specific instructions—just like you would when collaborating with another experienced engineer.
How to build a simple web app with Windsurf
We start exactly where we should: staring at an empty Windsurf workspace. If you’ve used VS Code, your fingers already know the lay of the land; Windsurf is a fork, so every extension and key combo you love came along for the ride. We press the “Code with Cascade” button (⌘+L in our case). The panel slides in on the right, demarcating a space that feels less like a search bar and more like the colleague who always has a whiteboard marker handy. That distinction is psychological gold: a search bar is where you hunt for answers; a colleague is someone you brief confidently and expect results from.
A lot of people dump whole paragraphs into AI chats, but verbosity hides the actual ask. We prefer the opposite: one dense paragraph packed with constraints. Here’s the exact text we pasted:
Create a minimal Todo-List web app using only vanilla HTML, CSS, and JavaScript. Provide index.html, styles.css, and script.js in the project root; let users add tasks, mark them complete, delete them, filter All / Active / Completed, toggle light-and-dark themes, and persist everything to localStorage. Don’t use any external libraries or CDNs. When finished, run the app, add two sample tasks, and confirm all features work.
That’s one short paragraph, yet it delivers purpose, scope, file structure, feature list, a hard prohibition, and a self-test. It’s the kind of micro spec you’d hand a competent coworker before a coffee break. Notice that we are not micromanaging CSS colors or HTML semantics; if we hire a pro, we don’t specify how many tabs of indentation they must use. We state outcomes and rely on professional defaults. That trust is an important psychological signal: it tells Cascade we believe it can handle autonomy, but we’ll still inspect the result.
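To see how little the persistence requirement in that prompt actually demands, here’s a rough sketch of the load/save layer a spec like this typically yields. The storage key and task shape are our illustrative assumptions, not Cascade’s actual output:

```javascript
// Illustrative sketch only; Cascade's real output will differ.
// Assume each task is { text: string, done: boolean }.
const STORAGE_KEY = 'todos'; // hypothetical key name

function saveTasks(tasks) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(tasks));
}

function loadTasks() {
  try {
    return JSON.parse(localStorage.getItem(STORAGE_KEY)) ?? [];
  } catch {
    return []; // missing or corrupted data: start with an empty list
  }
}
```

Wrapping the read in try/catch means a corrupted entry degrades to an empty list instead of crashing the app on load; it’s the kind of professional default we’re trusting Cascade to supply unprompted.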
We press “Enter” and lean back. Cascade spends maybe twelve seconds thinking; the sideways spinner flicks, and suddenly three files appear in our Explorer. It then asks whether we want to start a local server to test the app, suggesting the command python3 -m http.server 8000. If we open localhost:8000 in any browser, we see a centered to-do box already waiting with demo items. One click flips to the light theme; another click on a checkbox strikes out a task. Total keystrokes from us: precisely the prompt. The whole thing took around five minutes, counting the time spent deciding on the wording of our initial prompt.
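Under the hood, that one-click theme flip is probably just a few lines. A common vanilla-JS pattern, and a plausible guess at what Cascade wrote, is to toggle a data attribute on the root element and persist the choice; the attribute, element id, and storage key here are our illustrative assumptions:

```javascript
// A plausible shape for the theme toggle; all names are illustrative.
const themeToggle = document.querySelector('#theme-toggle'); // assumed button id

themeToggle.addEventListener('click', () => {
  const root = document.documentElement;
  const next = root.dataset.theme === 'dark' ? 'light' : 'dark';
  root.dataset.theme = next;           // styles.css can key off [data-theme]
  localStorage.setItem('theme', next); // preference survives reloads
});

// On page load, restore the saved preference (defaulting to dark here).
document.documentElement.dataset.theme =
  localStorage.getItem('theme') ?? 'dark';
```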
Here’s the important thing: we are not clapping yet. Fast output isn’t automatically good output. A senior engineer knows “works on first run” is just the opening handshake; the real test is whether it works correctly under the variations you care about.
Beyond first impressions
When we inspected the light mode, we immediately spotted the classic oversight: the “TODO” text was nearly invisible, rendered as pale gray on a white background. Cascade had nailed dark-mode styling, then phoned it in on the flip side. We tried the drag-and-drop reordering feature that it spontaneously added, and noticed items jumping unpredictably, not landing correctly, or sometimes refusing to move altogether. Classic AI optimism—introducing ambitious bonus features but skipping the QA. We weren’t annoyed, though; this scenario was expected. When your teammate ships a rushed feature, you don’t rewrite it yourself—you clearly communicate what’s broken and let them handle it. Peer-to-peer accountability.
Cascade, please review the current Todo-List app: in light mode the task text blends into the background—adjust colors or contrast so each item remains clearly readable while preserving dark-mode styling. Additionally, the drag-and-drop re-ordering either fails to trigger or drops items in the wrong position; debug the event handlers and update the logic so tasks can be smoothly dragged to any index without duplication or loss. After fixes, rerun the app, demonstrate correct light-mode visibility and successful re-ordering with three sample tasks, then summarize the changes made.
Again, note the clarity here: we precisely identify what’s wrong and specify exactly how we’ll verify the fixes. We’re not dictating low-level implementation details (Cascade handles that), but we do demand proof of correctness. Seeing is believing.
Less than twenty seconds later, Cascade updated our files. It adjusted a CSS variable for contrast and completely reworked the JavaScript drag-and-drop logic. Did we immediately deep-dive into the diff to scrutinize every line? Nope—we opened the live preview first. Instantly, tasks were readable in both themes, and drag-and-drop now worked as intuitively as you’d expect. Cascade provided a neat summary of exactly what it changed, but we didn't obsess over the exact JS method names or specific CSS hex codes. The outcomes mattered more: no glitches, no surprises, just clean functionality.
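The contrast fix is easy to picture: a light-theme text variable nudged toward a darker value. The drag-and-drop rework is the more interesting part, because correct reordering has to remove the dragged task exactly once and reinsert it at the drop index. Here’s a hedged sketch of that logic in vanilla JavaScript; every name (taskList, tasks, saveTasks, render, data-index) is our assumption, not a line from Cascade’s diff:

```javascript
// Sketch of index-based reordering, not Cascade's literal code.
// Assumes each <li> is rendered with draggable="true" and a data-index
// attribute, and that tasks, saveTasks, and render exist elsewhere.
const taskList = document.querySelector('#task-list'); // assumed list id
let dragIndex = null;

taskList.addEventListener('dragstart', (e) => {
  const item = e.target.closest('li');
  if (item) dragIndex = Number(item.dataset.index); // remember the origin
});

// Without preventDefault here, the browser never fires 'drop'.
taskList.addEventListener('dragover', (e) => e.preventDefault());

taskList.addEventListener('drop', (e) => {
  e.preventDefault();
  const target = e.target.closest('li');
  if (!target || dragIndex === null) return;
  const dropIndex = Number(target.dataset.index);
  const [moved] = tasks.splice(dragIndex, 1); // remove exactly once...
  tasks.splice(dropIndex, 0, moved);          // ...reinsert at the drop index
  dragIndex = null;
  saveTasks(tasks); // persist the new order
  render();         // redraw so each data-index stays in sync
});
```

Splicing the array once and re-rendering is what rules out the duplication and loss we complained about; buggy versions typically mutate the DOM and the data model separately until they disagree.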
Human time investment for these fixes: typing exactly one short, clear paragraph. Total session time at this point: still comfortably under ten minutes. We now had a robust Todo list app with full persistence, theming, and polished drag-and-drop—built entirely by clearly communicating our goals to Cascade.
By now, the key pattern here should feel blindingly obvious: the difference between a clean, impressive first draft and an error-filled mess always comes down to your original instructions. AI isn’t a magic crystal ball; it's a brutally literal executor of your words. If you feed Cascade fuzzy, ambiguous ideas (“maybe it could have this...”), you’ll end up babysitting it all afternoon. But approach Cascade with clear, specific expectations—exactly like you’d brief a professional dev—and it rewards you with speed and quality.
The real lesson here isn’t that AI makes everything trivial. It’s that when your own thinking is clear, specific, and organized, an AI assistant becomes an incredible multiplier of your productivity. Vague ideas in, vague code out. Tight, well-defined prompts in, production-grade results out—fast. Cascade’s strength mirrors your clarity.
Note: The examples in this course come from our specific sessions and should be treated as illustrative rather than definitive. Code generation is inherently non-deterministic: the exact output, styling choices, bugs, and even the overall approach you encounter in Windsurf will likely differ from what we show here. The core principles and workflows remain consistent, but expect your own sessions to produce unique results each time.
We’ve demonstrated how quickly you can launch something basic. But now imagine extending this further. Within the next few minutes, you could prompt Cascade to add keyboard shortcuts, color-coded priorities, JSON export/import, or even automated unit tests. Each incremental enhancement is one tight, precise prompt away, and the feedback loop shrinks dramatically when you’re confident in what you’re asking. Windsurf’s entire ecosystem—built on familiar VS Code foundations—means you spend zero time relearning environments or switching contexts. You just focus on clearly communicating your next step.
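To make that concrete, a keyboard shortcut of the kind Cascade could add in a single pass might look like the following sketch; the key choice and input id are hypothetical:

```javascript
// Hypothetical enhancement: press "n" anywhere to focus the new-task input.
document.addEventListener('keydown', (e) => {
  const typing =
    e.target instanceof HTMLElement && e.target.matches('input, textarea');
  if (e.key === 'n' && !typing) {
    e.preventDefault();
    document.querySelector('#new-task')?.focus(); // assumed input id
  }
});
```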
What’s next?
So what did we actually accomplish with this simple to-do list? More than just checking off features: we demonstrated how clear, structured prompts can drive meaningful progress. Cascade didn’t just fill in code snippets; it helped with debugging, design decisions, and implementation by following our intent.
This is more than autocomplete. It’s a tool that responds to your thinking—one that improves as you get better at defining problems and communicating your goals clearly. The value isn’t in typing less, but in working smarter with systems that understand your project and context.