A Critical Review: What I Learned From My First UX Project
TL;DR
This was my first UX project from bootcamp (2021-2022). Instead of presenting it as polished work, I'm using it to show how my thinking has evolved - from following templates without understanding why, to building a research practice grounded in methodology, planning, and evidence. Every mistake here taught me something I now apply daily.
Survey
Then (2021-2022):
I started with a survey because that's what the bootcamp curriculum required - I assume it was their way of teaching us how to create surveys. For myself, I used it to gain a basic understanding of how people book hotels. The main objective was to identify the devices people use and the challenges they face when booking hotels online.
What was wrong:
Surveys shouldn't be the starting point for research. To understand the what, how, and why of user behavior, you need to observe people doing the task - not ask them to recall or speculate.
The questions themselves had problems:
- Double-barreled questions: "Do you have a favourite hotel website or app, if so which and why?" is actually three questions crammed together. When someone answers, you don't know which part they're responding to. You can't analyze it properly.
- "What changes would you make?" is flawed for two reasons: users aren't designers so they shouldn't be asked to suggest changes, and you shouldn't ask what people would do - ask what they did last time. Plus their answers are too vague to act on. "Better map" tells you nothing. You'd need to watch them struggle with an actual map to understand what's wrong.
- Asking about current tools: Just because people use Booking.com doesn't mean Booking.com is great or innovative. Survey responses about current behavior don't tell you what should exist, only what people do now with existing options.
Twenty-two responses is too small a sample, but frankly even thousands wouldn't have helped - with badly designed questions, no clear direction, and no screening criteria, the number doesn't matter if you're measuring the wrong thing with the wrong people.
Device preference could have been found through analytics (for existing products) or observed during generative research.
What I'd do now:
Start with generative research: observe people booking hotels, ask about their last real experience.
Use surveys only for recruitment - with qualifier questions upfront to find the right participants, then bucketing questions to segment them into groups.
The missed insight:
Looking at the data now, there was actually something interesting: most people browse on mobile but book on desktop/laptop. I noted this but used it to justify focusing on desktop. I should have dug deeper - why the switch? Is mobile booking broken? Are people more careful with purchases on bigger screens? This could have been a meaningful discovery, but I didn't have the skills to pursue it.
Competitive Benchmarking
Then (2021-2022):
The bootcamp told us to look at best-in-class hotels to see what they're doing well (to learn from) and what they're doing badly (to avoid). I selected the three highest-rated hotels in Ghent, plus Booking.com as an aggregator.
What was wrong:
"Good" or "bad" based on what? Without knowing their research, their users, their business goals - I was just judging by personal taste. That's not UX, that's opinion.
I had no idea:
- Did they conduct research?
- Did they design based on that research or someone's preference?
- Who are their users and are they the same as mine?
- What are their goals and metrics for success?
Traditional competitive benchmarking often becomes "they have feature X, we should too" - which is feature-driven, not experience-driven. That's not UX research, that's feature copying.
When someone shows me screenshots of products they like "for inspiration," I'm always reluctant. Inspiration for whom - for you, or for our users? Pretty doesn't mean usable. Popular doesn't mean right for our context. And what's the reason for doing what everyone else is doing anyway?
What I'd do now:
I hardly ever do competitive benchmarking anymore. But when it's useful, it's for a completely different reason: understanding mental models.
If my users also use similar products, it helps to know what patterns they've learned - not because those products are "good," but because users bring those expectations to my product. In a recent project, I recruited participants who use Google Docs, Word, and track changes. Watching where their existing expectations were met or broken told me more than any feature comparison ever could.
The question isn't "what are competitors doing well?" It's "what have users learned to expect, and where will my product confirm or break those expectations?"
User Interviews
Then (2021-2022):
In addition to reviewing two user interviews provided by the UX Design Institute, I conducted interviews with three more people. I had a discussion guide - basically the template the school provided, with some adjustments to fit my project. Two sessions were in person, one was remote.
I used two of the "best in class" hotel sites from my competitive benchmarking as test sites. The reasoning was: if these are the best, let's see how users actually perform on them.
What was wrong:
Recruitment: I asked family and friends to help me. No screener, no criteria, no consideration of whether they matched my target users. I had no idea if these people actually book hotels, how often, or what their needs were.
Planning: I had a discussion guide, but I didn't really understand why each question was there. I was following a template without understanding the methodology behind it.
What I actually learned (but didn't frame correctly):
Interestingly, even on the "best in class" sites, people struggled with basic things - finding the hotel address, finding the right room types, comparing prices. I should have concluded that "best in class" doesn't mean well designed or designed with users in mind. The ratings were probably based on the hotel experience, not the booking experience.
I also didn't understand back then that researchers and designers need to consider much more than users - business goals, technical constraints, budgets, stakeholders, and sometimes their egos. Research doesn't happen in a vacuum.
What I'd do now:
My research process has evolved completely. I now carefully plan each research project from the start. I even created my own Notion workspace that brings together everything needed to plan, run, and wrap up research - from early prep (plan, screener, guide) to synthesis and reporting.
Today my process includes:
- Research plan with clear goals, methodology justification, and timeline
- Screener survey with disqualification logic to filter participants and segmentation questions to bucket them (sketched below)
- Proper recruitment - not friends and family, but people who match specific criteria
- Discussion guide that I write myself, with intentional questions tied to research goals
- Consent and compliance handled properly
- Session notes and recordings organized for analysis
- Structured synthesis - not just notes, but tagged observations leading to findings
The difference isn't just more documents - it's understanding why each step matters and what happens when you skip them.
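To make the screener step concrete, here's a minimal sketch of the kind of logic a screener encodes. It's purely illustrative - in practice this lives in the survey tool's branching settings, not in code - and the criteria (recent online booking, frequency, booking for others) are invented for the example.

```python
# Illustrative only: a toy version of screener logic for a hotel-booking study.
# The criteria and field names are invented for this sketch.

def screen_participant(answers: dict) -> str | None:
    """Return a segment name, or None if the participant is disqualified."""
    # Disqualification: filter out people outside the target audience.
    if answers.get("booked_hotel_online_last_12_months") != "yes":
        return None
    if answers.get("works_in_ux_or_market_research") == "yes":
        return None  # avoid professional respondents

    # Bucketing: segment the remaining participants into groups.
    if answers.get("bookings_per_year", 0) >= 6:
        return "frequent traveller"
    if answers.get("books_for_others") == "yes":
        return "books for others"
    return "occasional traveller"


# This respondent passes both filters and lands in a segment.
print(screen_participant({
    "booked_hotel_online_last_12_months": "yes",
    "works_in_ux_or_market_research": "no",
    "bookings_per_year": 2,
    "books_for_others": "no",
}))  # -> occasional traveller
```

The point is the order: disqualify first, then bucket - so by the time someone reaches scheduling, I already know they belong in the study and which group they represent.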
Affinity Diagram
Then (2021-2022):
I reviewed my interview notes and grouped anything relevant to the user experience: goals, behaviors, pain points, mental models, contextual information. I organized them into categories: Search results, Selecting dates, Hotel location, Expectations, Booking conditions, Pricing, Back-end, Visual, Navigation/flow.
I noted at the time that since I was working alone, I focused mainly on user experience and less on business goals or opportunities.
What was wrong:
Analysis stopped at grouping. I did the hard work - detailed notes with timestamps, emotions, actions, context. But then I grouped them into categories and stopped there. I didn't ask "why is this pattern happening?" or "what does this mean for the design?" Grouping is organizing data. Analysis is interpreting it.
Categories were topics, not insights. "Clear pricing" is a topic. The insight would be something like: "Users can't commit to booking because they don't trust the displayed price is final - they've been burned by hidden fees at checkout." That tells you why it matters and what to do about it. My sticky notes just told me which bucket to put them in.
Notes lacked clarity. Looking back, notes like "Simple flow" or "Upfront cancellation policies" are unclear. Are these pain points? Things done well? User needs? I probably had context at the time, but I failed to document it properly. If I can't understand my own notes months later, they weren't written well.
Some notes belonged in multiple categories. "Clear pricing" belongs to Pricing, but also to User needs, User struggles, and Expectations. A flat category system doesn't capture that.
No research goals to guide focus. Without specific questions I was trying to answer, I ended up with "here's everything I observed" instead of "here's what matters and why." When everything feels equally important, nothing is actionable. This came from poor planning - I didn't define what I needed to learn before I started.
What I'd do now:
My synthesis process has evolved into multiple passes:
- Hot notes - Immediately after each session while memory is fresh. Critical bugs or issues sometimes get fixed before the next interview.
- Thematic pass - AI-assisted transcript review (Dovetail) to surface patterns. Quick wins get flagged for immediate action - browser bugs, resolution issues, obvious UI failures.
- Deep pass - Behavioral analysis. What did users want vs. what did they do? What patterns emerge across participants? This feeds into bigger design decisions.
The reason for this approach is practical: fix what can be fixed fast, don't overload developers, stay ahead of planning, and don't bottleneck everything waiting for parts that take longer. Quick fixes are for the obvious stuff; deeper insights come from the deep pass and feed into design direction.
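If it helps to picture what the deep pass works with, here's a tiny, purely illustrative sketch of the underlying idea: tagged observations grouped across participants so recurring patterns become visible. In practice this happens in Dovetail, not in code, and the tags and notes below are invented.

```python
from collections import defaultdict

# Illustrative only: tagged observations grouped across participants.
# The tags and notes are invented for this sketch.
observations = [
    {"participant": "P1", "tag": "pricing", "note": "re-checked the total before paying"},
    {"participant": "P2", "tag": "pricing", "note": "expected city tax to be in the price"},
    {"participant": "P3", "tag": "comparison", "note": "opened three tabs to compare rooms"},
]

by_tag = defaultdict(list)
for obs in observations:
    by_tag[obs["tag"]].append(obs)

# A tag only becomes a candidate finding when it recurs across participants.
for tag, items in by_tag.items():
    participants = {o["participant"] for o in items}
    if len(participants) >= 2:
        print(f"Possible pattern: '{tag}' observed with {len(participants)} participants")
```

The grouping itself is trivial - the judgment is in the tagging and in deciding which patterns actually matter for design.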
What I haven't done yet:
I still do affinity mapping solo. Doing it with others reduces bias and surfaces interpretations I might miss. I haven't had the opportunity yet - but I know it's a gap.
Customer Journey Map
Then (2021-2022):
I used my research findings to map the customer's experience and mood at each step of the booking process. I included goals, behaviors, context, positive interactions, and pain points. I noted user quotes to show their state of mind. The mood tracking - the line going up and down - came from what users said or what I observed them do, not assumptions.
Key observation: users start to experience problems on the search results page. This is where they make their final choices, but the step is poorly optimized for comparisons.
What was actually fine:
Unlike many bootcamp journey maps, mine was based on actual observation. The moods, the pain points, the quotes - all came from watching real people struggle with real tasks. I didn't make it up.
What was wrong:
I'm not sure it was useful. I created it because the bootcamp told me to. But what did I actually do with it? It confirmed that the search results page was problematic - but I already knew that from watching users struggle. The journey map was a nice visualization of something I'd already learned.
It wasn't actionable. A journey map shows where problems happen, but not why or what to do about them. The insights that actually drove my design decisions came from the observations themselves, not from plotting them on a timeline.
What I'd do now:
I haven't created a journey map since. Not because they're always useless, but because I haven't felt the need. For the projects I've worked on, the observations and synthesis were enough to move forward. If I had a complex multi-touchpoint service with many stakeholders who needed to see the big picture, maybe. But for focused product work, I'd rather spend that time on deeper analysis.
Journey maps can be valuable for alignment and communication - showing stakeholders "look, this is where users suffer." But they're a presentation tool, not an analysis tool. The insights come from research. The journey map just packages them.
User Flow
Then (2021-2022):
I created a high-level flow and a more detailed flow showing how users would move through the booking process. I included the "happy path" and knew I should add unhappy paths too, but didn't really get to it.
What was wrong:
I didn't know flowchart notation. I used shapes that looked nice to me - circles, diamonds, rectangles - without understanding that they have specific meanings: circles for start and end points, diamonds for decision points, rectangles for process steps. The bootcamp never taught us that there's actually a standard for this.
Happy path only. Real users don't follow happy paths. They make mistakes, change their minds, encounter errors, abandon halfway through. A flow that only shows the ideal scenario doesn't prepare you for reality.
What I'd do now:
I create much more comprehensive flows now. They're messier and more complex - because real products are complex.
My current flows include:
- Proper notation (circles for start/end, diamonds for decisions)
- Multiple paths - not just happy path, but error states, edge cases, alternative routes
- Different entry points (users don't always start where you expect)
- Error handling documented
- Screen references with annotations
- Color coding to distinguish paths (e.g., green for success, red for errors)
They're not pretty. But they're useful. A clean, simple flow usually means you haven't thought through what actually happens when things go wrong.
Wireframing & Prototyping
Then (2021-2022):
I started with paper sketches - I really enjoyed drawing them. I sketched each screen including different states, then moved to low-fidelity wireframes in Figma. I created a clickable prototype, though it was linear and not conditional. Finally, I made a handover document with annotations for developers.
What was actually fine:
My design decisions were based on research. Users struggled to find hotels in relation to places they needed to be - stations, public transit, landmarks - so I integrated that into the map. Users wanted to compare prices, room sizes, and conditions, so I built a comparison tool. I focused on the aspects people struggled with most, even if my sample was only three people.
Could I defend every decision with a direct quote from research? Probably not perfectly. But the direction came from observation, not assumption.
What was wrong:
No testing of the solution. I designed based on flawed research, and then I never tested whether the design actually solved the problems. Did the comparison tool work the way users expected? Did the map help them find what they needed? I don't know - it wasn't in scope for the project. That's a gap.
Linear prototype. My Figma prototype was clickable but linear - it didn't account for different paths or conditions. Real users don't follow a single path.
What I'd do now:
I don't do paper sketches anymore - they're time-consuming, and honestly my 'fast' sketches look like scribbles even I can barely decipher. Since I already have design systems in place, I mostly go straight to design. But if I'm in doubt about a flow or a piece of functionality, I do basic wireframes first.
Handover Document
Then (2021-2022):
I created a handover document using Figma with annotations for developers. I added comments to the designs with instructions explaining how things should work.
What was wrong:
The annotations were basic and unstructured. I didn't have a clear system for organizing them. I also didn't really know how to communicate things like interactions, conditions, and the logic behind the rules - the stuff developers actually need to build it properly.
What I'd do now:
I learned that numbered annotations work best. I keep flows on separate pages and use a simple number sequence - clear and easy for developers to follow, with no hunting around for scattered comments. Each annotation now covers what it needs to: conditions ("only visible when..."), dynamic content rules ("label changes depending on..."), error handling with templates, and edge cases. The goal is to answer the questions developers would ask before they have to ask them. Handover isn't just about pretty documentation - it's about reducing back-and-forth. The clearer the specs, the fewer misunderstandings during development. I also usually have an alignment call with developers before writing these annotations.
The biggest lesson?
Bootcamps teach clean, pretty ways of doing things. Real projects are never like that. The methodology looked good on paper, but I didn't understand why each step mattered - so when things got messy, I didn't know how to adapt. Now I do.
What do I hope this case study shows?
Growth. I learned a lot in the years since this project. I'm open to being critical of my own work and changing how I do things when it's warranted. The mistakes here aren't embarrassing - they're the foundation for everything I do better now.