When a Project Doesn't Go as Planned
A reflection on a machine learning project that didn’t prove what we expected, and why it was still one of the most valuable experiences of my degree.
One of the most memorable projects from my computer science degree was a machine learning project focused on bird migration patterns and climate change. It’s memorable not because it produced a groundbreaking result, but because it didn’t.
And that turned out to be the point.
Starting with excitement (and a shaky premise)
This project came at a moment when I had just built a solid foundation in machine learning techniques. I was excited, maybe too excited, to finally apply them to a real dataset and build something meaningful.
Our initial hypothesis was that we would be able to observe measurable shifts in migratory bird patterns as a result of climate change. On paper, it sounded intuitive and impactful. In practice, we didn’t spend enough time critically evaluating whether this hypothesis was scientifically sound or realistically testable given the data and time constraints we had.
Looking back, this was the first big lesson: technical excitement can easily outrun careful planning.
Falling into the “build harder” trap
Once we had committed to the project direction, I threw myself into the technical side.
I spent a large portion of the semester:
- cleaning and preprocessing data
- experimenting with features
- training and tuning models
- optimizing performance
- validating results
From a purely machine learning perspective, this part went better than I expected. In particular, a climate-related prediction model we built ended up performing surprisingly well, and I’m still proud of how that component came together.
But there was a tradeoff.
By investing so heavily in the modeling itself, we left ourselves very little time to step back and ask:
- Is this still the right question?
- Are we measuring the right thing?
- Do these results actually support the original hypothesis?
By the time we realized the answer was “probably not,” the semester clock had run out.
Presenting an unexpected (but honest) result
At the end of the semester, our presentation didn’t follow the narrative we originally imagined.
Instead of presenting a strong correlation between migration patterns and climate change, we had to stand up and say something closer to:
Based on our analysis, we were unable to find a meaningful correlation.
At first, that felt like failure.
But the more I thought about it, the more I realized that this was actually a valid scientific outcome. Negative results are still results, especially when they’re reached carefully and honestly.
We didn’t hide it. We explained what we tried, what worked, what didn’t, and what we would change if we had more time.
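For readers who want a concrete picture of what a "negative result" looks like in practice, here is a minimal sketch of the kind of significance check that can lead to a conclusion like ours. The data, variable names, and values below are hypothetical placeholders, not our actual pipeline; the point is simply that a weak correlation with a non-significant p-value is a legitimate, reportable outcome.

```python
# Minimal sketch of the kind of check behind "no meaningful correlation".
# Hypothetical data: one row per year, with a mean spring arrival date
# and a temperature anomaly. These are placeholder values, not our dataset.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
temperature_anomaly = rng.normal(0.0, 0.5, size=30)    # placeholder, degrees C
arrival_day_of_year = rng.normal(120.0, 6.0, size=30)  # placeholder, day of year

r, p_value = pearsonr(temperature_anomaly, arrival_day_of_year)

# A small r with a large p-value is still a result worth presenting:
# "we were unable to find a meaningful correlation" is exactly this.
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```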
What this project taught me
1. Excitement needs structure
I learned that I sometimes get so excited about one part of a project (in this case, building and training models) that I rush through planning and context-setting.
Since then, I’ve been much more intentional about slowing down early:
- clearly defining goals
- questioning assumptions
- stress-testing hypotheses before committing too deeply
2. Wrong hypotheses aren’t failures
This project helped reframe how I think about being “wrong.”
In both science and engineering, incorrect assumptions are normal. What matters is:
- recognizing them early
- being honest about results
- adapting when evidence contradicts expectations
That mindset has stuck with me far beyond this class.
3. Long-term projects require balanced focus
Finally, I learned how important it is to distribute effort across all phases of a project:
- planning
- experimentation
- implementation
- evaluation
- communication
Putting too much energy into one phase, even a technically interesting one, can undermine the project as a whole.
Why I still value this project
Despite the outcome, this project fundamentally changed how I approach machine learning and software projects today.
It taught me that:
- strong code doesn’t guarantee meaningful results
- good engineering includes knowing when to pivot
- reflection is just as important as execution
And perhaps most importantly, it reminded me that learning doesn’t always look like success in the moment, but it compounds over time.
If you’re interested in the technical details of this project, including the models, data pipeline, and evaluation approach, I’ve included a full case study elsewhere on this site.