24 Hours, 5 Layers, and 0 Caffeine: My Hackathon Dive into Supply Chain Optimization
The Arena: A University Hackathon
The setting was a buzzing university auditorium on a Saturday morning. Around us, student teams clustered around tables littered with laptops, cables, and the nervous energy of a 24-hour hackathon about to begin. The event organizers, fellow students and volunteer faculty, announced the challenge tracks. Among them was an “Industry 4.0” track featuring a real-world problem from the local agri-tech sector: “Intelligent Logistics for Pig Transport in Catalonia.”
It wasn’t the only ambitious project in the room. Nearby, a team was already sketching a Docker Swarm architecture for a distributed system. Another was unpacking two webcams, talking excitedly about 3D motion sensing in Unity. The air was thick with the potential for both spectacular success and glorious failure.
When the “pig logistics” slide appeared, detailing weight penalties, truck capacities, and weekly farm constraints, it immediately stood out. It wasn’t a toy problem; it was a messy, constrained, multi-variable optimization puzzle straight from the real economy. I turned to my friends: “That’s the one. That’s a real system to build.”
Team Formation: Dividing to Conquer
Our team of four had mixed motivations. Two friends were there for the classic hackathon experience: the adrenaline, the free pizza, the all-nighter camaraderie. Another friend and I were drawn to the technical depth of the logistics challenge. After a quick huddle, we formed our battle plan, splitting into parallel tracks from the very first hour:
- My Track (The Algorithmic Core): I would focus entirely on the “intelligence”: the multi-layered optimization engine that would decide which farms to pick up, when, with which trucks, and in what order. This was my self-assigned crash course in operations research.
- Their Track (The Visualization Layer): Two teammates would dive into the frontend. Their mission was to build an interactive dashboard and map to visualize the decisions my engine made, using a web framework they were also learning on the fly.
- The Wildcard: Our fourth member became a floater, sometimes helping me reason through a constraint, sometimes assisting the UI team with data formatting: a crucial bridge between our two increasingly isolated worlds.
By 11 AM, hacking had officially begun, and we diverged into our respective code editors, connected only by a shared GitHub repo and the occasional shout across the table.
Building in Parallel Worlds
For the next 14 hours, our halves of the project lived in different universes.
In my universe, it was a silent, deep dive into theory and Python. I was learning by doing: reading about Mixed-Integer Linear Programming (MILP) and implementing it with the PuLP library to create a strategic two-week plan. I built a Monte Carlo simulator to model the uncertainty of pig weights. I coded a Genetic Algorithm from scratch to solve routing problems. The architecture crystallized into a five-layer pipeline, each layer consuming the output of the last. It was intellectually thrilling. The optimization_conductor.py file grew, a testament to a day spent turning academic concepts into runnable code.
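To give a flavor of that middle layer, here is a minimal sketch of what a Monte Carlo weight simulation can look like. Every number below (the 100–110 kg no-penalty band, the mean and spread, the trial count) is invented for illustration; the real model was considerably richer:

```python
# Minimal Monte Carlo sketch: estimate the probability that a pig's
# weight lands inside a hypothetical no-penalty band by repeated
# sampling from an assumed normal distribution.
import random

random.seed(42)  # reproducible runs

def in_band_probability(mean_kg=104.0, std_kg=6.0, trials=10_000):
    """Fraction of sampled weights falling in the 100-110 kg band."""
    in_band = 0
    for _ in range(trials):
        w = random.gauss(mean_kg, std_kg)
        if 100.0 <= w <= 110.0:
            in_band += 1
    return in_band / trials

p = in_band_probability()
print(round(p, 2))
```

The upstream planning layer could then weigh a farm's expected penalty by this probability instead of trusting a single point estimate.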
In their universe, it was a battle with Streamlit, mapping libraries, and charting tools. I’d hear frustrated mutters about “Map not loading” and “GeoJSON formatting.” They were building the face of our system, a crucial part of the challenge, and fighting their own uphill battle against unfamiliar technology.
We’d sync every few hours. I’d show them a terminal output: “Look, it scheduled Farm_C for Day 3!” They’d show me a map with a draggable pin. The progress was real, but on entirely different planes.
The Collision: 4 AM and the Integration Abyss
The collision of our two worlds began around 4 AM. The initial learning high was gone, replaced by a deep, physical fatigue. My brain, running on zero caffeine (by choice) and pure stubbornness, began to fog. Thinking through a recursive function felt like wading through mud.
This was also the moment we entered Integration Hell. My beautiful backend pipeline output complex Python dataclasses. Their frontend expected clean, simple JSON. The floater became our diplomat, trying to translate between the two.
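In hindsight, the bridge we needed was small: Python’s dataclasses module can flatten nested objects into JSON-safe dicts. A minimal sketch, with illustrative names (PickupStop and RoutePlan are stand-ins, not our real classes):

```python
# Sketch of the dataclass-to-JSON bridge between backend and frontend.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PickupStop:
    farm_id: str
    day: int
    est_weight_kg: float

@dataclass
class RoutePlan:
    truck_id: str
    stops: list[PickupStop] = field(default_factory=list)

plan = RoutePlan("truck_1", [PickupStop("Farm_C", 3, 6100.0)])

# asdict() recurses into nested dataclasses, yielding plain dicts/lists
payload = json.dumps(asdict(plan))
print(payload)
```

One call per sync point would have spared our diplomat a lot of 4 AM translation work.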
We’d pipe data over, and the UI would freeze or plot farms in the middle of the ocean. Was it a bug in my state-update logic? A race condition in their rendering? In our sleep-deprived state, every error message was a cryptic puzzle. The elegant logic of my optimization engine felt worlds away from the interactivity of the dashboard. The distributed system teams seemed to be deploying containers smoothly, while the Unity teams had a flickering 3D model on screen. We had a brilliant brain and a shaky, unreliable nervous system.
The Graceful Retreat and the Real Win
By 9 AM Sunday, reality set in. Our frontend was a fragile prototype. Buttons had unpredictable effects, and the map would sometimes give up entirely. We had about two hours until judging.
We looked at the other projects taking shape: polished web apps, functioning game demos, robust-looking architectures. We had no illusion of winning. Our “why” for being here wasn’t a prize; it was the learning and the experience. That goal, at least, we had absolutely achieved.
We made a conscious decision to step back. We spent the final hours not in a panic, but in consolidation: documenting our code, writing a README that explained our five-layer architecture, and preparing to present not a product, but a proof-of-concept and a learning journey.
The Atmosphere and the Real Prize
The hackathon ambiance was a spectrum of intensity. You could spot the three tribes:
- The Professionals: Teams with a pre-baked idea and practiced stack, here to execute and win.
- The Explorers (Us): There for the raw challenge and the thrill of building something new under fire.
- The Vibers: There for the community, the snacks, and the fun of it all, keeping the room’s energy light.
The organizers, shepherding all of us, were saints. I survived on an epic intake of pizza, soda, and chips, a hackathon diet I do not recommend, but one that proved you don’t need coffee to code for 24 hours (just a high tolerance for absurdity).
When demo time came, we showed what we had: slides explaining the complex logistics problem, a screenshot of our optimizer’s terminal output making intelligent decisions, and a very cautious live demo of the map that we prayed wouldn’t crash. We didn’t win. The prizes went to the teams with complete, polished systems.
But our prize was different. I walked out with a fully designed and partially implemented optimization engine in my portfolio and a practical understanding of MILP, Monte Carlo simulation, and heuristic search that no tutorial could have provided. My teammates had battled and learned a modern frontend stack under extreme pressure.
We didn’t build a winning app. We lived the full, messy, exhausting, and exhilarating cycle of a hackathon: the excitement of a new idea, the flow of deep work, the despair of integration, and the clarity of knowing what you came for. For 24 hours, we weren’t just students; we were system architects, full-stack developers, and problem-solvers. And for that, I’d sign up again in a heartbeat.
The Aftermath: From Hackathon Chaos to Portfolio Clarity
In the days following the hackathon, as the sleep debt was repaid and the memories of 4 AM JSON errors faded, something interesting happened. The experience of the event, the adrenaline, the teamwork, the struggle, remained a great story. But the technical work I’d done demanded its own space.
The five-layer optimization engine wasn’t just hackathon code; it was a legitimate, structured exploration of operations research concepts. I realized the project had two distinct lives:
- The 24-Hour Experience: The story you just read, of learning under pressure, parallel team tracks, and the beautiful mess of creation against a clock.
- The Technical Artifact: The carefully designed system of constraints, algorithms, and data flow that I had architecturally mapped out and partially implemented.
I decided to give each its proper home. The human story belongs here, in this blog post. But the technical system deserved a detailed, permanent record in my professional portfolio.
If you’re interested in the how rather than in what it felt like, I’ve written a detailed technical breakdown. In my portfolio post, Building a Multi-Layer Optimization Engine for Livestock Logistics, I strip away the hackathon context and focus purely on the engineering:
- Architectural Diagrams: How the five layers interconnect.
- Constraint Formulation: My exact approach to encoding the “one farm per week” rule into a linear MILP constraint.
- Algorithm Deep Dives: The logic behind the predictive scoring system, the design of the Genetic Algorithm’s fitness function, and the smart farm-splitting heuristic.
- Code Snippets & Design Decisions: Key implementation details from optimization_conductor.py, and why certain algorithmic choices were made over others.
- Technical Learnings: My reflections on moving from predictive AI to prescriptive optimization, and the challenges of hierarchical system design.
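To give a small taste of the constraint formulation here: in a toy PuLP model (farm names, the 14-day horizon, and the placeholder objective are all simplified for illustration, not the portfolio formulation), the “one farm per week” rule reduces to one linear inequality per farm per week:

```python
# Toy MILP sketch: encode "each farm is visited at most once per week"
# as linear constraints over binary visit variables, using PuLP.
import pulp

farms = ["Farm_A", "Farm_B", "Farm_C"]
days = list(range(1, 15))  # two-week planning horizon
weeks = {1: [d for d in days if d <= 7], 2: [d for d in days if d > 7]}

prob = pulp.LpProblem("weekly_rule", pulp.LpMaximize)

# y[f][d] = 1 if farm f is visited on day d
y = pulp.LpVariable.dicts("visit", (farms, days), cat="Binary")

# Placeholder objective: maximize total visits
prob += pulp.lpSum(y[f][d] for f in farms for d in days)

# The rule itself: for every farm and every week, at most one visit
for f in farms:
    for wdays in weeks.values():
        prob += pulp.lpSum(y[f][d] for d in wdays) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
total = sum(y[f][d].value() for f in farms for d in days)
print(int(total))  # 3 farms x 2 weeks -> at most 6 visits
```

The real model layered capacity, penalty, and routing constraints on top, but every one of them followed this same pattern: a business rule rewritten as a linear inequality over binary variables.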
The hackathon was the crucible where this system was forged under extreme time pressure. The portfolio post is the laboratory analysis of what was created, a permanent record of the technical journey from zero to a functional optimization pipeline.
Both pieces are true to the same 24 hours, just viewed through different lenses. One is about the sprint, the other is about the blueprint we drafted while running it.
Explore the technical deep dive here: Building a Multi-Layer Optimization Engine for Livestock Logistics