The Hackathon that updated my Software
It was an uneventful evening. My friends and I were on a video call, discussing our usual topics, when we stumbled upon the subject of hackathons. We had made it a habit to apply to an average of four hackathons per semester. While the aim is always to win, we usually end up learning a lesson instead, and spoiler alert: we didn't win this one either, but I learnt something even better. One of my friends suggested that we attend a beach hackathon. For context, we aren't from a place close to a beach, so we excitedly applied.
In the first round, we had to submit a project on any topic and then pitch it to the judges, after which they asked us a bunch of questions that we answered smoothly. A few days later, we were notified that we had been selected as finalists. Anyone walking past us would have assumed we were mad, because we celebrated by holding each other's hands and jumping. Sure, we were happy that our project was selected, but a big part of us was also ecstatic to go to the beach with our friends!
On the morning of 29th January 2026, we reached Guruvayur, ready to have the time of our lives, which I am glad to report we did. But here is where the actual story starts. About two days before the event, we were given our problem statement, and we chose to do our project in the field of IoT.
Our problem statement was this: industries generate huge amounts of machine sensor data, but maintenance systems often raise alerts without explaining why a decision was made, making it hard for engineers to trust, verify, or act on those alerts confidently.
Our solution was to build an explainable predictive maintenance system that not only predicts failures but also clearly shows the reasoning behind every maintenance decision.
So let me walk you through what we actually built, because honestly, I think it’s pretty cool.
We called it TraceMaint, and the whole idea behind it came from one simple frustration: why do industrial AI systems tell you what is wrong but never why they think so? Engineers get an alert that says "Pump failure imminent" and they're just supposed to trust it? No reasoning, no context, no paper trail. Just a black box. That felt wrong to us, and that became our problem to solve.
The core philosophy we built around was something we kept calling "Trace First." The idea is that explainability shouldn't be something you bolt on after the AI makes a decision; it should be baked into every step of the process. Traditional systems guess first and explain later, and those explanations are often just the AI making something up that sounds reasonable. We didn't want that. We wanted every single alert to come with a full reasoning path that you could inspect, replay, and audit. If the system can't show you how it reached a conclusion, it shouldn't be trusted. Simple as that.
Now, the system itself is built as a pipeline with three main layers, and each one has a very specific job.
The first is the Data and Sensor Layer. Since we obviously couldn’t wire up a real factory floor at a beach hackathon, we simulated it. We wrote scripts that generate synthetic sensor data for industrial assets — things like pumps, conveyors, and compressors. The raw signals from these simulated machines then go through a feature extraction step where they’re processed into more meaningful metrics like RMS values, trends, and deltas. Think of this as the system’s eyes and ears — it’s constantly watching the machines and converting their readings into something the decision engine can actually work with.
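To make that concrete, here is a minimal sketch of what that layer could look like. The function and parameter names are my own for illustration (the actual simulation scripts aren't shown here), but it captures the two jobs described above: generating synthetic readings with a slow drift to mimic wear, and condensing a window of raw signal into RMS, trend, and delta features.

```python
import math
import random

def simulate_pump_readings(n=100, seed=42):
    """Hypothetical sketch: synthetic vibration readings for a pump,
    with a slow upward drift to mimic gradual wear plus Gaussian noise."""
    rng = random.Random(seed)
    return [1.0 + 0.005 * i + rng.gauss(0, 0.1) for i in range(n)]

def extract_features(window):
    """Condense a window of raw readings into the metrics mentioned above."""
    rms = math.sqrt(sum(x * x for x in window) / len(window))  # overall energy
    delta = window[-1] - window[0]                # net change across the window
    trend = delta / (len(window) - 1)             # average per-sample slope
    return {"rms": rms, "delta": delta, "trend": trend}

readings = simulate_pump_readings()
features = extract_features(readings[-20:])       # feature vector for the last 20 samples
```

The feature vector, not the raw signal, is what gets handed to the next layer, which keeps the decision engine's inputs small and interpretable.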
The second, and honestly the most interesting part, is the Trace Engine. This is the heart of the whole project. Instead of using a neural network that nobody can interrogate, we built a rules-based inference engine. Human-defined thresholds are applied to the feature vectors, and when a rule fires, the system doesn’t just note the outcome — it records exactly which rule fired, in what order, and how confidence accumulated across the decision. This gets packaged into what we call a reasoning_trace, a JSON object that is essentially a step-by-step diary of how the system made up its mind. We also built a small Streamlit UI on top of this so you can actually watch the traces come in live and inspect the system health in real time. Seeing it work on screen after two days of furious coding at a beach resort was genuinely one of the best feelings.
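The trace idea is easiest to see in code. Below is a small sketch under my own assumptions (rule names, thresholds, and weights are invented; the real TraceMaint rule set isn't published here): each rule is checked in order against the feature vector, and every check is appended to the trace whether or not it fired, along with the running confidence.

```python
import json

# Hypothetical rule set: (name, feature, threshold, confidence weight).
RULES = [
    ("high_rms",     "rms",   1.4,   0.4),
    ("rising_trend", "trend", 0.003, 0.3),
    ("large_delta",  "delta", 0.5,   0.3),
]

def evaluate(features):
    """Apply each rule in order, recording which fired and how
    confidence accumulated -- a reasoning_trace in the spirit of the post."""
    trace = {"steps": [], "confidence": 0.0, "decision": "healthy"}
    for name, feat, threshold, weight in RULES:
        fired = features[feat] > threshold
        if fired:
            trace["confidence"] += weight
        trace["steps"].append({
            "rule": name,
            "feature": feat,
            "value": features[feat],
            "threshold": threshold,
            "fired": fired,
            "confidence_so_far": round(trace["confidence"], 2),
        })
    if trace["confidence"] >= 0.6:  # illustrative alert cutoff
        trace["decision"] = "maintenance_alert"
    return trace

trace = evaluate({"rms": 1.6, "trend": 0.004, "delta": 0.2})
print(json.dumps(trace, indent=2))  # the step-by-step "diary" a UI can render
```

Because the trace is plain JSON, a dashboard (Streamlit in our case) can simply render each step as it arrives, which is exactly what makes the decision replayable and auditable.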