A couple of weeks ago, we held our Tech Joker Days. During these two days, engineers' only instruction is to work on whatever they think may help FAIRTIQ. I had this blog article planned, so I decided to work on a showcase to support it. I teamed up with Manu and we built Nicht FAIRTIQ (a German pun on nicht fertig, “not finished” or “not done”).
Nicht FAIRTIQ is a real-time counter that shows the number of trips people have taken with FAIRTIQ. Every time someone checks out, the counter goes up by one.
We created this within a day, but building the core of it took much less time. Most of our time went into setting up the code project, updating everything (as a CTO I don’t get to code very often) and fiddling with the CSS until it looked more or less like what we wanted. What’s really cool is that we did not have to ask anyone for support to create this. How is that even possible in such a complex software system?
FAIRTIQ has grown from a garage startup into a 70-person company. Although we’re keeping the startup spirit, lately it has become more and more difficult to experiment and quickly try out random stuff.
Today there are six dev teams. Every time you want to build something, you’re likely to involve at least one or two other teams, for example to create an API that you need. Usually, your crazy idea is not at the top of their list. So you end up writing the code yourself, but then you need someone to review it. You find yourself spending more time justifying why your tiny idea could potentially be great just to give it a chance to enter a development cycle. Bullshit. What if you could just do it? Without being blocked by anyone?
This is precisely the issue I wanted to tackle for my team and for the company. In order to stay ahead and keep innovating, it’s crucial that employees work in an environment that supports experimentation and personal initiatives. Not only culturally, but also with the right tooling.
To address the teams' interdependence issue, the key principle is to make the data available in its raw form, with stable interfaces. The team who owns or produces the data does not format it for a specific use case. Instead, the team who uses or consumes the data transforms it to fit their own needs. That enables multiple consumers to use a data source without requiring the producer to adapt it for every possible use case. Technically, that can be realised in various ways: REST APIs, data lakes and data streams are some examples.
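To make the idea concrete, here is a minimal sketch. The event shape and field names are invented for illustration: the producer publishes one raw journey event, and each consumer projects out only what it needs, without the producer ever knowing about either use case.

```python
# A raw journey event, roughly as a producer might publish it.
# All field names here are hypothetical, for illustration only.
raw_event = {
    "type": "journey-completed",
    "user_id": "u-123",
    "check_in": "2022-06-01T08:02:00Z",
    "check_out": "2022-06-01T08:31:00Z",
    "price_chf": 4.40,
}

def to_counter_update(event):
    """A trip counter only cares that a journey happened."""
    return 1 if event["type"] == "journey-completed" else 0

def to_revenue_record(event):
    """A billing consumer projects out a completely different shape."""
    return {"user": event["user_id"], "amount_chf": event["price_chf"]}

print(to_counter_update(raw_event))   # 1
print(to_revenue_record(raw_event))   # {'user': 'u-123', 'amount_chf': 4.4}
```

Neither transformation required asking the producing team for anything: the raw event is the stable interface.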
Whereas data lakes and REST APIs allow you to query data, they don’t provide an easy way to react to events, i.e. things that happened in the system, in real time. This is exactly what stream-processing platforms, such as Kafka, enable. Unlike queues, streams can have as many consumers as you want, because reading a stream does not modify it. It’s a bit like watching boats passing by on a river.
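This is not real Kafka, of course, but the consumer model can be sketched in a few lines of toy code: the stream is an append-only log, and each consumer keeps its own read offset, so one consumer reading never takes anything away from another.

```python
class Stream:
    """A toy append-only log: reading never removes events."""
    def __init__(self):
        self.events = []

    def publish(self, event):
        self.events.append(event)

class Consumer:
    """Each consumer tracks its own offset, independently of the others."""
    def __init__(self, stream):
        self.stream = stream
        self.offset = 0

    def poll(self):
        new = self.stream.events[self.offset:]
        self.offset = len(self.stream.events)
        return new

stream = Stream()
mapper = Consumer(stream)
counter = Consumer(stream)

stream.publish("check-in")
stream.publish("check-out")

print(mapper.poll())   # ['check-in', 'check-out']
print(counter.poll())  # ['check-in', 'check-out'] — same events, untouched
```

With a queue, the first consumer would have drained both messages; with a stream, a third, fourth or hundredth consumer can show up later and still read everything.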
We introduced Kafka one year ago and since then, teams have started to publish more and more events into it. Example events at FAIRTIQ could be “someone has checked in”, “the price of a journey has been computed” or “this user has changed their email address”.
Even though we need to further develop other types of data sources and provide better tooling for A/B testing and feature flags, stream processing was a big missing piece of our puzzle. If events and data are available for me to use off the shelf, I can build whatever I want without bugging anyone. I just take my computer, a big fat pizza (no pineapple, when in doubt), and start hacking. That’s how Manu and I built Nicht FAIRTIQ.
Nicht FAIRTIQ is a service that consumes the journey stream and adds one to an internal counter every time there’s a new journey. The counter is seeded from an internal API which returns the total number of journeys. The clients connect to the service using SSE (server-sent events). Every time the internal counter is updated, an SSE event that contains the counter’s value is sent to all connected clients.
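The core logic really is that small. Here is a minimal sketch (function names are made up, and the seed value is mocked where the real service calls our internal API); actual delivery would sit behind an HTTP server, but an SSE frame itself is just text with a `data:` field followed by a blank line.

```python
def seed_counter(total_from_api):
    """Initialise the counter from the internal 'total journeys' API (mocked here)."""
    return {"count": total_from_api}

def sse_frame(value):
    """Format a server-sent event: a 'data:' field terminated by a blank line."""
    return f"data: {value}\n\n"

def on_journey_event(state):
    """Called for every journey event consumed from the stream."""
    state["count"] += 1
    return sse_frame(state["count"])

state = seed_counter(41)          # pretend the API said 41 journeys so far
frame = on_journey_event(state)   # one more journey arrives on the stream
print(frame)                      # data: 42
```

Each connected client would receive that same frame, and the browser-side `EventSource` API turns it back into an event.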
Whereas Nicht FAIRTIQ is more of a pet project to showcase how Kafka helps make people and teams more independent, we implemented stream processing in multiple mission critical components at FAIRTIQ. For example, sensor data collected by the app during your journey (such as your position) is recorded as events into a Kafka stream. An obvious consumer of this stream is the journey mapper, which will compute the actual journey. When we developed Smart Stop - a feature that terminates your journey for you if you forget to check out - we consumed the exact same Kafka stream. And if tomorrow I wanted to create a rail network disruption detector, I could use the very same stream again.
Having event streams as part of the infrastructure also makes other architecture paradigms possible, such as event sourcing and microservices choreography, which allow for more decoupling and resilience.
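Event sourcing, for instance, means the event stream is the source of truth and current state is rebuilt by replaying it. A toy sketch (the event names are invented, not our actual schema):

```python
from functools import reduce

def apply(state, event):
    """Fold one event into the current state."""
    kind, payload = event
    if kind == "journey-completed":
        return {**state, "journeys": state.get("journeys", 0) + 1}
    if kind == "email-changed":
        return {**state, "email": payload}
    return state  # unknown events are simply ignored

events = [
    ("journey-completed", None),
    ("email-changed", "ada@example.com"),
    ("journey-completed", None),
]

# Replaying the whole stream from scratch yields the current state.
state = reduce(apply, events, {})
print(state)  # {'journeys': 2, 'email': 'ada@example.com'}
```

Because state is derived rather than stored, a new consumer with a new idea can replay the same stream and build a completely different view, which is exactly the kind of decoupling described above.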
Nicht FAIRTIQ, despite being a modest showcase, inspired not only engineers at FAIRTIQ but the whole company. As we’re, at the time of writing, in the middle of the Teamwiq (our company-wide retreat), I can tell you that we’ll soon have other awesome projects to show you. It’s interesting to see that the more some people use the data that’s been made available, the more others make data available.
This emphasises that technical decisions and the infrastructure put in place are key for startups and established companies to foster innovation.