LAIKA technology stack and design approach
This article describes the LAIKA tech stack so you can compare it with your current landscape and evaluate our approach. We also highlight the main solution design principles and concepts, so it can serve as a starting point for technical people who have just met our platform.
LAIKA is a digital platform that covers multiple areas like DAM, PCM, MCM, brand management, digital publications and many others.
Our platform is:
- modern
- scalable
- flexible
- extensible
- built to handle large amounts of data
- customizable
- designed for enterprise
- cross-platform
LAIKA is a headless, API-first platform. That means it can be used without any user interface at all, just by calling our APIs. But there is another big advantage: if you want to create a new frontend application (a web application, a mobile application or even a desktop tool), there is no need to implement anything complex. In most cases the UI application just calls the backend, shows the results, gathers user input and calls the backend again, as the sketch below shows. This is extremely useful when you want to build a simple yet handy user interface for a specific user group or persona. It's always better to create something fine-tailored than one unified UI that looks like a starship management panel, right?
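To show how thin such a frontend can be, here is a hypothetical client that fetches a single asset from the LAIKA API and displays one field. The host, route and response shape are illustrative assumptions, not the actual LAIKA API contract:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class ThinClientDemo
{
    static async Task Main()
    {
        // Hypothetical host and route; the real LAIKA API may differ.
        using var http = new HttpClient { BaseAddress = new Uri("https://laika.example.com/") };
        http.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var json = await http.GetStringAsync("api/assets/42");
        using var doc = JsonDocument.Parse(json);

        // All the "frontend logic" is: call the API, show the result.
        Console.WriteLine(doc.RootElement.GetProperty("name").GetString());
    }
}
```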
LAIKA is microservice-based. What does that mean? For us it means that the platform functionality is divided into independent, loosely coupled services. Each service can have its own logic and tech stack (although we try to keep the stack as unified as possible to avoid a "zoo of technologies"). To sync data between services we use an event-based model via a message bus: the sender just publishes an event and subscribers react to it appropriately; a hypothetical event contract is sketched below. Of course, there are still direct API calls and dependencies between services, but all of them are minimized as much as possible (usually, direct calls are used to implement a kind of transactional approach when, for example, we build a data model and need to be sure it was applied successfully).
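To make that concrete, here is what such an event contract and subscriber might look like in C#. The event name, payload fields and handler interface are illustrative assumptions, not LAIKA's actual types:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical integration event published when an asset is created;
// real LAIKA event names and payloads may differ.
public class AssetCreated
{
    public Guid AssetId { get; set; }
    public string Name { get; set; }
    public DateTime CreatedAtUtc { get; set; }
}

// Subscribers implement a handler instead of being called directly,
// so the publishing service never depends on them.
public interface IEventHandler<in TEvent>
{
    Task HandleAsync(TEvent evt, CancellationToken ct = default);
}

public class SearchIndexer : IEventHandler<AssetCreated>
{
    public Task HandleAsync(AssetCreated evt, CancellationToken ct = default)
    {
        // React to the event, e.g. update the search index;
        // a failure here never blocks the publisher.
        Console.WriteLine($"Indexing asset {evt.AssetId} ({evt.Name})");
        return Task.CompletedTask;
    }
}
```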
LAIKA has a modern and flexible technology stack to meet the requirements described above. We use Microsoft .NET Core as the framework for our modules, services and applications. This version of .NET is cross-platform and brings a lot of improvements over the old-school .NET Framework. Our team uses LTS versions only, since we prefer stability to the latest language features. The current version is Microsoft .NET Core 3.1 LTS with the latest security patches.
For our web applications (including our web APIs) we use Microsoft ASP.NET Core MVC with a light touch of plain JavaScript and jQuery (for our UI applications). Why? Partly because we know these technologies well and can keep up our high tempo of development. But not only that: it's also a cross-platform web technology with excellent performance, scalability and configuration features. For example, we use async action methods to get better throughput from the high-load methods of the LAIKA API.
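Here is a minimal sketch of such an async action method in ASP.NET Core MVC. The route and the repository abstraction are hypothetical; the async/await pattern is the point:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Hypothetical data-access abstraction, just for the sketch.
public interface IAssetRepository
{
    Task<object> FindByIdAsync(string id);
}

[ApiController]
[Route("api/assets")]
public class AssetsController : ControllerBase
{
    private readonly IAssetRepository _assets;

    public AssetsController(IAssetRepository assets) => _assets = assets;

    // While the database call is in flight, the request thread goes back
    // to the pool, so the same server can handle more concurrent calls.
    [HttpGet("{id}")]
    public async Task<IActionResult> GetAsset(string id)
    {
        var asset = await _assets.FindByIdAsync(id);
        if (asset == null)
            return NotFound();
        return Ok(asset);
    }
}
```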
Most likely you now have a question: does this mean I have to know and use .NET to install, extend and customize the LAIKA digital platform? And the answer is NO.
You don't need to know .NET, C#, ASP.NET MVC or anything like that to install, extend or customize the LAIKA platform. LAIKA is language and technology agnostic. All our web APIs implement HTTP REST or GraphQL and can be used from any language or technology that supports REST calls: Java, Ruby, Python, Haskell, Erlang, C99, C++, F#, Scala, Kotlin, bash, PowerShell, curl and many others. x86 assembly? I'm not sure, only if you want to send REST calls by firing interrupts directly at your network card. So for us, .NET Core is just a runtime installed on the machine (or inside a Docker container) to execute our code. It's like the JRE / JVM: I'm sure you already run a lot of Java-based applications without even knowing it. It works the same way with .NET Core.
We also use GraphQL for our search subsystem. This query language was created by Facebook and allows you to formulate queries in a modern and flexible way. You can specify query parameters and filters as well as the response format. This is great for integration use-cases where the communication contract is specified by the client, as it provides better backward compatibility.
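For illustration, here is how a client might send a GraphQL search query over plain HTTP from C#. The endpoint URL and the schema (assets, filter, fields) are hypothetical:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class GraphQlDemo
{
    static async Task Main()
    {
        // Hypothetical search query: the client picks exactly the fields it needs,
        // so adding new fields on the server never breaks this contract.
        var query = @"{ ""query"": ""{ assets(filter: { name: \""logo\"" }) { id name } }"" }";

        using var http = new HttpClient();
        var response = await http.PostAsync(
            "https://laika.example.com/graphql", // hypothetical endpoint
            new StringContent(query, Encoding.UTF8, "application/json"));

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```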
Let's talk a little about storage. LAIKA is a modern platform designed to work with large amounts of data. We use NoSQL as the database concept and MongoDB as the database engine in most cases. NoSQL fits our data model perfectly: assets, products and other objects can have varying metadata and other properties, and it's better to represent them as a single document rather than a set of relational tables and dictionaries. It wasn't a simple choice for us, since we are aware of the advantages and disadvantages of this approach. But MongoDB lets us minimize the negative effects of using a NoSQL database: for example, it's possible to run transactions on a whole replica set to ensure data consistency of the storage. MongoDB is scalable and shows good performance under complex conditions, and data represented as BSON documents simplifies backup, restore and disaster recovery.
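Here is a minimal sketch of such a replica-set transaction using the MongoDB C# driver. The database name, collection and document shape are hypothetical, and transactions require a replica set:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

class MongoTransactionDemo
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017/?replicaSet=rs0");
        var db = client.GetDatabase("laika");                  // hypothetical database name
        var assets = db.GetCollection<BsonDocument>("assets"); // hypothetical collection

        using (var session = client.StartSession())
        {
            // Both writes commit or roll back together on the replica set.
            session.WithTransaction((s, ct) =>
            {
                assets.InsertOne(s, new BsonDocument { { "name", "logo.png" }, { "status", "draft" } });
                assets.UpdateOne(s,
                    Builders<BsonDocument>.Filter.Eq("name", "logo.png"),
                    Builders<BsonDocument>.Update.Set("status", "published"));
                return true; // value returned by WithTransaction
            });
        }
    }
}
```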
But once you get locked into serious performance questions, the tendency is to push things as far as you can. So we also use Elasticsearch as an enterprise search platform. It's modern, fast, reliable, has great language-related features and is built on the same Lucene core as Apache Solr. Our team has worked with both Apache Solr and Elasticsearch, so we had a chance to compare these platforms. For us the choice was clear: Elasticsearch is better in terms of installation, configuration and maintenance, and has a lot of additional features. We use it for full-text search, faceted search, filtering, ordering and similar things. It's also possible to use our search subsystem as a source of asset or product metadata if needed (though it's not always reasonable).
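As a sketch of how a combined full-text and faceted query might look from C#, here is one using NEST, a high-level Elasticsearch client for .NET; the client choice, index name, document type and fields are assumptions:

```csharp
using Nest;

public class Asset
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Category { get; set; }
}

class SearchDemo
{
    static void Main()
    {
        var settings = new ConnectionSettings(new System.Uri("http://localhost:9200"))
            .DefaultIndex("assets"); // hypothetical index name
        var client = new ElasticClient(settings);

        // Full-text match on Name plus a terms aggregation for faceting by Category.
        var response = client.Search<Asset>(s => s
            .Query(q => q.Match(m => m.Field(f => f.Name).Query("summer logo")))
            .Aggregations(a => a.Terms("by_category", t => t.Field(f => f.Category.Suffix("keyword")))));

        System.Console.WriteLine($"Hits: {response.Total}");
    }
}
```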
We also use Elasticsearch as centralized log storage. Our platform consists of more than 60 components, so centralized logging is a must-have for us. Elasticsearch is a great structured storage for our logs, and we use Kibana on top of it for convenient access to logs and dashboards.
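As a sketch of how a .NET Core component might ship structured logs to Elasticsearch, here is one common option, Serilog with its Elasticsearch sink; the library choice, index format and service name here are assumptions:

```csharp
using Serilog;
using Serilog.Sinks.Elasticsearch;

class LoggingDemo
{
    static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .Enrich.WithProperty("Service", "asset-service") // tag events with their source component
            .WriteTo.Console()
            .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new System.Uri("http://localhost:9200"))
            {
                IndexFormat = "laika-logs-{0:yyyy.MM.dd}" // hypothetical index naming scheme
            })
            .CreateLogger();

        // Structured properties become queryable fields in Kibana.
        Log.Information("Asset {AssetId} published by {User}", "42", "editor01");
        Log.CloseAndFlush();
    }
}
```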
To sync changes between services we use the message bus approach and an event-based model. Direct calls between services build complex relationships between them, which becomes a problem once you have a large number of services. A message bus, in contrast, creates loosely coupled systems, which is a big plus for a fast-growing platform. We use Apache Kafka for this purpose. Of course, there were other candidates: RabbitMQ, MSMQ, Microsoft Service Bus and so on. We prefer Kafka over the other options since it's scalable and shows great performance on large amounts of data, and the event-based model is native to Kafka. As a downside, it has a complex architecture and configuration. RabbitMQ could also be a good option: it supports an event-based model (though its native approach is queue management) and it's Erlang-based.
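As a sketch of publishing an event from C#, here is one using the Confluent.Kafka client; the client choice, topic name and payload are assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class KafkaPublishDemo
{
    static async Task Main()
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        using var producer = new ProducerBuilder<string, string>(config).Build();

        // Publish-and-forget: subscribers consume the event at their own pace,
        // so the publisher never depends on who is listening.
        var result = await producer.ProduceAsync(
            "laika.assets.created", // hypothetical topic name
            new Message<string, string>
            {
                Key = "42",
                Value = @"{ ""assetId"": ""42"", ""name"": ""logo.png"" }"
            });

        Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
    }
}
```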
We also use Redis, but only for things like shared sessions for our web applications. Yes, it could be used for other caching scenarios, but for now LAIKA uses it only this way. Maybe in the future we will implement a shared cache for our APIs using Redis.
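In ASP.NET Core, backing shared sessions with Redis means plugging a Redis distributed cache under the session middleware. A sketch, with a hypothetical Redis endpoint:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Session state is stored in Redis, so any instance behind the
        // load balancer can serve any user's next request.
        services.AddStackExchangeRedisCache(options =>
        {
            options.Configuration = "localhost:6379"; // hypothetical Redis endpoint
            options.InstanceName = "laika-sessions-";
        });
        services.AddSession();
        services.AddControllersWithViews();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseSession(); // after routing, before the endpoints that read session state
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```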
So, as you can see, we have a modern and robust technology stack, and platform-wide design principles that allow it to scale and perform well under pressure.
Stay with us to learn more about LAIKA internals! There are plenty of topics to discuss: maintenance, deployment strategy, performance and many more.