LAIKA deployment options

This article describes the installation options for LAIKA across different use cases, so you can compare them with your specific landscape and even estimate your environment costs.

LAIKA is a product created for the enterprise, and enterprise environments tend to impose strong requirements on infrastructure: each customer is special and in most cases has its own preferences. So we tried to make LAIKA as flexible as possible in terms of technologies and hosting. LAIKA is language agnostic (we have a separate article about it), but it is also cross-platform and cloud agnostic. Besides that, we offer several deployment options.

LAIKA is based on .NET Core, which means it is cross-platform. It can be installed and run on the most popular operating systems: Windows, Linux and macOS. Hosting scenarios may differ, but as a .NET Core application LAIKA ships with its own cross-platform built-in web server, Kestrel, so hosting basically comes down to putting a reverse proxy in front of it: IIS + Kestrel on Windows, NGINX or Apache + Kestrel on Linux, and so on.
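As a sketch, a minimal NGINX reverse-proxy configuration in front of Kestrel could look like this (the server name and port are assumptions; the actual values depend on your LAIKA configuration):

```nginx
server {
    listen 80;
    server_name laika.example.com;   # hypothetical host name

    location / {
        # Kestrel is assumed to listen on localhost:5000 here
        proxy_pass         http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
```

The forwarded headers matter: the application behind the proxy needs them to reconstruct the original scheme and client address.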

LAIKA is cloud agnostic. We cannot know in advance which cloud a customer will prefer; sometimes it is not a cloud at all but an on-premises infrastructure. Yes, in some cases it could be better to use a managed cloud service instead of a bundled component, or to deploy our microservices as serverless functions in a particular cloud. We deliberately avoid such dependencies so that LAIKA can run in different conditions and environments. At the same time, this is not a limitation: if you need a cloud-hosted MongoDB installation instead of the built-in local one, switching is just a matter of changing the connection string; and if you want to use a cloud service instead of built-in LAIKA functionality, that is also possible, since we usually provide extension points in those parts of the application. LAIKA can run in almost any cloud, for example Amazon Web Services, Microsoft Azure or Google Cloud Platform.
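For illustration, switching from the built-in local MongoDB to a managed cloud cluster is typically just a connection-string change (the setting name and host names below are hypothetical, not LAIKA's actual configuration keys):

```
# before: built-in local MongoDB installation
ConnectionString = "mongodb://localhost:27017/laika"

# after: managed cloud cluster (e.g. MongoDB Atlas); hostname is hypothetical
ConnectionString = "mongodb+srv://laika-user:<password>@cluster0.example.mongodb.net/laika"
```

No application code changes are required for such a switch; the MongoDB driver resolves the cluster topology from the connection string.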


Speaking of deployment, there are two main options. The simplest and most straightforward is a VM installation. Only two virtual machines are needed to install and run LAIKA: one for the applications and one for the databases, each a general-purpose large shape (2 CPUs, 8 GB RAM, plus disk space for the database and assets). LAIKA is scalable, so you can adjust these specs:

  • increase the specs of the database machine when you expect a large amount of data or data-migration activity (for example, to an xlarge shape with 4 CPUs and 16 GB RAM)
  • increase the specs of the application VM if you expect heavy user activity or processing, or split the LAIKA services across several machines to distribute the load
  • put a load balancer in front and create clusters for the main components

Does that look too manual and too static? It is! The scenario above is intended for simple use cases: a development environment, or an installation for a small group of end users (such as a marketing team). For real-life use cases, when LAIKA is integrated into a data flow and stores a large amount of data, it is better to use the auto-scaling deployment scenario. In that case LAIKA runs under Kubernetes control and the whole platform becomes scalable. This scenario involves more infrastructure and requires more components, but at the same time it is ready for any kind of challenge. Deployment here is driven by Terraform infrastructure-as-code (IaC) scripts developed for the specific cloud.

Managing 60+ microservices, their replicas and so on could be a headache; the LAIKA IaC scripts do it for you. They pull the latest Docker images from the LAIKA repositories and apply them according to the configuration. Of course, the number of VMs, their shapes, the scalability strategy for the main components and so on are all configurable.
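As an illustration, the tunable parts of such an IaC setup might be expressed as Terraform variables like these (the variable names and defaults are hypothetical, not the actual LAIKA scripts):

```hcl
# Hypothetical variables; the real LAIKA Terraform scripts define their own.
variable "app_node_count" {
  description = "Number of Kubernetes worker nodes for LAIKA services"
  type        = number
  default     = 3
}

variable "db_instance_shape" {
  description = "VM shape for the database tier"
  type        = string
  default     = "xlarge-4cpu-16gb"
}

variable "autoscale_max_replicas" {
  description = "Upper bound for horizontal scaling of the main components"
  type        = number
  default     = 10
}
```

Keeping such knobs in variables means an environment can be resized by editing one file and re-applying, instead of touching the resource definitions themselves.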

This deployment scenario supports complex usage patterns: seasonal activities (for example, preparing for an event or for a new collection of products) or data migrations, scaling up when needed and scaling back down to save costs when the extra capacity is no longer required.


Our platform is also easy to maintain. Platform-wide logs are collected and stored in Elasticsearch, with Kibana on top for log navigation, filtering and near-real-time monitoring.

We also have platform-wide telemetry, and it goes beyond the usual VM metrics: we gather details from within the application and supply them to Prometheus, so you can track various characteristics in near real time, such as the number of threads in use, current load, response times of specific methods and machine resources. Grafana is responsible for visualizing these metrics.

Backups are quite simple with LAIKA. Everything is file-based, so you just need to back up:

  • a dump of the MongoDB databases made with the mongodump tool
  • the files on the file storage (assets and previews)
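The two items above can be sketched as a simple backup script (the paths, the MongoDB host and the backup destination are assumptions to adapt to your environment):

```shell
#!/bin/sh
# Hypothetical paths and hosts; adjust to your installation.
STAMP=$(date +%Y-%m-%d)
BACKUP_DIR=/backups/laika/$STAMP

mkdir -p "$BACKUP_DIR"

# 1. Dump all MongoDB databases with the mongodump tool
mongodump --host localhost:27017 --out "$BACKUP_DIR/mongo"

# 2. Copy the file storage (assets and previews)
rsync -a /var/laika/storage/ "$BACKUP_DIR/storage/"
```

A script like this can run nightly from cron, or the same two steps can be delegated to a company-wide backup solution.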

In case of disaster, the recovery plan is pretty simple:

  1. apply the infrastructure using the Terraform scripts (the IaC scripts also deploy the applications and services)
  2. restore the files to the file storage
  3. restore the databases
  4. rebuild the Elasticsearch index

So you can use any tool that fits your specific case, from cron + rsync to company-wide backup solutions or cloud services.
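The four recovery steps above could be sketched as follows (the hosts, paths and the reindex endpoint are all assumptions; the actual commands depend on your installation):

```shell
#!/bin/sh
# Hypothetical disaster-recovery run; adjust paths and hosts to your setup.

# 1. Recreate the infrastructure (also deploys the applications and services)
terraform apply -auto-approve

# 2. Restore the files to the file storage
rsync -a /backups/laika/latest/storage/ /var/laika/storage/

# 3. Restore the MongoDB databases from the mongodump output
mongorestore --host localhost:27017 /backups/laika/latest/mongo

# 4. Rebuild the Elasticsearch index (this endpoint is hypothetical)
curl -X POST http://localhost:8080/api/admin/reindex
```

Because the index is rebuilt from the restored data, Elasticsearch itself does not need to be part of the backup.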


As you can see, our infrastructure solutions and deployment scenarios are flexible enough to adapt to almost any real-life use case and standard. LAIKA is cross-platform and platform/cloud agnostic, and can be deployed in anything from a simple xcopy setup to Kubernetes-based auto-scaling clusters.

Still have questions? Please don't hesitate to ask us! We're always happy to help and can suggest or estimate a LAIKA installation for your specific case.