Architectural Overview

layline.io’s design follows the paradigm of a microservices architecture. As such, individual workflows can be

  • created
  • deployed
  • and operated

in an autonomous fashion.

The core processing framework of layline.io is based on Akka Streams, an open-source toolkit that embraces the actor model for scalable and responsive large-scale processing.

layline.io encapsulates Akka Streams and adds crucial functionality to make the power of Akka available to a broader audience.
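For readers unfamiliar with Akka Streams, the following minimal Scala sketch shows the kind of pipeline the toolkit revolves around: a source, a transformation stage, and a sink, wired together and run asynchronously. This is plain Akka Streams code, not layline.io configuration; all names and values are illustrative.

```scala
// Minimal Akka Streams pipeline: an asynchronous source -> flow -> sink chain.
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Sink, Source}

object MinimalPipeline extends App {
  // Since Akka 2.6, an implicit ActorSystem also provides the stream materializer.
  implicit val system: ActorSystem = ActorSystem("minimal-pipeline")

  val source = Source(1 to 10)                    // emit the numbers 1..10
  val enrich = Flow[Int].map(n => s"record-$n")   // a trivial transformation stage
  val sink   = Sink.foreach[String](println)      // consume and print each element

  source.via(enrich).runWith(sink)                // materialize and run the stream
}
```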

layline.io enables you to configure, deploy, and operate reactive event-processing scenarios without actual object coding. It productizes the actor-model paradigm into an army of reactive engines which represent configurable microservices. In contrast to traditional microservices, however, the engines form part of a virtual data mesh in which all engines are interchangeable, monitor each other's health and load, and balance work among themselves. All for one, and one for all.

This allows workflows to be cloned automatically under load pressure, load to be rebalanced across nodes, and all services to be observed, spawned, torn down, and balanced across the resulting data mesh.

Components

[Diagram: layline.io moving parts (architecture)]

01 Web UI

layline.io’s web interface supports you in creating and modifying individual projects and workflows, which can then be executed on 1..n Reactive Engines, which in turn can run on 1..n nodes.

02 Web Server

The layline.io Web Server provides the web services needed to configure workflows. Workflows are held in the local web repository until they are deployed to a data mesh composed of one or more Reactive Engines.

03 Reactive Engine

One Reactive Engine accommodates one or more workflows. Based on load-pressure conditions and setup, the engine can dynamically clone and spawn additional workflow instances, or tear them down when they are no longer needed.
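How the engine decides to scale is internal to layline.io. As a rough illustration of the underlying idea only, the following Akka Streams sketch fans work out across a configurable number of parallel workers; `process` and `parallelism` are hypothetical names, not layline.io APIs.

```scala
// Sketch: scale a processing stage by running several workers in parallel.
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

object ParallelWorkers extends App {
  implicit val system: ActorSystem = ActorSystem("parallel-workers")
  import system.dispatcher // execution context for the Futures below

  val parallelism = 4 // in a real system this could be derived from observed load

  def process(n: Int): Future[String] = Future(s"processed-$n")

  Source(1 to 100)
    .mapAsync(parallelism)(process) // at most `parallelism` elements in flight
    .runWith(Sink.foreach(println))
}
```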

04 Reactive Data Mesh

Even though you can run just one Reactive Engine on one physical node, the full power of layline.io is unleashed by running several Reactive Engines in concert, forming a Reactive Data Mesh.

In this scenario, all Reactive Engines have situational awareness across the data mesh. Data pressure, failure, and imbalance can all be mitigated by engines coordinating with each other and agreeing to rebalance and redistribute load across the whole data mesh.
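The document does not spell out the mechanism behind this situational awareness. As a hedged illustration of the kind of clustering primitive a mesh like this can build on, the sketch below shows a single Akka node joining a set of seed nodes, after which membership changes are gossiped among all members. The system name, host, and port are made up for the example.

```scala
// Sketch: one node joining an Akka cluster of seed nodes.
import akka.actor.{ActorSystem, Address}
import akka.cluster.Cluster
import com.typesafe.config.ConfigFactory

object MeshNode extends App {
  // The cluster actor provider must be enabled; normally this lives in
  // application.conf, but it is inlined here to keep the sketch self-contained.
  val config = ConfigFactory.parseString(
    """
    akka.actor.provider = cluster
    akka.remote.artery.canonical.hostname = "127.0.0.1"
    akka.remote.artery.canonical.port = 25520
    """).withFallback(ConfigFactory.load())

  implicit val system: ActorSystem = ActorSystem("mesh", config)

  // Join the known seed nodes; node up/down/unreachable events are then
  // propagated to every member of the cluster.
  Cluster(system).joinSeedNodes(List(Address("akka", "mesh", "127.0.0.1", 25520)))
}
```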

05 Public/Private Hub*

The layline.io Public Hub (launching soon) will provide an open playground to configure and test workflow scenarios and share them with others. Users can pick from a growing repository of ready-made projects, workflows, and components to copy and reuse in their own setups.

The same technology can be used in a private setting, then called a “Private Hub”.

Based on Akka Streams

layline.io uses Akka and Akka Streams to support important parts of its processing architecture.

Akka Streams is a proven open-source stream-processing library based on an actor-model architecture. It provides important capabilities to layline.io, some of which are:

    • Distributed deployment
    • Asynchronous and non-blocking processing
    • Elastic scalability
    • Decentralized processing
    • Back-pressure awareness (see the sketch below)
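Back-pressure is the property most directly visible in code: a fast producer is automatically slowed to the pace of its slowest downstream stage, with no explicit buffering or dropping logic. A minimal sketch, using a throttled stage as the slow consumer (the rate is arbitrary):

```scala
// Back-pressure in action: the throttle stage limits throughput and
// back-pressures the fast source, so only 2 elements per second flow through.
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._

object BackPressureDemo extends App {
  implicit val system: ActorSystem = ActorSystem("backpressure-demo")

  Source(1 to 1000000)              // a producer that could emit very fast
    .throttle(2, 1.second)          // rate-limiting stage; demand propagates upstream
    .runWith(Sink.foreach(println)) // the sink sees at most 2 elements per second
}
```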

Akka Streams is used by some of the most accomplished companies in the world.


Distributed

Cloud-native architecture designed to run in a distributed fashion

layline.io Reactive Engines work in concert and form a reactive and distributed data mesh. This makes it possible to locate crucial processing close to the data's origin, which dramatically reduces latency as well as overall processing cost. It also ensures that processing can continue even when connectivity between individual nodes is temporarily hampered.

Resilience

Keeps running in unstable environments

The system stays responsive in the face of failure. This applies not only to highly-available, mission-critical systems — any system that is not resilient will be unresponsive after a failure. Resilience is achieved by replication, containment, isolation and delegation. Failures are contained within each component, isolating components from each other and thereby ensuring that parts of the system can fail and recover without compromising the system as a whole.

Recovery of each component is delegated to another (external) component and high-availability is ensured by replication where necessary. The client of a component is not burdened with handling its failures.

(Reactive Manifesto)
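layline.io applies these principles at the platform level. As a flavor of how the underlying toolkit contains failures inside a single stream, the sketch below uses the standard Akka Streams supervision pattern: a decider resumes processing when a known exception occurs, so one bad element does not take down the whole pipeline.

```scala
// Failure containment within a stream via a supervision strategy.
import akka.actor.ActorSystem
import akka.stream.{ActorAttributes, Supervision}
import akka.stream.scaladsl.{Sink, Source}

object FailureContainment extends App {
  implicit val system: ActorSystem = ActorSystem("failure-containment")

  val decider: Supervision.Decider = {
    case _: ArithmeticException => Supervision.Resume // skip the offending element
    case _                      => Supervision.Stop   // anything else stops the stream
  }

  Source(-2 to 2)
    .map(100 / _) // throws ArithmeticException when the element is 0
    .withAttributes(ActorAttributes.supervisionStrategy(decider))
    .runWith(Sink.foreach(println)) // prints -50, -100, 100, 50; the 0 is skipped
}
```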

Scale Up

layline.io provides the technology to power enormous growth in function, agility, scalability, and resilience. Create a reactive data mesh using the distributed backbone of layline.io. Tap into sources, create services in an agile fashion, and grow with the world.

Auto-scaling workflows within nodes

layline.io’s architecture takes care of dynamically scaling workflows up and down based on data pressure and the individual node and workflow configuration.

Scale Out & Balance

In addition to scaling by way of spawning additional workflow workers within a node, layline.io can also automatically scale and balance load across nodes in the data mesh.