Motivation for layline.io

What motivated us to create layline.io was a number of real-life problems in the area of event processing which were largely unaddressed by existing solutions. We deemed these issues important enough to be worth addressing ourselves.

  • Real-time data volumes are expected to increase tenfold over the next few years.
  • Companies struggle to reap the transactional value of real-time data, i.e. to do something useful with it within the first 0-3 seconds.
  • Companies' system architectures are steadily migrating to the cloud, without proper solutions to support this in all respects.
  • There is a lack of actual products supporting distributed real-time event processing; instead, teams have to create custom-programmed solutions based on development toolkits.
  • Awesome streaming technologies like Akka and Flink exist, but they lack a way to express actual streamed business logic in a configurable manner.
  • And lots more.

Configuration

Main notions of configuring layline.io

Idea

The idea behind layline.io is quite simple: allow users to

  1. create individual workflow processing logic,
  2. which can be deployed across a distributed processing network (physically, logically, and geographically),
  3. which can be configured and deployed entirely without custom coding, and
  4. which supports any event-processing scenario from small to extremely large in a highly scalable and resilient environment (or simply on your laptop!).

Comprehensive configuration environment

layline.io utilizes the Akka Streams toolkit under the hood. It is an awesome piece of technology, but there are a number of things it lacks which we have added. One of them is a product-grade configurability framework, which layline.io provides in the form of a configuration concept paired with a web-based configuration environment.

Workflows

Workflows are a core part of layline.io; everything revolves around them. The web-based configuration center helps you configure workflows. Each workflow manifests event-processing logic that runs on one or more nodes.

Within a workflow you can pick from a number of pre-defined, yet configurable, processors. Examples include filtering, data mapping, custom JavaScript logic, and more.

Data awareness

By data awareness we mean that layline.io is able to understand any form of structured data by way of configuration. You simply define the data format using layline.io's own grammar language and can instantly see the result using a sample data file.

(Figure: Generic format asset editor)
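
To make this tangible, here is a purely illustrative sketch of the kind of information such a format definition captures, expressed as a plain JavaScript object. This is not layline.io's actual grammar syntax; all names are invented:

  // Purely illustrative, NOT layline.io's actual grammar syntax:
  // the kind of information a format definition captures.
  const orderRecordFormat = {
    name: "OrderRecord",
    type: "delimited",
    separator: ",",
    fields: [
      { name: "orderId", type: "string" },
      { name: "amount", type: "decimal" },
      { name: "createdAt", type: "datetime" },
    ],
  };

  // Once a format like this is configured, parsed records can be
  // addressed by field name (e.g. orderId) instead of raw offsets.
  console.log(orderRecordFormat.fields.map((f) => f.name));
  // -> [ 'orderId', 'amount', 'createdAt' ]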

JavaScript logic

For now, individual business logic can be "configured" using vanilla JavaScript within workflows. Wherever logic is required, you can simply use JavaScript to analyze and manipulate the data. This opens the system to an extremely broad array of use cases while it remains a product: the logic is part of the configuration, not a custom-coded one-off development.
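
As a flavor of what such logic can look like, here is a minimal sketch in plain JavaScript. The function and the event fields are hypothetical and only illustrate the kind of per-event analysis and manipulation meant above; the actual hooks and message API are defined by layline.io's JavaScript processor:

  // Hypothetical per-event business logic: filter and enrich.
  // The event structure (orderId, amount) is assumed for illustration.
  function processEvent(event) {
    // Drop events below a minimum amount.
    if (event.amount < 10) {
      return null; // filtered out
    }
    // Enrich the event with a derived field.
    return {
      ...event,
      amountWithVat: event.amount * 1.19, // assumed 19% VAT
    };
  }

  // Example usage with a sample event:
  console.log(processEvent({ orderId: "A-42", amount: 100 }));
  // -> { orderId: 'A-42', amount: 100, amountWithVat: 119 }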

Configuration hierarchy

layline.io uses a number of "parts" within a configuration which are organized in a hierarchy:

The parts are explained in more detail below.

Projects and workflows

Using the web-driven configuration you can define a project. Each project in turn can hold 1..n individual workflows, and each workflow will later map to one or more nodes on which it will actually run.

Workflow processors

Each workflow consists of a number of processors which are wired up to form the actual event flow. Processors can be picked from a list of pre-made processors (which is continuously extended). Generally you can distinguish between source, logic, and sink processors. A workflow can have an unlimited number of processors, with one exception: only one source processor may be present in any given workflow.
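
Conceptually, a workflow is a small directed graph of processors. The following sketch models that with plain JavaScript objects; the processor names and settings are invented and do not reflect layline.io's configuration schema:

  // Hypothetical workflow model: one source, logic in between, a sink.
  const workflow = {
    name: "order-ingest",
    processors: [
      { name: "kafka-in", kind: "source", settings: { topic: "orders" } },
      { name: "filter",   kind: "logic",  settings: { script: "filter.js" } },
      { name: "file-out", kind: "sink",   settings: { directory: "/out" } },
    ],
    // Links describe the event flow between processors.
    links: [
      ["kafka-in", "filter"],
      ["filter", "file-out"],
    ],
  };

  // The single-source rule from above can be checked mechanically:
  const sources = workflow.processors.filter((p) => p.kind === "source");
  console.assert(sources.length === 1, "exactly one source per workflow");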

Assets

An asset is a pre-made template for a processor, workflow, or project. This is useful for providing reusable components. For example, when a processor is added to a workflow, you can decide

  1. whether you want to create a completely new asset on which this processor is based, or
  2. whether you want to reuse an existing asset and inherit all its settings for this processor, or
  3. whether you want to use no asset at all and simply enter the configuration for this one processor, never reusing any of it.

Once a processor is based on an asset, all of the asset's settings are inherited. They can be individually overwritten to deviate from the asset's standard settings.

Assets in turn can be based on other assets. For example, a Kafka source asset B can be based on a Kafka source asset A; B then inherits all settings from A, individual properties can be overwritten, and so forth.
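
The effect of this inheritance is easy to picture as a merge of settings, sketched below in plain JavaScript. The Kafka settings shown are invented; in layline.io this happens through configuration, not code:

  // Hypothetical Kafka source assets: B is based on A and
  // overrides a single property.
  const kafkaSourceA = {
    brokers: ["broker1:9092", "broker2:9092"],
    topic: "orders",
    groupId: "ingest",
  };

  // Inheritance behaves like a merge in which B's own values win:
  const kafkaSourceB = {
    ...kafkaSourceA,
    topic: "orders-eu", // overridden; brokers and groupId are inherited
  };

  console.log(kafkaSourceB.topic);   // -> orders-eu  (own value)
  console.log(kafkaSourceB.groupId); // -> ingest     (inherited from A)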

This concept allows for high reusability of settings across projects and workflows.

(Figure: Generic format asset editor)

Reactive Cluster

Reactive Engine

At the heart of layline.io's processing power is the Reactive Engine. This component provides everything necessary to execute workflows, scale up and out, provide resiliency, and much more.

An important part of this is covered by Akka Streams, the popular open-source toolkit.

Reactive Cluster

You can combine almost any number of Reactive Engines to form a Reactive Cluster. The cluster provides the actual scalability, resilience, distribution of logic and processing, etc.

The cluster receives a project and its workflows as a configuration, which is then executed within it.

Deployment

Deployment concept

A project and its configured workflows are deployed to a cluster. The cluster runs 1..n Reactive Engines, each of which is assigned one or more roles. Workflows also have roles assigned.

Upon receipt of the project configuration through a seed node, the cluster automatically distributes the configuration to all of its members. Reactive Engines in the cluster then pick up the workflows which match their roles; that is how they know which workflows to execute.
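
The matching rule can be summarized as: an engine executes a workflow if the two share at least one role. A minimal sketch, with invented node and role names:

  // Hypothetical role-based assignment: each engine picks up every
  // workflow whose roles intersect its own.
  const engines = [
    { node: "node-1", roles: ["ingest"] },
    { node: "node-2", roles: ["ingest", "enrich"] },
  ];

  const workflows = [
    { name: "order-ingest", roles: ["ingest"] },
    { name: "order-enrich", roles: ["enrich"] },
  ];

  for (const engine of engines) {
    const assigned = workflows.filter((wf) =>
      wf.roles.some((role) => engine.roles.includes(role))
    );
    console.log(engine.node, "->", assigned.map((wf) => wf.name));
  }
  // node-1 -> [ 'order-ingest' ]
  // node-2 -> [ 'order-ingest', 'order-enrich' ]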

(Figure: Concept overview)

Deploy through the UI

A project configured by way of the web configuration center can either be deployed directly to a cluster using the user interface, or exported to a configuration file. The file can then be uploaded manually into a cluster which is inaccessible to the web interface.

Assignment of deployment

A configuration deployed to a cluster can be assigned to the whole cluster (default deployment) or to specific nodes within it.

A cluster can therefore run multiple deployments at the same time, on different nodes.
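
As a sketch of how such co-existing deployments resolve to nodes (all names invented):

  // Hypothetical: one cluster, two deployments with different scopes.
  const clusterNodes = ["node-1", "node-2", "node-3"];

  const deployments = [
    { project: "billing", nodes: "all" },      // default deployment
    { project: "fraud",   nodes: ["node-3"] }, // assigned to specific nodes
  ];

  for (const node of clusterNodes) {
    const running = deployments
      .filter((d) => d.nodes === "all" || d.nodes.includes(node))
      .map((d) => d.project);
    console.log(node, "runs", running);
  }
  // node-1 runs [ 'billing' ]
  // node-2 runs [ 'billing' ]
  // node-3 runs [ 'billing', 'fraud' ]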

Operations & monitoring

Monitoring

Workflows in clusters can be monitored for operability and performance.

Operations

Operations enables complete control of the cluster: new nodes can join, existing nodes can be ejected, the assignment of deployed configurations to nodes can be modified at runtime, and the number of workflow instances to run can be adjusted manually (if not auto-scaled).