Architecture and Components

OMniLeads is a multi-component application; each component resides in its own GitLab repository, which stores the source code and/or configuration, along with build, deploy, and CI/CD pipeline scripts.

Although the components of a running OMniLeads instance interact as a unit through TCP/IP connections, each one is an independent entity with its own GitLab repository and DevOps cycle.

At the build level, each component is distributed as RPM packages (for installations on Linux) and Docker images (for installations on Docker Engine). Regarding deployment, each component provides a first_boot_installer.tpl script, which can be invoked as a provisioner to deploy the component on a Linux host in an automated way, or run manually after editing the variables inside the script.

We can think of each component as a piece of a puzzle with its own attributes:

_images/arq_component.png

Description of each component

Each component is described below:

  • OMLApp (https://gitlab.com/omnileads/ominicontacto): The web application (Python/Django) is contained in OMLApp.

    Nginx is the web server that receives HTTPS requests and redirects them to OMLApp (Django/UWSGI). OMLApp interacts with various components, whether to store/provision configuration, to generate calls, or to render the agent/campaign monitoring and report views.

    OMLApp uses PostgreSQL as its SQL engine, and Redis both as a cache and to provision Asterisk configuration, either via .conf files or by generating certain key/value structures that Asterisk consults in real time when processing calls on campaigns. OMLApp connects to the Asterisk AMI interface to generate calls and reload certain configuration. It also connects to the WombatDialer API whenever it needs to generate campaigns with predictive dialing.

_images/arq_omlapp.png
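To make the AMI interaction above concrete, the sketch below composes an AMI Originate action in the wire format Asterisk expects (CRLF-separated headers, terminated by a blank line). The channel, context, and extension values are hypothetical, not OMniLeads defaults:

```python
# Sketch: composing an Asterisk AMI "Originate" action such as OMLApp
# might send when generating a call. All values below are hypothetical.

def ami_action(action: str, **headers: str) -> str:
    """Serialize an AMI action into its wire format:
    CRLF-separated headers terminated by a blank line."""
    lines = [f"Action: {action}"]
    lines += [f"{key}: {value}" for key, value in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

# A hypothetical click-to-call towards an agent:
originate = ami_action(
    "Originate",
    Channel="PJSIP/1001",       # hypothetical agent channel
    Context="from-oml",         # hypothetical dialplan context
    Exten="5551234",
    Priority="1",
    CallerID="OMniLeads <1001>",
)
```

In a real session this string would be written to the AMI TCP socket after a successful `Action: Login`.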
  • Asterisk (https://gitlab.com/omnileads/omlacd): OMniLeads relies on the Asterisk framework as the basis of its ACD (Automatic Call Distributor). It is responsible for implementing the business logic (telephone campaigns, recordings, reports, and telephone channel metrics). At the networking level, Asterisk receives AMI requests from OMLApp and WombatDialer, while it opens connections towards PostgreSQL to write logs, towards Redis to query campaign parameters provisioned from OMLApp, and towards Nginx to establish the websocket used to fetch the content of the Asterisk configuration files (etc/asterisk) generated from OMLApp.
_images/arq_omlacd.png
  • Kamailio (https://gitlab.com/omnileads/omlkamailio): This component works in conjunction with RTPEngine (WebRTC bridge) to manage WebRTC communications (SIP over WSS) with agents, while holding sessions (SIP over UDP) against Asterisk. Kamailio receives the REGISTER requests generated by the agents' webphone (JSSIP), and therefore handles the *registration and location* of users, using Redis to store the network address of each user.

    For Asterisk, all agents are reachable at Kamailio's URI, so Kamailio receives INVITEs (UDP 5060) from Asterisk whenever it needs to locate an agent to connect a call. Finally, it is worth mentioning that Kamailio opens connections towards RTPEngine (TCP 22222) to request an SDP when establishing SIP sessions between Asterisk (VoIP) and WebRTC agents.

_images/arq_omlkamailio.png
  • RTPEngine (https://gitlab.com/omnileads/omlrtpengine): OMniLeads relies on RTPEngine for transcoding and bridging between WebRTC and VoIP technologies at the audio level. The component maintains sRTP (WebRTC) audio channels with agent users on one side, while establishing RTP (VoIP) channels against Asterisk on the other. RTPEngine receives connections from Kamailio on port 22222.
_images/arq_omlrtpengine.png
  • Nginx (https://gitlab.com/omnileads/omlnginx): The project's web server is Nginx, and its task is to receive TCP 443 requests from users as well as from some components such as Asterisk. Nginx is invoked every time a user accesses the URL of the deployed environment. If the user request targets a view of the Django web application, Nginx redirects it to UWSGI; if the request targets the REGISTER of the user's JSSIP webphone, Nginx redirects it to Kamailio (establishing a SIP websocket). In addition, Nginx is invoked by Asterisk when establishing the websocket against OMniLeads' Python websocket component, to provision the configuration generated by OMLApp.
_images/arq_omlnginx.png
  • Python websocket (https://gitlab.com/omnileads/omnileads-websockets): OMniLeads relies on a websockets server (based on Python) to run background tasks (reports and CSV generation) and to receive an asynchronous notification when each task has completed, which optimizes application performance. It is also used as a bridge between OMLApp and Asterisk when provisioning the configuration of the .conf files (etc/asterisk).

    When Asterisk starts, a process connects a websocket to this component and, from then on, receives notifications every time configuration changes are provisioned. In its default configuration the component listens on TCP port 8000, and the connections it receives are always redirected from Nginx.

_images/arq_omlws.png
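The notify-on-completion pattern described above can be sketched with standard-library threads; the real component delivers the notification over a websocket, so the names and the queue below are purely illustrative:

```python
# Sketch of the notify-when-done pattern: a long-running task (e.g. CSV
# or report generation) runs in the background and the caller is notified
# on completion instead of blocking. In OMniLeads the notification travels
# over a websocket; a queue stands in for it here.
import threading
import queue

def run_in_background(task, notify):
    """Run `task` in a worker thread and pass its result to `notify`."""
    def worker():
        notify(task())
    thread = threading.Thread(target=worker)
    thread.start()
    return thread

notifications = queue.Queue()
worker = run_in_background(
    task=lambda: "report.csv ready",   # stands in for CSV/report generation
    notify=notifications.put,          # stands in for a websocket push
)
worker.join()
result = notifications.get()
print(result)  # -> report.csv ready
```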
  • Redis (https://gitlab.com/omnileads/omlredis): Redis is used for three very specific purposes. On the one hand, as a cache to store the recurring query results involved in the campaign and agent supervision views; on the other, as the DB for user presence and location; and finally, to store the *Asterisk* configuration (etc/asterisk/) as well as the configuration parameters involved in each module (campaigns, trunks, routes, IVR, etc.), replacing the native *Asterisk* alternative (*AstDB*).
_images/arq_omlredis.png
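As a rough picture of how campaign parameters might live in Redis in place of AstDB, the following sketch uses a plain dict as a stand-in for a Redis store; the key naming scheme is hypothetical, not the one OMLApp actually uses:

```python
# Sketch: campaign parameters provisioned as key/value pairs, as Redis is
# used in place of AstDB. The "OML:CAMP:<id>:<param>" key layout below is
# hypothetical, and a dict stands in for Redis.
fake_redis = {}

def provision_campaign(store, campaign_id, params):
    """Provision one key per campaign parameter (the OMLApp side)."""
    for name, value in params.items():
        store[f"OML:CAMP:{campaign_id}:{name}"] = value

def lookup_param(store, campaign_id, name):
    """Query a single parameter in real time (the Asterisk side)."""
    return store.get(f"OML:CAMP:{campaign_id}:{name}")

provision_campaign(fake_redis, 7, {"wrapup_time": "5", "recording": "yes"})
print(lookup_param(fake_redis, 7, "recording"))  # -> yes
```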
  • PostgreSQL (https://gitlab.com/omnileads/omlpgsql): PostgreSQL is the SQL engine used by OMniLeads. All system reports and metrics are materialized from it, and it stores all configuration information that must persist over time. It receives connections on TCP port 5432 from OMLApp (read/write) and from Asterisk (log writes).
_images/arq_omlpgsql.png
  • WombatDialer (https://manuals.loway.ch/WD_UserManual-chunked/ch07.html): To run predictive dialing campaigns, OMniLeads uses this third-party software. The dialer exposes an API on TCP 8080, to which OMLApp connects to provision campaigns and contacts, while WombatDialer in turn uses the AMI interface of the Asterisk component to generate automatic outgoing calls and to check the status of the agents in each campaign. This component uses its own *MySQL* engine to operate.
_images/arq_omlwd.png

Deploy and environment variables

Having covered the function of each component and its interactions at the networking level, we now address the *deploy* process.

Each component has a bash script and an Ansible playbook that materialize the component, either on a dedicated Linux host or coexisting with other components on the same host.

This is possible because the Ansible playbook can be invoked from the bash script first_boot_installer.tpl, when the latter acts as the provisioner of a dedicated Linux host hosting the component within a cluster, or be imported by the Ansible playbook of the OMLApp component when deploying several components on the same host where the OMLApp application runs.

_images/arq_deploy_cluster_aio.png

Therefore, each component can either live on a standalone host or coexist with OMLApp on the same host. Both possibilities are covered by the installation method.

The installation method is entirely based on environment variables generated at deploy time, whose purpose, among other things, is to hold the network addresses and ports each component needs to interact with its peers. That is, the configuration files of every OMniLeads component locate their peers by invoking OS environment variables. For example, the Asterisk component points its AGIs to the envvars $REDIS_HOST and $REDIS_PORT when opening a connection to Redis.
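As a minimal sketch of this pattern, a component can resolve its Redis peer from the environment; the fallback defaults and the example hostname below are illustrative, not OMniLeads defaults:

```python
# Sketch: resolving a peer component's address from environment variables,
# as each OMniLeads component does at startup. Defaults are illustrative.
import os

def redis_endpoint():
    """Return (host, port) for Redis, taken from the OS environment."""
    host = os.environ.get("REDIS_HOST", "127.0.0.1")
    port = int(os.environ.get("REDIS_PORT", "6379"))
    return host, port

# Values like these would be exported by the deploy tooling:
os.environ["REDIS_HOST"] = "redis.example.internal"  # hypothetical host
os.environ["REDIS_PORT"] = "6379"
print(redis_endpoint())  # -> ('redis.example.internal', 6379)
```

Because only the environment changes, the same configuration code works whether Redis lives on the same host, on another host in a cluster, or in a container.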

Thanks to environment variables, compatibility between the bare-metal and Docker container approaches is achieved; that is, we can deploy OMniLeads by installing all components on a single host, distributing them across several hosts, or running them directly in Docker containers.

_images/arq_envvars_deploy.png

Because configuration parameters are provisioned via environment variables, and because the data that must persist (call recordings and the PostgreSQL DB) can always be kept on resources mounted on the file system of the Linux host running each component, working with immutable infrastructure becomes an option. We can easily destroy and recreate each component without losing important data when resizing components or rolling out updates: we simply discard the host running one version and deploy a new one with the latest upgrade.

This gives us the potential of the infrastructure-as-code or immutable-infrastructure paradigm, as approached by the new generations of IT teams operating within the DevOps culture. This approach is optional, since updates can also be managed in the traditional way, without destroying the instance that hosts the component.

_images/arq_envvars_deploy_2.png

The potential of turning to cloud-init as a provisioner

Cloud-init is a software package that automates the initialization of cloud instances during system startup. You can configure cloud-init to perform a variety of tasks. Some examples of tasks that cloud-init can perform are:

  • Configure a hostname.
  • Installing packages on an instance.
  • Running provisioning scripts.
  • Override the default behavior of the virtual machine.
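A minimal cloud-init user-data sketch covering tasks of this kind could look as follows; the hostname, package list, and script path are illustrative, not OMniLeads defaults:

```yaml
#cloud-config
# Illustrative user-data: all values below are examples.
hostname: oml-acd-01
packages:
  - git
  - curl
runcmd:
  # Run a provisioning script on the first boot (path is hypothetical):
  - bash /root/first_boot_installer.sh
```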

As of OMniLeads 1.16, each component includes a script called first_boot_installer.tpl. This script can be invoked at the cloud-init level, so that a fresh installation of the component is launched on the first boot of the operating system.

As we have mentioned, it is possible to invoke the script at cloud VM creation time.

Another option is to render it as a Terraform template, to be launched as the provisioner of each instance created from Terraform.

Beyond the specific component, first_boot_installer.tpl has the following purposes:

  • Install some packages.
  • Adjust some other configurations of the virtual machine.
  • Determine network parameters of the new Linux instance.
  • Run the Ansible playbook that installs the component on the operating system.

The first three steps are skipped when the component is installed from OMLApp and thus shares its host.