Installation

The EVA ICS Machine Learning kit server must be installed on a machine which runs an EVA ICS v4 node. It is recommended to install the server on a dedicated node if possible.

Note

The ML kit server requires EVA ICS 4.0.1. It is also recommended to update the system to build 2023032901 or newer.

The ML kit server works on x86-64 Linux only.

Downloading/updating

The server binaries can be downloaded manually from https://pub.bma.ai/eva-mlkit/server/. There is also an installer script which automatically downloads and extracts the latest stable server tarball:

curl https://pub.bma.ai/eva-mlkit/server/install | sh

The script installs the server binaries into the /opt/eva4/mlkit/ folder. To customize the target path, execute the installer as follows:

curl https://pub.bma.ai/eva-mlkit/server/install | TARGET_DIR=/path/to/folder sh

The script can also update an existing installation; run the same command as above.

After the server is updated, all ML kit service instances must be restarted manually, e.g. with eva-shell:

eva svc restart <SVC_ID>

Installing server license key

The EVA ICS Machine Learning kit server is not included in EVA ICS Enterprise and must have its own product key deployed. The license key can be deployed with the following command:

/opt/eva4/sbin/eva-registry-cli set eva/user_data/mlkit/license - --type json < license-file.json
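Since the key is piped into the registry as JSON, it may be worth checking that the license file parses as valid JSON before deploying it. A minimal sketch using Python's standard json.tool (license-file.json is the same file as in the command above):

```shell
# Check that the license file is valid JSON before piping it into
# the registry; python3 -m json.tool exits non-zero on parse errors.
if python3 -m json.tool license-file.json > /dev/null; then
    echo "license-file.json: valid JSON"
else
    echo "license-file.json: invalid JSON" >&2
fi
```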

When there are less than 30 days left before the expiration date, deployed instances start writing warning messages to the logs every hour.

A new license can be imported on the fly; no service/node restart is required.

The license expiration UNIX timestamp can be obtained with the following command:

/opt/eva4/sbin/eva-registry-cli get-field eva/user_data/mlkit/license expires
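The returned value is a plain UNIX timestamp. On Linux it can be turned into a human-readable date, and the remaining days can be computed, with standard shell tools (a sketch; 1735689600 below is a sample value, not a real license timestamp):

```shell
# Sample expiration timestamp (substitute the get-field output here)
EXPIRES=1735689600
# Human-readable expiration date in UTC
date -u -d "@${EXPIRES}" +%Y-%m-%d
# Whole days left until expiration (negative if already expired)
echo $(( (EXPIRES - $(date +%s)) / 86400 ))
```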

Creating/deploying service instances

The ML kit server is a standard EVA ICS v4 service and can be created as:

eva svc create eva.svc.ml /opt/eva4/mlkit/svc-tpl-mlsrv.yml

where the service configuration template is:

command: /opt/eva4/mlkit/eva-mlsrv
bus:
  path: var/bus.ipc
config:
  allow_push_formats:
    - arrow.stream
    - arrow.file
    - csv
  # allow push via the following services
  allow_push_svcs:
    - eva.db.default
  # dedicated HTTP API host/port
  listen: 0.0.0.0:8811
  # max response data size, per request
  response_max_size: 1_000_000_000
  # default RPC clients pool for requests to a particular db svc
  default_rpc_clients: 4
  ## if a front-end server or TLS terminator is used
  #real_ip_header: X-Real-IP
  # HMI service, used for authentication, required
  hmi_svc: eva.hmi.default
  # data map
  data_map:
    - oids:
      - '#'
      svc: eva.db.default
      # max clients to the database service
      clients: 10
timeout:
  default: 120
user: eva
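The data_map section may contain multiple entries, each routing a set of OID masks to its own database service. A sketch splitting sensors off to a dedicated history database (eva.db.history1 is a hypothetical service id, not part of the default setup):

```yaml
data_map:
  # sensors go to a dedicated history database (hypothetical svc id)
  - oids:
      - 'sensor:#'
    svc: eva.db.history1
    clients: 10
  # everything else goes to the default database service
  - oids:
      - '#'
    svc: eva.db.default
    clients: 10
```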

The server works with databases which are connected to the local node bus via database services. The default services which are currently supported:

Using front-end web server

With a front-end web server, both the HMI service and the ML kit server can be served on the same URL/port.

See Front-end server for HMI service for general information on how to use NGINX as the front-end web server.

To include the ML kit server, add the following lines to the NGINX web site configuration:

upstream eva-mlkit {
    server 127.0.0.1:8811;
}

server {
    # ...
    location /ml/ {
        gzip                on;
        gzip_min_length     8192;
        gzip_proxied        no-cache no-store private expired auth;
        gzip_types          application/vnd.apache.arrow.stream text/csv;
        gzip_vary           on;
        proxy_buffers       16 16k;
        proxy_buffer_size   16k;
        proxy_busy_buffers_size 240k;
        proxy_pass          http://eva-mlkit;
        # a few variables for the back-end; in fact, HMI requires X-Real-IP only
        proxy_set_header    X-Host $host;
        proxy_set_header    Host $host;
        proxy_set_header    X-Real-IP $remote_addr;
        proxy_set_header    X-Forwarded-Proto https;
        proxy_set_header    X-Frontend "nginx";
    }
}