App Service Configurable
Getting Started
App Service Configurable is provided as an easy way to get started with processing data flowing through EdgeX. This service leverages the App Functions SDK and provides a way for developers to use configuration instead of having to compile standalone services to utilize built-in functions in the SDK. Please refer to the Available Configurable Pipeline Functions section below for the full list of built-in functions that can be used in the configurable pipeline.
To get started with App Service Configurable, you'll want to start by determining which functions are required in your pipeline. Using a simple example, let's assume you wish to use the following functions from the SDK:
- FilterByDeviceName - to filter events for a specific device.
- Transform - to transform the data to XML
- HTTPExport - to send the data to an HTTP endpoint that takes our XML data
Once the functions have been identified, we'll go ahead and build out the configuration in the configuration.yaml file under the Writable.Pipeline section.
Example - Writable.Pipeline
Writable:
  Pipeline:
    ExecutionOrder: "FilterByDeviceName, Transform, HTTPExport"
    Functions:
      FilterByDeviceName:
        Parameters:
          DeviceNames: "Random-Float-Device, Random-Integer-Device"
      Transform:
        Parameters:
          Type: "xml"
      HTTPExport:
        Parameters:
          Method: "post"
          MimeType: "application/xml"
          Url: "http://my.api.net/edgexdata"
The first line of note is ExecutionOrder: "FilterByDeviceName, Transform, HTTPExport". This specifies the order in which to execute your functions. Each function specified here must also be placed in the Functions: section.
Next, each function and its required information is listed. Each function typically has associated Parameters that must be configured to properly execute the function, as designated by Parameters: under {FunctionName}. The parameters required by each function can be found in the Available Configurable Pipeline Functions section below.
Note
By default, the configuration provided is set to use the EdgexMessageBus trigger. This means you must have EdgeX running with devices sending data in order to trigger the pipeline. You can also change the trigger to be HTTP. For more details, see the Triggers documentation.
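For illustration, a minimal sketch of the trigger configuration (edgex-messagebus is the SDK's default trigger type; switching the value to http uses the HTTP trigger instead):

Trigger:
  Type: "edgex-messagebus" # default; change to "http" to trigger the pipeline via REST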
That's it! Now we can run/deploy this service and the functions pipeline will process the data with the functions we've defined.
Pipeline Per Topics
The above pipeline configuration in the Getting Started section is the preferred approach if your use case only requires a single functions pipeline. For use cases that require multiple functions pipelines in order to process the data differently based on the profile, device or source for the Event, there is the Pipeline Per Topics feature. This feature allows multiple pipelines to be configured in the Writable.Pipeline.PerTopicPipelines section. This section is a map of pipelines. The map key must be unique, but isn't otherwise used, so it can be any value. Each pipeline is defined by the following configuration settings:
- Id - The unique ID given to the pipeline
- Topics - Comma separated list of topics that control when the pipeline is executed. See Pipeline Per Topics for details on using wildcards in the topic.
- ExecutionOrder - The list of functions, in order, that the pipeline will execute. Same as ExecutionOrder in the Getting Started example above.
Example - Writable.Pipeline.PerTopicPipelines
In this example, Events from the device Random-Float-Device are transformed to JSON and then HTTP exported. At the same time, Events for the source Int8 are transformed to XML and then HTTP exported to the same endpoint. Note the custom naming for TransformJson and TransformXml. This takes advantage of the Multiple Instances of a Function capability described below.
Writable:
  Pipeline:
    PerTopicPipelines:
      float:
        Id: float-pipeline
        Topics: "edgex/events/device/+/Random-Float-Device/#"
        ExecutionOrder: "TransformJson, HTTPExport"
      int8:
        Id: int8-pipeline
        Topics: "edgex/events/device/+/+/+/Int8"
        ExecutionOrder: "TransformXml, HTTPExport"
    Functions:
      FilterByDeviceName:
        Parameters:
          DeviceNames: "Random-Float-Device, Random-Integer-Device"
      TransformJson:
        Parameters:
          Type: "json"
      TransformXml:
        Parameters:
          Type: "xml"
      HTTPExport:
        Parameters:
          Method: "post"
          MimeType: "application/xml"
          Url: "http://my.api.net/edgexdata"
Note
The Pipeline Per Topics feature is targeted for the EdgeX MessageBus and External MQTT triggers, but can be used with the Custom or HTTP triggers. When used with the HTTP trigger the incoming topic will always be blank, so the pipeline's topics must contain a single topic set to the # wildcard so that all messages received are processed by the pipeline.
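For instance, a minimal sketch of such a catch-all pipeline for the HTTP trigger (the map key and Id shown here are illustrative):

Writable:
  Pipeline:
    PerTopicPipelines:
      all: # map key is arbitrary
        Id: all-pipeline # hypothetical pipeline ID
        Topics: "#" # single wildcard topic so every received message is processed
        ExecutionOrder: "TransformXml, HTTPExport"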
Environment Variable Overrides For Docker
EdgeX services no longer have Docker-specific profiles. They now rely on environment variable overrides in the docker compose files for the Docker-specific differences.
Example - Environment settings required in the compose files for App Service Configurable
EDGEX_PROFILE: [target profile]
SERVICE_HOST: [service's network host name]
EDGEX_SECURITY_SECRET_STORE: "false" # only needed to disable, as the default is true
CLIENTS_CORE_COMMAND_HOST: edgex-core-command
CLIENTS_CORE_DATA_HOST: edgex-core-data
CLIENTS_CORE_METADATA_HOST: edgex-core-metadata
CLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications
CLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler
DATABASE_HOST: edgex-redis
MESSAGEQUEUE_HOST: edgex-redis
REGISTRY_HOST: edgex-core-consul
TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis
TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis
Example - Docker compose entry for App Service Configurable in no-secure compose file
app-rules-engine:
  container_name: edgex-app-rules-engine
  depends_on:
    - consul
    - data
  environment:
    CLIENTS_CORE_COMMAND_HOST: edgex-core-command
    CLIENTS_CORE_DATA_HOST: edgex-core-data
    CLIENTS_CORE_METADATA_HOST: edgex-core-metadata
    CLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications
    CLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler
    DATABASE_HOST: edgex-redis
    EDGEX_PROFILE: rules-engine
    EDGEX_SECURITY_SECRET_STORE: "false"
    MESSAGEQUEUE_HOST: edgex-redis
    REGISTRY_HOST: edgex-core-consul
    SERVICE_HOST: edgex-app-rules-engine
    TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis
    TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis
  hostname: edgex-app-rules-engine
  image: edgexfoundry/app-service-configurable:2.0.0
  networks:
    edgex-network: {}
  ports:
    - 127.0.0.1:59701:59701/tcp
  read_only: true
  security_opt:
    - no-new-privileges:true
  user: 2002:2001
Note
App Service Configurable is designed to be run multiple times, each with a different profile. This is why, in the above example, the name edgex-app-rules-engine is used for the instance running the rules-engine profile.
Deploying Multiple Instances using profiles
App Service Configurable was designed to be deployed as multiple instances for different purposes. Since the function pipeline is specified in the configuration.yaml file, we can use this as a way to run each instance with a different function pipeline. App Service Configurable does not have the standard default configuration at /res/configuration.yaml. This default configuration has been moved to the sample profile. This forces you to specify the profile for the configuration you would like to run. The profile is specified using the -p/--profile=[profilename] command line option or the EDGEX_PROFILE=[profilename] environment variable override. The profile name selected is used in the service key (app-[profile name]) to make each instance unique, e.g. app-sample when specifying sample as the profile.
Note
If you need to run multiple instances with the same profile, e.g. http-export, but configured differently, you will need to override the service key with a custom name for one or more of the services. This is done with the -sk/--serviceKey command-line option or the EDGEX_SERVICE_KEY environment variable. See the Command-line Options and Environment Overrides sections for more detail.
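For illustration, a compose fragment for a hypothetical second http-export instance might override both the service key and the host name (the names shown here are illustrative, not defaults):

environment:
  EDGEX_PROFILE: http-export
  EDGEX_SERVICE_KEY: app-http-export-2 # hypothetical custom service key
  SERVICE_HOST: edgex-app-http-export-2 # hypothetical network host name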
Note
Functions can be declared in a profile but not used in the pipeline ExecutionOrder, allowing them to be added to the pipeline ExecutionOrder later at runtime if needed.
The following profiles and their purposes are provided with App Service Configurable.
rules-engine
Profile used to push Event messages to the Rules Engine via the Redis Pub/Sub Message Bus. This is used in the default docker compose files for the app-rules-engine service.
One can optionally add the Filter function via environment overrides:
- WRITABLE_PIPELINE_EXECUTIONORDER: "FilterByDeviceName, HTTPExport"
- WRITABLE_PIPELINE_FUNCTIONS_FILTERBYDEVICENAME_PARAMETERS_DEVICENAMES: "[comma separated list]"
There are many optional functions and parameters provided in this profile. See the complete profile for more details.
http-export
Starter profile used for exporting data via HTTP. Requires further configuration, which can easily be accomplished using environment variable overrides.
Required:
- WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: [Your URL]
There are many more optional functions and parameters provided in this profile. See the complete profile for more details.
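For example, a minimal set of compose environment overrides for this profile might look like the following sketch (the URL is taken from the examples in this document; the host name is hypothetical):

environment:
  EDGEX_PROFILE: http-export
  SERVICE_HOST: edgex-app-http-export # hypothetical network host name
  WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: "http://my.api.net/edgexdata"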
metrics-influxdb
Starter profile used for exporting telemetry data from other EdgeX services to InfluxDB via HTTP export. This profile configures the service to receive telemetry data from other services, transform it to Line Protocol syntax, batch the data and then export it to an InfluxDB service via HTTP. Requires further configuration which can easily be accomplished using environment variable overrides.
Required:
- WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: [Your InfluxDB URL]
    - Example value: "http://localhost:8086/api/v2/write?org=metrics&bucket=edgex&precision=ns"
- WRITABLE_INSECURESECRETS_INFLUXDB_SECRETS_TOKEN: [Your InfluxDB Token]
    - Example value: "Token 29ER8iMgQ5DPD_icTnSwH_77aUhSvD0AATkvMM59kZdIJOTNoJqcP-RHFCppblG3wSOb7LOqjp1xubA80uaWhQ=="
    - If using secure mode, store the token in the service's secret store via POST to the service's /secret endpoint
Example JSON to post to /secret endpoint
{
  "apiVersion": "v2",
  "secretName": "influxdb",
  "secretData": [
    {
      "key": "Token",
      "value": "Token 29ER8iMgQ5DPD_icTnSwH_77aUhSvD0AATkvMM59kZdIJOTNoJqcP-RHFCppblG3wSOb7LOqjp1xubA80uaWhQ=="
    }
  ]
}
Optional Additional Tags:
- WRITABLE_PIPELINE_FUNCTIONS_TOLINEPROTOCOL_PARAMETERS_TAGS: <your additional tags>
    - Currently set to empty string
    - Example value: "tag1:value1, tag2:value2"
Optional Batching parameters (see Batch function for more details):
- WRITABLE_PIPELINE_FUNCTIONS_BATCH_PARAMETERS_MODE: <your batch mode>
    - Currently set to "bytimecount"
    - Valid values are "bycount", "bytime" or "bytimecount"
- WRITABLE_PIPELINE_FUNCTIONS_BATCH_PARAMETERS_BATCHTHRESHOLD: <your batch threshold count>
    - Currently set to 100
- WRITABLE_PIPELINE_FUNCTIONS_BATCH_PARAMETERS_TIMEINTERVAL: <your batch time interval>
    - Currently set to "60s"
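Putting these together, a sketch of the required overrides as a compose fragment (values are taken from the examples above; the host name is hypothetical, and the insecure token override only applies when not running in secure mode):

environment:
  EDGEX_PROFILE: metrics-influxdb
  SERVICE_HOST: edgex-app-metrics-influxdb # hypothetical network host name
  WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: "http://localhost:8086/api/v2/write?org=metrics&bucket=edgex&precision=ns"
  WRITABLE_INSECURESECRETS_INFLUXDB_SECRETS_TOKEN: "Token 29ER8iMgQ5DPD_icTnSwH_77aUhSvD0AATkvMM59kZdIJOTNoJqcP-RHFCppblG3wSOb7LOqjp1xubA80uaWhQ=="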
mqtt-export
Starter profile used for exporting data via MQTT. Requires further configuration, which can easily be accomplished using environment variable overrides.
Required:
- WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS: [Your Broker Address]
There are many optional functions and parameters provided in this profile. See the complete profile for more details.
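For example, a minimal compose fragment for this profile might look like the following sketch (the broker address is reused from the MQTTExport example below; the host name is hypothetical):

environment:
  EDGEX_PROFILE: mqtt-export
  SERVICE_HOST: edgex-app-mqtt-export # hypothetical network host name
  WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS: "tcps://my-broker-host.com:8883"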
sample
Sample profile with all available functions declared and a sample pipeline. Provided as a sample that can be copied and modified to create new custom profiles. See the complete profile for more details
functional-tests
Profile used for the TAF functional testing
external-mqtt-trigger
Profile used for the TAF functional testing of external MQTT Trigger
What if my input data isn't an EdgeX Event?
The default TargetType for data flowing into the functions pipeline is an EdgeX Event DTO. There are cases when this incoming data might not be an EdgeX Event DTO. There are two settings that configure the TargetType to non-Event data.
Raw TargetType
In these cases the Pipeline can be configured using TargetType: "raw" to set the TargetType to be a byte array/slice, i.e. []byte. The first function in the pipeline must then be one that can handle []byte data. The compression, encryption and export functions are examples of pipeline functions that take []byte input data.
Example - Configure the functions pipeline to compress, encrypt and then export the []byte data via HTTP
Writable:
  Pipeline:
    TargetType: "raw"
    ExecutionOrder: "Compress, Encrypt, HTTPExport"
    Functions:
      Compress:
        Parameters:
          Algorithm: "gzip"
      Encrypt:
        Parameters:
          Algorithm: "aes256"
          SecretName: "aes"
          SecretValueKey: "key"
      HTTPExport:
        Parameters:
          Method: "post"
          Url: "http://my.api.net/edgexdata"
          MimeType: "application/text"
If, along with this pipeline configuration, you also configured the Trigger to be the http trigger, you could then send any data to app-service-configurable's /api/v3/trigger endpoint and have it compressed, encrypted and sent to your configured URL above.
Example - HTTP Trigger configuration
Trigger:
  Type: "http"
Metric TargetType
Setting the TargetType to "metric" will cause the TargetType to be &dtos.Metric{} and is meant to be used in conjunction with the ToLineProtocol function. See the ToLineProtocol section below for more details. In addition, the Trigger SubscribeTopics must be set to "edgex/telemetry/#" so that the function receives the metric data from the other services.
Example - Metric TargetType
Writable:
  Pipeline:
    TargetType: "metric"
    ExecutionOrder: "ToLineProtocol, ..."
    ...
    Functions:
      ToLineProtocol:
        Parameters:
          Tags: "" # optional comma separated list of additional tags to add to the metric in the form "tag:value,..."
    ...
Trigger:
  SubscribeTopics: "edgex/telemetry/#"
Multiple Instances of a Function
Multiple instances of the same configurable pipeline function can now be specified, configured differently and used together in the functions pipeline. Previously the function names specified in the Writable.Pipeline.Functions section had to match a built-in configurable pipeline function name exactly. Now the names specified only need to start with a built-in configurable pipeline function name. See the HTTPExport section below for an example.
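For instance, the TransformJson and TransformXml functions from the Pipeline Per Topics example above are both instances of the built-in Transform function, configured differently:

Functions:
  TransformJson: # name starts with the built-in function name "Transform"
    Parameters:
      Type: "json"
  TransformXml: # second instance of "Transform" with its own parameters
    Parameters:
      Type: "xml"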
Available Configurable Pipeline Functions
Below are the functions that are available to use in the configurable functions pipeline (Writable.Pipeline) section of the configuration. The function names below can be added to the Writable.Pipeline.ExecutionOrder setting (comma separated list) and must also be present or added to the Writable.Pipeline.Functions section as {FunctionName}. Each function also has a {FunctionName}.Parameters: section where the function's parameters are configured. Please refer to the Getting Started section above for an example.
Note
The Parameters section for each function is a key/value map of string values. So even though a parameter is referred to as an Integer or Boolean, it has to be specified as a valid string representation, e.g. "20" or "true".
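For example, the Integer and Boolean parameters of the Batch function (documented below) must be quoted like so:

Batch:
  Parameters:
    BatchThreshold: "30" # Integer value specified as a string
    IsEventData: "false" # Boolean value specified as a string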
Please refer to the function's detailed documentation by clicking the function name below.
AddTags
Parameters
- tags - String containing a comma separated list of tag key/value pairs. The tag key/value pairs are colon separated.
Example
AddTags:
  Parameters:
    tags: "GatewayId:HoustonStore000123,Latitude:29.630771,Longitude:-95.377603"
Batch
Parameters
- Mode - The batch mode to use. Can be 'bycount', 'bytime' or 'bytimecount'
- BatchThreshold - Number of items to batch before sending batched items to the next function in the pipeline. Used with 'bycount' and 'bytimecount' modes
- TimeInterval - Amount of time to batch before sending batched items to the next function in the pipeline. Used with 'bytime' and 'bytimecount' modes
- IsEventData - If true, specifies that the data being batched is Events and to un-marshal the batched data to []Event prior to returning the batched data. By default the batched data returned is [][]byte
- MergeOnSend - If true, specifies that the data being batched is to be merged to a single []byte prior to returning the batched data. By default the batched data returned is [][]byte
Example
Batch:
  Parameters:
    Mode: "bytimecount" # can be "bycount", "bytime" or "bytimecount"
    BatchThreshold: "30"
    TimeInterval: "60s"
    IsEventData: "false"
    MergeOnSend: "false"
or
Batch:
  Parameters:
    Mode: "bytimecount" # can be "bycount", "bytime" or "bytimecount"
    BatchThreshold: "30"
    TimeInterval: "60s"
    IsEventData: "true"
    MergeOnSend: "false"
or
Batch:
  Parameters:
    Mode: "bytimecount" # can be "bycount", "bytime" or "bytimecount"
    BatchThreshold: "30"
    TimeInterval: "60s"
    IsEventData: "false"
    MergeOnSend: "true"
Compress
Parameters
- Algorithm - Compression algorithm to use. Can be 'gzip' or 'zlib'
Example
Compress:
  Parameters:
    Algorithm: "gzip"
Encrypt
Parameters
- Algorithm - AES256
- SecretName - (required for AES256) Name of the secret in the Secret Store where the encryption key is located.
- SecretValueKey - (required for AES256) Key of the secret data for the encryption key in the secret's data.
Example
# Encrypt with key pulled from the Secret Store
Encrypt:
  Parameters:
    Algorithm: "aes256"
    SecretName: "aes"
    SecretValueKey: "key"
FilterByDeviceName
Parameters
- DeviceNames - Comma separated list of device names for filtering
- FilterOut - Boolean indicating if the data matching the device names should be filtered out or filtered for.
Example
FilterByDeviceName:
  Parameters:
    DeviceNames: "Random-Float-Device,Random-Integer-Device"
    FilterOut: "false"
FilterByProfileName
Parameters
- ProfileNames - Comma separated list of profile names for filtering
- FilterOut - Boolean indicating if the data matching the profile names should be filtered out or filtered for.
Example
FilterByProfileName:
  Parameters:
    ProfileNames: "Random-Float-Device, Random-Integer-Device"
    FilterOut: "false"
FilterByResourceName
Parameters
- ResourceNames - Comma separated list of reading resource names for filtering
- FilterOut - Boolean indicating if the readings matching the resource names should be filtered out or filtered for.
Example
FilterByResourceName:
  Parameters:
    ResourceNames: "Int8, Int64"
    FilterOut: "true"
FilterBySourceName
Parameters
- SourceNames - Comma separated list of source names for filtering. Source name is either the device command name or the resource name that created the Event
- FilterOut - Boolean indicating if the data matching the source names should be filtered out or filtered for.
Example
FilterBySourceName:
  Parameters:
    SourceNames: "Bool, BoolArray"
    FilterOut: "false"
HTTPExport
Parameters
- Method - HTTP method to use. Can be post or put
- Url - HTTP endpoint to POST/PUT the data.
- MimeType - Optional mime type for the data. Defaults to application/json if not set.
- PersistOnError - Indicates to persist the data if the POST fails. Store and Forward must also be enabled if this is set to "true".
- ContinueOnSendError - For chained multi destination exports, if true continues after a send error so the next export function executes.
- ReturnInputData - For chained multi destination exports, if true passes the input data to the next export function.
- HeaderName - (Optional) Name of the header key to add to the HTTP header
- SecretName - (Optional) Name of the secret in the Secret Store where the header value is stored.
- SecretValueKey - (Optional) Key for the header value in the secret data.
Example
# Simple HTTP Export
HTTPExport:
  Parameters:
    Method: "post"
    MimeType: "application/xml"
    Url: "http://my.api.net/edgexdata"

# HTTP Export with secret header data pulled from the Secret Store
HTTPExport:
  Parameters:
    Method: "post"
    MimeType: "application/xml"
    Url: "http://my.api.net/edgexdata"
    HeaderName: "MyApiKey"
    SecretName: "http"
    SecretValueKey: "apikey"

# HTTP Export to multiple destinations
Writable:
  Pipeline:
    ExecutionOrder: "HTTPExport1, HTTPExport2"
    Functions:
      HTTPExport1:
        Parameters:
          Method: "post"
          MimeType: "application/xml"
          Url: "http://my.api1.net/edgexdata2"
          ContinueOnSendError: "true"
          ReturnInputData: "true"
      HTTPExport2:
        Parameters:
          Method: "put"
          MimeType: "application/xml"
          Url: "http://my.api2.net/edgexdata2"
JSONLogic
Parameters
- Rule - The JSON formatted rule that will be executed on the data by JSONLogic
Example
JSONLogic:
  Parameters:
    Rule: "{ \"and\" : [{\"<\" : [{ \"var\" : \"temp\" }, 110 ]}, {\"==\" : [{ \"var\" : \"sensor.type\" }, \"temperature\" ]} ] }"
MQTTExport
Parameters
- BrokerAddress - URL specifying the address of the MQTT Broker
- Topic - Topic to publish the data
- ClientId - Id to use when connecting to the MQTT Broker
- Qos - MQTT Quality of Service (QOS) setting to use (0, 1 or 2). Please refer here for more details on QOS values
- AutoReconnect - Boolean specifying if reconnect should be automatic if the connection to the MQTT broker is lost
- Retain - Boolean specifying if the MQTT Broker should save the last message published as the "Last Good Message" on that topic.
- SkipVerify - Boolean indicating if the certificate verification should be skipped.
- PersistOnError - Indicates to persist the data if the POST fails. Store and Forward must also be enabled if this is set to "true".
- AuthMode - Mode of authentication to use when connecting to the MQTT Broker
    - none - No authentication required
    - usernamepassword - Use username and password authentication. The Secret Store (Vault or InsecureSecrets) must contain the username and password secrets.
    - clientcert - Use Client Certificate authentication. The Secret Store (Vault or InsecureSecrets) must contain the clientkey and clientcert secrets.
    - cacert - Use CA Certificate authentication. The Secret Store (Vault or InsecureSecrets) must contain the cacert secret.
- SecretName - Name of the secret in the Secret Store where authentication secrets are stored.
Note
AuthMode=cacert is only needed when client authentication (e.g. usernamepassword) is not required, but a CA Cert is needed to validate the broker's SSL/TLS cert.
Example
# Simple MQTT Export
MQTTExport:
  Parameters:
    BrokerAddress: "tcps://localhost:8883"
    Topic: "mytopic"
    ClientId: "myclientid"

# MQTT Export with auth credentials pulled from the Secret Store
MQTTExport:
  Parameters:
    BrokerAddress: "tcps://my-broker-host.com:8883"
    Topic: "mytopic"
    ClientId: "myclientid"
    Qos: "2"
    AutoReconnect: "true"
    Retain: "true"
    SkipVerify: "false"
    PersistOnError: "true"
    AuthMode: "usernamepassword"
    SecretName: "mqtt"
SetResponseData
Parameters
- ResponseContentType - (Optional) Used to specify the content-type header for the response
Example
SetResponseData:
  Parameters:
    ResponseContentType: "application/json"
Transform
Parameters
- Type - Type of transformation to perform. Can be 'xml' or 'json'
Example
Transform:
  Parameters:
    Type: "xml"
ToLineProtocol
Parameters
- Tags - (Optional) Comma separated list of additional tags to add to the metric in the form "tag:value,..."
Example
ToLineProtocol:
  Parameters:
    Tags: "" # optional comma separated list of additional tags to add to the metric in the form "tag:value,..."
Note
The TargetType setting must be set to "metric" when using this function. See the Metric TargetType section above for more details.
WrapIntoEvent
Parameters
- ProfileName - Profile name to use for the new Event
- DeviceName - Device name to use for the new Event
- ResourceName - Resource name to use for the new Event's SourceName and the Reading's ResourceName
- ValueType - Value type to use for the new Event Reading's value type
- MediaType - Media type to use for the new Event Reading. Required when the value type is Binary
Example
WrapIntoEvent:
  Parameters:
    ProfileName: "MyProfile"
    DeviceName: "MyDevice"
    ResourceName: "SomeResource"
    ValueType: "String"
    MediaType: "" # Required only when ValueType=Binary