Kylo Application Properties

Below you can find all the properties used by the kylo-services application.

Common Configuration Properties

Property Default Value Description
spring.profiles.include native,nifi-v1.2,auth-kylo,auth-file,search-esr,jms-activemq
Profiles that should be used. Different profiles will enable certain behaviors in Kylo.
Indicate the NiFi version you are using with the correct spring profile.
- For NiFi 1.0.x: nifi-v1
- For NiFi 1.1.x: nifi-v1.1
- For NiFi 1.2.x or 1.3.x: nifi-v1.2
Additionally, you can split properties into separate files using the naming convention application-<ProfileName>.
Then add the ProfileName to this profiles property to load and override those properties (see the example after this table).
server.port 8420 The port Kylo runs on
liquibase.enabled true Liquibase allows Kylo to automatically update the database to ensure the Kylo metastore is current. If this is set to false you will need to manually run any SQL scripts when upgrading Kylo.
liquibase.change-log classpath:com/thinkbiganalytics/db/master.xml The location of the Liquibase scripts
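For example, a Kylo installation running against NiFi 1.1.x that authenticates against LDAP might include the following profiles (a sketch; adjust the remaining profiles to match your installation):

spring.profiles.include=native,nifi-v1.1,auth-kylo,auth-ldap,search-esr,jms-activemq

Properties placed in a file named application-auth-ldap.properties next to application.properties would then only be loaded while the auth-ldap profile is active and would override any values of the same name.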

Kylo Operations

Property Default Value Description
kylo.cluster.jgroupsConfigFile  
Only for Clustered Kylo
The name of the kylo jgroups configuration file (i.e. ‘kylo.cluster.jgroupsConfigFile=kylo-cluster-jgroups-config.xml’ )
kylo.feed.mgr.cleanup.timeout 60000 The amount of time to wait when removing feeds before issuing a timeout error. Sometimes it can take a while to remove a feed and its data. Increase this value if you need more time to clean up a feed.
kylo.ops.mgr.query.nifi.bulletins true If a failure event is detected, query NiFi for any related bulletins and add them to the job details
kylo.ops.mgr.stats.nifi.bulletins.mem.size 30
The maximum number of bulletins to store for streaming feed failures. If the statistics for a streaming feed detect a failure, Kylo will store any related NiFi bulletins in memory.
This is a rolling queue that keeps the last # of errors per feed
Since 0.8.3
kylo.ops.mgr.stats.nifi.bulletins.persist false
When getting aggregate statistics back for flows, if errors are detected Kylo will query NiFi in an attempt to capture matching bulletins.
By default this data is stored in memory. Setting this to true will store the data in the MySQL table
Since 0.8.3
kylo.provenance.retry.unregistered.events.enabled true
Only for Clustered Kylo
When Kylo is clustered, a provenance JMS message can sometimes arrive before the cluster notification has been sent to all nodes.
Set this to true to have Kylo retry and process JMS provenance events again when they do not initially match a Kylo feed.
Since 0.8.4
kylo.provenance.retry.unregistered.events.maxRetries 3
Only for Clustered Kylo
The number of times to retry unregistered provenance events from JMS that don't match a Kylo feed.
Since 0.8.4
kylo.provenance.retry.unregistered.events.waitTimeSec 5
Only for Clustered Kylo
The time in seconds to wait between retries of unregistered provenance events from JMS that don't match a Kylo feed.
Since 0.8.4
nifi.auto.align true When saving a feed, Kylo will auto-align processors in NiFi to keep the canvas clean and readable. You can set this property to false and manually align the processors via a REST endpoint.
nifi.flow.inspector.threads 1
When starting, Kylo will scan NiFi to get processors and connections. Usually 1 thread is sufficient for inspecting NiFi. Only under rare circumstances should you increase this.
Since 0.8.2.4 and 0.8.3.3
nifi.flow.max.retries 100 If Kylo fails to inspect the NiFi flows it will retry this many times.
nifi.flow.retry.wait.time.seconds 5 If Kylo fails to inspect the NiFi flows it will wait this many seconds before retrying.
nifi.remove.inactive.versioned.feeds true When Kylo saves a feed, it versions off the older feed. If the save is successful, nothing is running in the older version, and this property is true, Kylo will remove the old process group in NiFi
sla.cron.default 0 0/5 * 1/1 * ? * Interval at which SLAs should be checked. Default is every 5 minutes. Use http://cronmaker.com for help in creating a cron expression
kylo.template.remote-process-groups.enabled false
By default Kylo will allow you to use Remote Process Groups and reusable flows only in a clustered NiFi environment.
Set this property to true if you want to use Kylo with Remote Process Groups in a non-clustered NiFi environment.
This will provide additional options when registering the reusable template in Kylo.
Since 0.9.1
kylo.template.repository.default /opt/kylo/setup/data/templates/nifi-1.0
Default location where Kylo looks for templates and feeds. Kylo UI won’t be able to publish to this location.
Additional repositories can be setup using config/repositories.json where templates can be published.
Since 0.10.0
kylo.install.template.notification true
Display a notification when a new template version is available in the template repository.
Since 0.10.0
expire.repository.cache false
Set this to true when Kylo is running in clustered mode so that all nodes are aware when a template update is available.
Since 0.10.0
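As a sketch, a clustered Kylo installation that checks SLAs every 15 minutes and is slightly more patient when retrying unregistered provenance events might use the following (values are illustrative, not recommendations):

kylo.cluster.jgroupsConfigFile=kylo-cluster-jgroups-config.xml
kylo.provenance.retry.unregistered.events.enabled=true
kylo.provenance.retry.unregistered.events.maxRetries=5
kylo.provenance.retry.unregistered.events.waitTimeSec=10
sla.cron.default=0 0/15 * 1/1 * ? *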

Database Connection

Kylo

Property Default Value Description
spring.datasource.driverClassName org.mariadb.jdbc.Driver The database driver to use. The default is for MariaDB. Be sure this matches your database (i.e. Postgres: org.postgresql.Driver, MySQL: com.mysql.jdbc.Driver)
spring.datasource.maxActive 30 Max number of connections that can be allocated by the pool at a given time
spring.datasource.username   The username to connect to the database
spring.datasource.password   The database password
spring.datasource.testOnBorrow true true/false if the connection should be validated before connecting
spring.datasource.url jdbc:mysql://localhost:3306/kylo URL for the database
spring.datasource.validationQuery SELECT 1 Query used to validate the connection is valid.
spring.jpa.database-platform org.hibernate.dialect.MySQL5InnoDBDialect Platform to use. Default uses MySQL. Change this to the specific database platform (e.g. for Postgres use: org.hibernate.dialect.PostgreSQLDialect)
spring.jpa.open-in-view true true/false if spring should attempt to keep the connection open while in the view
metadata.datasource.driverClassName ${spring.datasource.driverClassName} Connection to Modeshape database. This defaults to the standard Kylo spring.datasource property
metadata.datasource.testOnBorrow true Connection to Modeshape database. This defaults to the standard Kylo spring.datasource property
metadata.datasource.url ${spring.datasource.url} Connection to Modeshape database. This defaults to the standard Kylo spring.datasource property
metadata.datasource.validationQuery SELECT 1 Query used to validate the connection is valid.
modeshape.datasource.driverClassName ${spring.datasource.driverClassName} Connection to Modeshape database. This defaults to the standard Kylo spring.datasource property
modeshape.datasource.url ${spring.datasource.url} Connection to Modeshape database. This defaults to the standard Kylo spring.datasource property
modeshape.index.dir /opt/kylo/modeshape/modeshape-local-index Directory on this node that will store the Modeshape index files. Indexing Modeshape speeds up access to the metadata. The indexes are defined in the metadata-repository.json file
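For example, pointing Kylo at a PostgreSQL database instead of the default MariaDB/MySQL setup would typically mean overriding at least the following (host, port, database name and credentials are placeholders for your environment):

spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://localhost:5432/kylo
spring.datasource.username=kylo
spring.datasource.password=<password>
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect

Because the metadata.datasource and modeshape.datasource properties default to the corresponding spring.datasource values, they pick up the same connection automatically unless you override them.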

Hive

Property Default Value Description
hive.datasource.driverClassName org.apache.hive.jdbc.HiveDriver The driver used to connect to Hive
hive.datasource.url jdbc:hive2://localhost:10000/default The Hive URL
hive.datasource.username   The username used to connect to Hive
hive.datasource.password   The password used to connect to Hive
hive.datasource.validationQuery show tables ‘test’ Validation Query for Hive.
hive.userImpersonation.enabled false true/false to indicate if user impersonation is enabled
hive.userImpersonation.cache.expiry.duration 4 Time units to wait before expiring cached catalog queries
hive.userImpersonation.cache.expiry.time-unit HOURS Can be one of the TimeUnit.java values, e.g. SECONDS, MINUTES, HOURS, DAYS
kerberos.hive.kerberosEnabled false true/false to indicate if Kerberos is enabled
hive.metastore.datasource.driverClassName org.mariadb.jdbc.Driver The driver used to connect to the Hive metastore
hive.metastore.datasource.url jdbc:mysql://localhost:3306/hive The Hive metastore location
hive.metastore.datasource.username   The username used to connect to the Hive metastore
hive.metastore.datasource.password   The password used to connect to the Hive metastore
hive.metastore.datasource.testOnBorrow true true/false if the connection should be validated before connecting
hive.metastore.datasource.validationQuery SELECT 1 Query used to validate the connection is valid.
kylo.feed.mgr.hive.target.syncColumnDescriptions true
true/false. If true, Kylo will update the target Hive table with comments matching the Kylo field column descriptions. If false, it will not add the comments to the Hive fields.
Since 0.9.1
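For instance, enabling Hive user impersonation with a 30 minute expiry for cached catalog queries could look like this (a sketch; impersonation must also be supported by your Hive configuration):

hive.userImpersonation.enabled=true
hive.userImpersonation.cache.expiry.duration=30
hive.userImpersonation.cache.expiry.time-unit=MINUTES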

JMS

More details about these properties can be found here: JMS Providers
Property Default Value Description
jms.activemq.broker.url tcp://localhost:61616 The JMS URL
jms.connections.concurrent 1-1
The MIN-MAX number of threads to have listening for events. By default it is set to 1 thread. For example, a value of 3-10 would create a minimum of 3 threads and, if needed, up to 10 threads (see the example after this table).
Since: 0.8.1
jms.client.id thinkbig.feedmgr The name of the client connecting to JMS
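A sketch of a configuration that points at a remote ActiveMQ broker (the hostname is a placeholder) and lets the listener pool grow from 3 to 10 threads:

jms.activemq.broker.url=tcp://activemq-host.example.com:61616
jms.connections.concurrent=3-10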

JMS - ActiveMQ

More detail about the ActiveMQ redelivery properties can be found here: http://activemq.apache.org/redelivery-policy.html

Property Default Value Description
jms.activemq.broker.username  
The username to connect to JMS
Since: 0.8
jms.activemq.broker.password  
The password to connect to JMS
Since: 0.8
jms.backOffMultiplier 5
The back-off multiplier
Since: 0.8.2
jms.maximumRedeliveries 100
Sets the maximum number of times a message will be redelivered before it is considered a poisoned pill and returned to the broker so it can go to a Dead Letter Queue.
Set to -1 for unlimited redeliveries.
Since: 0.8.2
jms.maximumRedeliveryDelay 600000L
Sets the maximum delivery delay that will be applied if the useExponentialBackOff option is set. (use value -1 to define that no maximum be applied) (v5.5).
Since: 0.8.2
jms.redeliveryDelay 1000
The delay, in milliseconds, before a failed message is redelivered to the listener (http://activemq.apache.org/redelivery-policy.html)
Since: 0.8.2
jms.useExponentialBackOff false
Should exponential back-off be used, i.e., to exponentially increase the timeout.
Since: 0.8.2
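As an illustration, a setup that uses exponential back-off and caps redeliveries at 10 attempts might combine these properties as follows (values are illustrative, not recommendations):

jms.maximumRedeliveries=10
jms.redeliveryDelay=1000
jms.backOffMultiplier=5
jms.useExponentialBackOff=true
jms.maximumRedeliveryDelay=600000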

JMS - Amazon SQS

Note

To use SQS you need to replace the spring profile, jms-activemq, with jms-amazon-sqs

spring.profiles.include=[other profiles],jms-amazon-sqs
Property Default Value Description
sqs.region.name  
The SQS region, for example: eu-west-1
Since: 0.8.2.2
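Continuing the note above, a complete SQS-based configuration might look like this (the region shown is just the example value from the table):

spring.profiles.include=native,nifi-v1.2,auth-kylo,auth-file,search-esr,jms-amazon-sqs
sqs.region.name=eu-west-1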

Kylo SSL

The following should be set if you are running Kylo under SSL

Property Default Value Description
server.ssl.key-store    
server.ssl.key-store-password    
server.ssl.key-store-type jks  
server.ssl.trust-store    
server.ssl.trust-store-password    
server.ssl.trust-store-type JKS  
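A sketch of an SSL configuration (keystore/truststore paths and passwords are placeholders; use the locations from your own certificate setup):

server.ssl.key-store=/opt/kylo/ssl/kylo-keystore.jks
server.ssl.key-store-password=<keystore-password>
server.ssl.key-store-type=jks
server.ssl.trust-store=/opt/kylo/ssl/kylo-truststore.jks
server.ssl.trust-store-password=<truststore-password>
server.ssl.trust-store-type=JKS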

Security

Property Default Value Description
security.entity.access.controlled false
To enable entity-level access control change this to “true”.
WARNING: Enabling entity access control is a one-way operation; you will not be able to set this property back to “false” once Kylo is started with this value as “true”.
security.jwt.algorithm HS256 JWT algorithm
security.jwt.key <insert-256-bit-secret-key-here> The JWT key. This needs to match the same key in the kylo-ui/conf/application.properties file
security.rememberme.alwaysRemember false  
security.rememberme.cookieDomain localhost  
security.rememberme.cookieName remember-me  
security.rememberme.parameter remember-me  
security.rememberme.tokenValiditySeconds 1209600 How long to keep the token active. Defaults to 2 weeks.
security.rememberme.useSecureCookie    

Security - Authentication

Below are properties for the various authentication options that Kylo supports. Using an option below requires you to use the correct spring profile and configure the associated properties. More information on the different authentication settings can be found here: Authentication

Security - auth-simple

The following should be set if using the auth-simple profile

Property Default Value Description
authenticationService.username    
authenticationService.password    
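A minimal sketch, assuming the auth-simple profile replaces auth-file in spring.profiles.include (credentials are placeholders):

authenticationService.username=<admin-user>
authenticationService.password=<admin-password>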

Security - auth-file

Property Default Value Description
security.auth.file.password.hash.algorithm MD5  
security.auth.file.password.hash.enabled false  
security.auth.file.password.hash.encoding base64  
security.auth.file.groups file:///opt/kylo/groups.properties Location of the groups file
security.auth.file.users file:///opt/kylo/users.properties Location of the users file

Security - auth-ldap

Property Default Value Description
security.auth.ldap.authenticator.userDnPatterns uid={0},ou=people user DN patterns are separated by ‘|’
security.auth.ldap.server.authDn    
security.auth.ldap.server.password    
security.auth.ldap.server.uri ldap://localhost:52389/dc=example,dc=com  
security.auth.ldap.user.enableGroups true  
security.auth.ldap.user.groupNameAttr ou  
security.auth.ldap.user.groupsBase ou=groups  
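Putting the auth-ldap properties together, a hypothetical configuration for a directory at ldap.example.com might look like the following (the DN pattern, group settings, and credentials are illustrative):

security.auth.ldap.server.uri=ldap://ldap.example.com:389/dc=example,dc=com
security.auth.ldap.server.authDn=uid=admin,ou=people,dc=example,dc=com
security.auth.ldap.server.password=<ldap-password>
security.auth.ldap.authenticator.userDnPatterns=uid={0},ou=people
security.auth.ldap.user.enableGroups=true
security.auth.ldap.user.groupsBase=ou=groups
security.auth.ldap.user.groupNameAttr=ou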

Security - auth-ad

Property Default Value Description
security.auth.ad.server.domain test.example.com  
security.auth.ad.server.searchFilter (&(objectClass=user)(sAMAccountName={1}))  
security.auth.ad.server.uri ldap://example.com/  
security.auth.ad.user.enableGroups true  
security.auth.ad.user.groupAttributes   group attribute patterns are separated by ‘|’

NiFi Rest

These properties allow Kylo to interact with NiFi

Property Default Value Description
nifi.rest.host localhost The host NiFi is running on
nifi.rest.port 8079 The port NiFi is running on. The port should match the port found in the /opt/nifi/current/conf/nifi.properties (nifi.web.https.port)

NiFi Rest SSL

The following properties need to be set if you interact with NiFi under SSL. Follow the document NiFi and SSL for more information on setting up NiFi to run under SSL.

Property Default Value Description
nifi.rest.https false Set this to true if NiFi is running under SSL
nifi.rest.keystorePassword    
nifi.rest.keystorePath    
nifi.rest.keystoreType   The keystore type, e.g. PKCS12
nifi.rest.truststorePassword   The truststore password; it needs to match the one found in the nifi.properties file (nifi.security.truststorePasswd)
nifi.rest.truststorePath    
nifi.rest.truststoreType   The truststore type, e.g. JKS
nifi.rest.useConnectionPooling false Use the Apache Http Connection Pooling client instead of the Jersey Rest Client when connecting.
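A sketch of the properties for talking to a TLS-enabled NiFi (paths, passwords, and port are placeholders; the port must match nifi.web.https.port and the truststore password must match nifi.security.truststorePasswd in nifi.properties):

nifi.rest.https=true
nifi.rest.host=localhost
nifi.rest.port=9443
nifi.rest.keystorePath=/opt/nifi/data/ssl/keystore.p12
nifi.rest.keystorePassword=<keystore-password>
nifi.rest.keystoreType=PKCS12
nifi.rest.truststorePath=/opt/nifi/data/ssl/truststore.jks
nifi.rest.truststorePassword=<truststore-password>
nifi.rest.truststoreType=JKS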

NiFi Flow/Template Injection

Kylo will inject/populate NiFi Processor and Controller Service properties using Kylo environment properties. Refer to the Configuration Properties document for details, as Kylo has a number of options for interacting with and setting properties in NiFi. Below are the default settings Kylo uses.

Property Default Value Description
config.category.system.prefix  
A constant string that is used to prefix the category reference.
This is useful if you have separate dev, QA, and prod environments that might use the same Hadoop cluster and you want to prefix the locations with the environment (see the example after this table).
config.elasticsearch.jms.url tcp://localhost:61616 The JMS URL that will be used to send/receive notifications when something should be indexed in Elasticsearch
config.hdfs.archive.root /archive Location used by the standard-ingest template to archive the data
config.hdfs.ingest.root /etl Location used by the standard-ingest template to land the data
config.hive.ingest.root /model.db Location used by the standard-ingest template for the Hive tables
config.hive.master.root /app/warehouse Location used by the standard-ingest template for the master Hive tables
config.hive.profile.root /model.db Location used by the standard-ingest template for the Hive _profile table
config.hive.schema hive Schema used to query the JDBC Hive metastore. Note for Cloudera this is metastore
config.metadata.url http://localhost:8400/proxy/v1/metadata URL for the Kylo metadata REST endpoint
config.nifi.home /opt/nifi Location of NiFi
config.nifi.kylo.applicationJarDirectory /opt/nifi/current/lib/app Location of the NiFi jar files used in NiFi templates for processors such as ExecuteSpark
config.spark.validateAndSplitRecords.extraJars /usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar Location of the extra jars needed for the Spark Validate/Split processor in standard-ingest template
config.spark.version 1 The spark version. Used in the Data Transformation template
nifi.executesparkjob.driver_memory 1024m Memory setting for all ExecuteSparkJob processors
nifi.executesparkjob.executor_cores 1 Spark Executor Cores for all ExecuteSparkJob processors
nifi.executesparkjob.number_of_executors 1 Spark Number of Executors for all ExecuteSparkJob processors
nifi.executesparkjob.sparkhome /usr/hdp/current/spark-client Spark Home for all ExecuteSparkJob processors
nifi.executesparkjob.sparkmaster local Spark master setting for all ExecuteSparkJob processors
nifi.service.hive_thrift_service.database_connection_url jdbc:hive2://localhost:10000/default Controller Service named, Hive Thrift Service, default url
nifi.service.kylo_metadata_service.rest_client_password   Controller Service named, Kylo Metadata Service, Rest client password. This controller service is used for NiFi to talk to Kylo
nifi.service.kylo_metadata_service.rest_client_url http://localhost:8400/proxy/v1/metadata Controller Service named, Kylo Metadata Service, Rest Url. This controller service is used for NiFi to talk to Kylo
nifi.service.kylo_mysql.database_user   Controller Service named, Kylo Mysql, database user
nifi.service.kylo_mysql.password   Controller Service named, Kylo Mysql, database password
nifi.service.mysql.database_user   Controller Service named, Mysql, database user
nifi.service.mysql.password   Controller Service named, Mysql, database password
nifi.service.standardtdchconnectionservice.jdbc_driver_class com.teradata.jdbc.TeraDriver Controller Service named, StandardTdchConnectionService, jdbc driver class
nifi.service.standardtdchconnectionservice.jdbc_connection_url jdbc:teradata://localhost Controller Service named, StandardTdchConnectionService, connection url
nifi.service.standardtdchconnectionservice.username dbc Controller Service named, StandardTdchConnectionService, user
nifi.service.standardtdchconnectionservice.password   Controller Service named, StandardTdchConnectionService, password
nifi.service.standardtdchconnectionservice.tdch_jar_path /usr/lib/tdch/1.5/lib/teradata-connector-1.5.4.jar Controller Service named, StandardTdchConnectionService, location for the TDCH jar
nifi.service.standardtdchconnectionservice.hive_conf_path /usr/hdp/current/hive-client/conf Controller Service named, StandardTdchConnectionService, location for the Hive client configuration
nifi.service.standardtdchconnectionservice.hive_lib_path /usr/hdp/current/hive-client/lib Controller Service named, StandardTdchConnectionService, location for the Hive library
nifi.service.kylo-teradata-dbc.database_driver_location(s)   Controller Service named, Kylo-Teradata-DBC, Teradata driver location(s)
nifi.service.kylo-teradata-dbc.database_connection_url ${nifi.service.standardtdchconnectionservice.jdbc_connection_url} Controller Service named, Kylo-Teradata-DBC, connection url. This references another property (above), resolving to ‘jdbc:teradata://localhost’
nifi.service.kylo-teradata-dbc.database_driver_class_name ${nifi.service.standardtdchconnectionservice.jdbc_driver_class} Controller Service named, Kylo-Teradata-DBC, jdbc driver class. This references another property (above), resolving to ‘com.teradata.jdbc.TeraDriver’
nifi.service.kylo-teradata-dbc.database_user ${nifi.service.standardtdchconnectionservice.username} Controller Service named, Kylo-Teradata-DBC, user. This references another property (above), resolving to ‘dbc’
nifi.service.kylo-teradata-dbc.password ${nifi.service.standardtdchconnectionservice.password} Controller Service named, Kylo-Teradata-DBC, password. This references another property (above).
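For example, a hypothetical QA environment that shares a Hadoop cluster with production might prefix its category locations and land data under environment-specific HDFS paths, and a Cloudera installation would also switch the metastore schema as noted above (all values are illustrative overrides of the defaults in the table):

config.category.system.prefix=qa_
config.hdfs.ingest.root=/qa/etl
config.hdfs.archive.root=/qa/archive
config.hive.schema=metastore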

Schema Detection

These properties affect Kylo’s sample file schema detection.

Property Default Value Description
schema.parser.csv.buffer.size 32765 Size of the internal buffer for reading the first 100 lines of CSV files. If you receive a “Marker invalid” error when uploading a sample file then try increasing this value.

Unused properties

Property Default Value Description
application.debug true  
application.mode STANDALONE  
spring.batch.job.enabled false  
spring.batch.job.names