This help topic describes how to transform source data using the command line interface provided by hale studio. It allows you to execute transformations based on mappings defined in hale studio without having to use hale studio as a desktop application, for instance to run the transformation automatically on a regular basis or to integrate it with existing infrastructure.
To use the command line interface you essentially need a hale project that defines the transformation, the source data to transform, and a hale studio installation (or hale-cli) to run it. The individual parts are described in the following sections.
You can run hale studio on the command line either using the hale studio executable (i.e. HALE.exe) or using Java directly. Depending on the operating system, behavior may differ, but in general it is better to use Java directly (either a locally installed compatible version of Java or the version shipped with hale studio), especially if you include the task in an automated process. The advantage of using the hale studio executable is that it automatically uses the Java version shipped with hale studio and sets some important system properties for you. The following are the commands to show the usage information of the command line interface, with the executable or via Java, assuming your working directory is the hale studio installation folder:
Running the hale studio executable:
> HALE -nosplash -application hale.transform
Running via Java:
> java -Xmx1024m -Dcache.level1.enabled=false -Dcache.level1.size=0 -Dcache.level2.enabled=false -Dcache.level2.size=0 -jar plugins\org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar -application hale.transform
Running either of the commands above will show the usage information of the transformation application. Please note that for the Java call, the version of the launcher JAR file may change with new hale studio versions, and you should use a path delimiter appropriate for your system.
Additionally, the Java call specifies settings and system properties for the Java VM. You cannot provide these settings in a call to the hale studio executable; in that case you have to adapt the file HALE.ini instead. Following is a short description of the most important settings:
-Xmx1024m sets the maximum heap size available to the Java VM. Increase this value if you transform large data sets.
-Dcache.level1.enabled=false -Dcache.level1.size=0 -Dcache.level2.enabled=false -Dcache.level2.size=0
These are the cache settings used in the Java call above; pass them along unchanged when launching via Java.
log.hale.level controls the log level for components of hale studio, while
log.root.level controls the log level for other code, such as third party libraries. See the logback documentation for more information on logging levels and other details about the logging configuration.
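For example, to get more detailed output from hale studio components while limiting third party libraries to warnings, you could add system properties like these to the Java call (the level names follow the usual logback levels; adjust them to your needs):
-Dlog.hale.level=DEBUG -Dlog.root.level=WARN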
To use a proxy server for connections to the internet, configure it with the following system properties:
http.proxyHost - the proxy host name or IP address
http.proxyPort - the proxy port number
http.nonProxyHosts - hosts for which the proxy should not be used, separated by | (optional)
http.proxyUser - user name for authentication with the proxy (optional)
http.proxyPassword - password for authentication with the proxy (optional)
For example:
-Dhttp.proxyHost=webcache.example.com -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts="localhost|host.example.com"
For simplicity, the following examples will use the HALE executable; you can substitute the call to Java shown above for HALE.
Please note that every argument before -application and after the executable or JAR file is a launcher argument, while every argument after the application identifier (hale.transform) is an application argument.
Note: As an alternative to using the hale studio application to
launch the CLI, you can use the dedicated hale-cli.
With hale-cli a transformation is called like this:
hale transform <arguments...>
hale-cli also offers other kinds of commands and can be extended with custom functionality via an extension point.
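For example, using the files from the example at the end of this topic, a hale-cli call could look like this (a sketch; it assumes the transform command accepts the same application arguments as described below):
hale transform -project toInspire.halez -source geographicData.shp -Scharset UTF-8 -target inspireData.gml -preset SpatialDataSet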
If using the hale studio executable on Windows you will probably want to add the
-console launcher argument as well:
HALE -nosplash -console -application hale.transform
Otherwise you may get no feedback from the application (if you still get no feedback, launch via Java).
Note: If using a version of hale studio installed with the Windows installer (or generally a version of hale studio that has no write access to its directory), you need to specify a data location with the additional launcher argument -data, for instance like this:
HALE -nosplash -console -data "%APPDATA%\dhpanel\HALE"
The following is the usage information provided by the transformation application:
HALE -nosplash -application hale.transform
     [-argsFile <file-with-arguments>]
     -project <file-or-URI-to-HALE-project>
     -source <file-or-URI-to-source-data>
         [-include <file-pattern>]
         [-exclude <file-pattern>]
         [-providerId <ID-of-source-reader>]
         [<setting>...]
     [-filter <filter-expression>]
     [-filterOn <type> <filter-expression>]
     [-excludeType <type>]
     [-exclude <filter-expression>]
     -target <target-file-or-URI>
         [-preset <name-of-export-preset>]
         [-providerId <ID-of-target-writer>]
         [<setting>...]
     [-validate <ID-of-target-validator> [<setting>...]]
     [options...]

where setting is
     -S<setting-name> <value>
     -X<setting-name> <path-to-XML-file>

and options are
     -reportsOut <reports-file>
     -stacktrace
     -trustGroovy
     -overallFilterContext
-project <file-or-URI-to-HALE-project>
A hale project contains all the necessary information to perform the transformation from one data model into another. It references the source and target schemas and describes the transformation rules in the form of a mapping. Use the -project parameter to provide the location of your project file, as a relative or absolute path, or as a URI.
If you want to share your project, the best option is to save it as a project archive. In the save wizard you can specify to include online resources to make it loadable offline. You can also exclude any source data from the project; it will be ignored for the command line transformation anyway.
-project C:\Hale-Project\myProject.halez (absolute path)
-project myProject.halez (relative path)
-project "C:\Hale-Project\my Project.halez" (quoted absolute path with spaces)
-source <file-or-URI-to-source-data> [-providerId <ID-of-source-reader>] [<setting>...]
The source data can likewise be provided as a path to a file or as a URI. For instance, you can provide a URI to a Web Feature Service GetFeature request. Specifying a source data location is mandatory; any source data configured in the hale project will be ignored for the transformation.
If the source is a directory, you can specify multiple -include and -exclude parameters to control which files to load. If you do not specify -include, it defaults to **, i.e. all files are included, even if they are in sub-directories. Patterns use the glob pattern syntax as defined in Java and should be quoted so they are not interpreted by the shell.
You can transform data from multiple sources if you provide a -source argument for each.
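For instance, to load all GML files from a directory while skipping everything below a backup folder, a call could contain arguments like these (a sketch; the directory and folder names are placeholders):
-source C:\data -include "**.gml" -exclude "backup/**"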
hale studio will try to guess the file format and how to read it, so in most cases it is enough to specify the location of the source data. But you also have the possibility to control in detail which hale data reader to use (via -providerId) and how to configure it (via settings).
Please take a look at the InstanceReader reference to see what kind of providers are available for you and what kind of configuration options they offer. Please note that this reference documentation is generated from the I/O providers present in your local hale studio installation (including eventual additional plugins and custom implementations), so it is only available in the local hale studio help and not in the online version on the web.
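A sketch of explicitly selecting and configuring a source reader; the provider ID here is only a placeholder for an actual ID from the InstanceReader reference:
-source geographicData.shp -providerId <ID-of-source-reader> -Scharset UTF-8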
By default hale studio uses all data passed in as sources for the transformation. The filter options allow you to filter the source data before it is passed to the transformation. This can be helpful for selecting only objects actually needed for the transformation (e.g. to reduce processing time and temporary storage used), or to exclude objects that would falsify the result.
With -filter you can specify a filter expression that is checked against all objects read from the source. The filter language can be specified at the beginning of the filter expression, followed by a colon. If no language is provided explicitly, the expression is assumed to be CQL.
Following is a simple example filter only accepting instances with the value 'Berlin' for the property name:
-filter "CQL:name = 'Berlin'"
To apply a filter only to objects of a certain type, use -filterOn. The first argument to -filterOn is the type. You can specify its name with or without namespace. If you want to specify the namespace, wrap it in curly braces and prepend it to the type name (for example: {namespace}TypeName). The second argument is the filter expression that is to be applied to that type.
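For example, to apply the filter from above only to instances of a hypothetical type City qualified with its namespace (both names are placeholders):
-filterOn "{http://www.example.org}City" "CQL:name = 'Berlin'"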
If filters are defined, generally, any object needs to be accepted by at least one of the filters defined with -filter or -filterOn. If there are only filters for specific types (-filterOn), and no general filters defined, objects of other types pass without check.
Exceptions to that are only the exclusion filters. They prevent an instance from being passed to the transformation even if it was accepted by a different filter. -excludeType will prevent any instance of a specific type from being passed to the transformation. -exclude on the other hand allows specifying a filter; only instances that don't match the filter pass on to the transformation.
-overallFilterContext is another filter related option. If you pass this flag to the call, it is ensured that any context aware filters share a context across loading all of the defined sources. Context aware filters can currently only be supplied in Groovy.
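As an illustration (the type and property names here are only placeholders): the following arguments remove all instances of the type ProtectedSite and, additionally, all instances with the value 'retired' for the property status, regardless of any other filters:
-excludeType ProtectedSite -exclude "CQL:status = 'retired'"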
-target <target-file-or-URI> [-preset <name-of-export-preset>] [-providerId <ID-of-target-writer>] [<setting>...]
You also need to specify where to write the transformation result to; usually this is a file.
In addition, you need to provide either an export preset or a hale data writer ID and configuration.
The recommended approach is to use an export preset. You can easily define it in hale studio with support through the UI, and save it as part of the project. An export preset essentially stores the configuration information on how to save the data. Create it in hale studio via File→Export→Create custom data export... in the main menu. Configure the export and specify a name for the preset - this name is what you specify to use the preset on the command line.
If you for example saved a preset with the name GML, you can use it for the transformation like this:
-target output.gml -preset GML
Even when using a preset, you can still provide setting parameters to override specific behavior.
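For instance, assuming the target writer supports the general charset setting (check the InstanceWriter reference below for the settings your writer actually supports), you could force a specific encoding while otherwise relying on the preset:
-target output.gml -preset GML -Scharset UTF-8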
Please take a look at the InstanceWriter reference to see what kind of providers are available for you and what kind of configuration options they offer. Please note that this reference documentation is generated from the I/O providers present in your local hale studio installation (including eventual additional plugins and custom implementations), so it is only available in the local hale studio help and not in the online version on the web.
-validate <ID-of-target-validator> [<setting>...]
The transformation result can optionally also be validated. To do so, specify a validator to use by its ID in hale. For example, you can validate a created XML/GML file against its XML Schema Definition.
Please take a look at the Instance validators reference to see what kind of validators are available for you and what kind of configuration options they offer. The validator will by default be configured with the content type of the transformation result writer. Please note that this reference documentation is generated from the I/O providers present in your local hale studio installation (including eventual additional plugins and custom implementations), so it is only available in the local hale studio help and not in the online version on the web.
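For illustration, validation is enabled by appending -validate to the target configuration; the validator ID below is just a placeholder for an actual ID from the Instance validators reference:
-target inspireData.gml -preset SpatialDataSet -validate <ID-of-target-validator>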
Here is an example of how to use the command line interface. There is a hale project named toInspire.halez which contains a source schema and a mapping to one of the INSPIRE application schemas. The source data is contained in the geographicData.shp file, which is encoded in UTF-8. The transformed data should be stored in inspireData.gml as GML, with an INSPIRE SpatialDataSet as container.
> HALE -nosplash -application hale.transform -project toInspire.halez -source geographicData.shp -Scharset UTF-8 -target inspireData.gml -preset SpatialDataSet
The call above uses an export preset named SpatialDataSet defined in the project. Alternatively, you can explicitly specify the target writer by its provider ID and configure it with individual settings:
> HALE -nosplash -application hale.transform -project toInspire.halez -source geographicData.shp -Scharset UTF-8 -target inspireData.gml -providerId eu.esdihumboldt.hale.io.inspiregml.writer -Sinspire.sds.namespace http://gdi-de.org/oid/de.beispiel.namespace -Sinspire.sds.localId 10
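If you additionally want to keep the transformation reports and get more detail on errors, you can append the corresponding options listed in the usage information above (a sketch; judging from their names, -reportsOut stores the reports in the given file and -stacktrace prints full stack traces for errors, and the report file name is just an example):
> HALE -nosplash -application hale.transform -project toInspire.halez -source geographicData.shp -Scharset UTF-8 -target inspireData.gml -preset SpatialDataSet -reportsOut reports.log -stacktrace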
Based on a public example project we created an example you can try directly, complete with project data and a script file to launch the transformation. You can download the example here. Please take a look at the README provided as part of the example to learn how to set it up.