How to use the Discngine Knime Data Functions
Data Functions are calculations based on S-PLUS, open-source R, SAS®, or MATLAB® scripts, or on R scripts running under TIBCO Enterprise Runtime for R, which you can make available in the TIBCO Spotfire® environment. With the Connector, you can also base Data Functions on KNIME workflows.
In this tutorial you will learn how to register and use Knime Data Functions in TIBCO Spotfire®.
Materials
You will need the following prerequisites:
- the Discngine sbdf I/O node collection installed on your Knime® Server
- a Knime® account with execution rights
- a Knime® workflow accessible on your Knime® Server with a Download output
- a Spotfire® account with sufficient rights to see and use data functions
1. Registration of Knime Data Functions
To register a new Data Function:
- Open Tools > Discngine Knime data functions administration.
- Log in to the Knime® server using your Knime® credentials. The server URL is the one set in the Discngine Knime Data Function preferences.
- Click "Register new" and browse the Knime® hierarchy to select the workflow.
- A pop-up window opens, displaying information about the Data Function and the workflow. You can assign a few keywords to the Data Function so that you can easily search for it later, and define whether caching is allowed for this Data Function. The library path is set automatically from the Discngine Knime Data Function preference, but you can change it. Finally, the inputs and outputs of the protocol are listed.
2. Run Knime Data Functions
Load data
In TIBCO Spotfire® Analyst:
- Starting with no analysis open in TIBCO Spotfire®, click Files and Data, search for the Data Function you have just registered, and click it.
- Here you can define the input and output parameters.
- Set the input parameters you want; here, for example, the number of compounds loaded by the "Load maybridge" workflow. Tick "Refresh function automatically" and click OK.
- Change the name of the output data table you want to import into your analysis.
- Click OK.
The data are now added to the analysis.
On the Knime® side, the "Load maybridge" workflow simply reads the Maybridge data set (here from an SDF file), computes the SLogP, converts the molecules to their Mol representation, writes the results to an SBDF file, and exposes it through a Download File node. Note that:
- the input parameter "NumberOfCompound" is a parameter of the workflow,
- the workflow must output a Download file containing an SBDF file. It is this file that TIBCO Spotfire® reads to load the data into the analysis.
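The contract described above can be sketched in plain Python. This is an illustrative stand-in only: the record set, the `run_workflow` function, and the parameter name are hypothetical, and CSV is used in place of the SBDF format (which KNIME produces via the Discngine node), since the point is the data flow, not the serialization.

```python
import csv
import io

# Placeholder records standing in for the Maybridge data set parsed
# from the SDF file (SLogP values are illustrative, not computed).
MAYBRIDGE = [
    {"Molecule": "CCO", "SLogP": -0.14},
    {"Molecule": "c1ccccc1", "SLogP": 1.69},
    {"Molecule": "CC(=O)O", "SLogP": 0.09},
]

def run_workflow(number_of_compound: int) -> str:
    """Mimic the workflow contract: take the NumberOfCompound parameter,
    keep that many rows, and serialize them as the 'Download' payload
    (CSV standing in for SBDF)."""
    rows = MAYBRIDGE[:number_of_compound]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["Molecule", "SLogP"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Spotfire would fetch this payload and load it as a data table.
payload = run_workflow(2)
print(payload.splitlines()[0])  # header row: Molecule,SLogP
```

The design point is that the input parameter is applied inside the workflow, and only the resulting file crosses back to Spotfire®.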
Add data using an existing TIBCO Spotfire® data table
Following the same steps as before, now insert a Knime® workflow Data Function that takes as input an Upload file (an SBDF file) and reads it with the Discngine SBDF Reader node.
For example, here we use the "Cluster_Molecules" workflow, which clusters the molecules from the input data set.

Select at least the column "Molecule" from the Maybridge data table as input:
For the output parameter, we now add a new data table named "clustered" (we do not add the columns to the existing table, because the computed columns depend on existing columns, which would create a cyclic dependency):
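The reason for writing to a new table can be sketched as follows. Everything here is hypothetical: the trivial grouping by string length stands in for the real molecular clustering done by the "Cluster_Molecules" workflow, and plain dicts stand in for Spotfire® tables. What matters is that the function returns a new table and leaves the input untouched, so the computed column can never feed back into its own inputs.

```python
# Input table; only the "Molecule" column is passed to the Data Function.
maybridge = [
    {"Molecule": "CCO"},
    {"Molecule": "CCN"},
    {"Molecule": "c1ccccc1"},
]

def cluster_molecules(rows):
    """Return a NEW table with a Cluster column; the input rows are
    not modified (mirroring 'add as new data table' in Spotfire®)."""
    clusters = {}   # maps grouping key -> cluster id
    out = []
    for row in rows:
        key = len(row["Molecule"])  # placeholder similarity criterion
        cluster_id = clusters.setdefault(key, len(clusters))
        out.append({"Molecule": row["Molecule"], "Cluster": cluster_id})
    return out

clustered = cluster_molecules(maybridge)
```

If the Cluster column were instead appended to the Maybridge table, refreshing the Data Function would make its own output part of its input, which is the cyclic dependency the new table avoids.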