IdentitySync is Gigya's ETL (Extract, Transform, Load) solution, offering an easy way to transfer data in bulk between platforms.
With IdentitySync, you can:
- Export: Take all the permission-based social and profile identity information stored at Gigya and channel it into another platform, such as an ESP, CRM, or marketing automation system.
- Import: Get up-to-date data from a 3rd-party platform, such as a user's newsletter subscription status, survey responses, or account balance, and update existing Gigya user profiles or create new ones ad hoc.
- Transfer: Move users from one Gigya site to another.
IdentitySync is the engine that runs Gigya integrations with:
- ESP marketing systems
- Customer relationship management systems (CRM)
- Data management platforms (DMPs)
- Any file-based integration, using interim platforms such as SFTP, Azure, and Amazon.
- Other platform types, by dynamically writing data to an external endpoint.
IdentitySync jobs can be carried out on a one-time basis, for example if migrating data, or they can be scheduled to run on a regular basis in order to keep your platforms synchronized.
IdentitySync gives you the flexibility to use your data in any way you need. For example, with IdentitySync, the following scenarios are supported:
- Query the audit log to retrieve deleted users, and use a batch job to pass them on to other external systems, so that they can be deleted there as well for data compliance reasons.
- Retrieve all accounts that have remained unverified or unregistered for over a week (isVerified is false or isRegistered is false, and the account was created more than a week ago), and export the relevant email addresses to an ESP, from which to send follow-up emails.
- As a sports club, regularly import accounts from an external ticketing system, thus fortifying your fanbase.
- Set up data fields for segmenting users according to certain types of behavior in your site, then use an IdentitySync job to send only users that match these criteria to a marketing system for a targeted campaign.
- Use a Gigya-to-Gigya IdentitySync job to query users by Facebook likes stored in their profiles - for example, people who like vampire and zombie related content - and to plant a value (e.g., "horrorFic") in a Gigya data field. Then launch a gruesome Halloween marketing campaign targeting these users.
IdentitySync is highly flexible, and supports many technologies, source and target platforms, and data transformations.
For full, up-to-date details of the service's capabilities, see the Component Repository.
Each IdentitySync job runs a dataflow. The building blocks of the dataflow are dedicated components. A component is a pre-configured unit that is used to perform a specific data integration operation. The components include readers, writers, transformers and lookups. Each component is responsible for performing a single task, such as:
- Extracting accounts from Gigya based on specific parameters
- Changing some field names
- Creating a CSV file
- Uploading a file to FTP
- Writing data directly to a target platform or sending it to a generic API endpoint
Components can be added to the dataflow, removed or changed as needed.
For detailed information, visit the Component Repository.
Steps are the building blocks of the dataflow. Each step is a call to a component that performs a specific task, such as extracting accounts from Gigya, or compressing a file in GZIP format. The step output is passed on to the next step in the dataflow for further processing. Each step includes the following attributes:
- id: the unique identifier of the step within a given dataflow (e.g., "Read from Gigya").
- type: the component used in this step (e.g., "datasource.read.gigya.account"; see the Component Repository).
- params: a set of parameters for this step.
- next: an array containing one or more IDs of the next step(s) to be carried out.
- error: the next step to perform, in case of an error in the current step. For example, writing errors to a log.
A dataflow is a series of steps that together comprise the complete definition of a transfer of information between Gigya and a third-party platform. A dataflow can also be assigned a schedule and executed once or repeatedly.
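For illustration, here is a minimal sketch of a dataflow definition using the attributes described above, matching the outline Account > rename > dsv > gzip > sftp referenced later on this page. Only datasource.read.gigya.account and datasource.write.sftp are component names taken from this page; the remaining component names, and all parameter names and values, are illustrative assumptions, so consult the Component Repository for the exact definitions:

```json
{
  "name": "account > rename > dsv > gzip > sftp",
  "steps": [
    {
      "id": "Read from Gigya",
      "type": "datasource.read.gigya.account",
      "params": { "select": "UID, profile.email, profile.firstName" },
      "next": ["rename"]
    },
    {
      "id": "rename",
      "type": "field.rename",
      "params": { "fields": [{ "sourceField": "profile.email", "targetField": "EMAIL" }] },
      "next": ["dsv"]
    },
    {
      "id": "dsv",
      "type": "file.format.dsv",
      "params": { "fileName": "accounts.csv" },
      "next": ["gzip"]
    },
    {
      "id": "gzip",
      "type": "file.compress.gzip",
      "next": ["sftp"]
    },
    {
      "id": "sftp",
      "type": "datasource.write.sftp",
      "params": {
        "host": "sftp.example.com",
        "username": "your-username",
        "remotePath": "/exports"
      }
    }
  ]
}
```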
IdentitySync includes a built-in capability for separating failed records and writing them to a file, so that they may be reviewed and handled, and fed back into the flow.
Note that IdentitySync jobs are scheduled in UTC time. Therefore, the platform participating in the flow should be set to the UTC timezone to ensure that file requests are handled properly.
To create an integration based on IdentitySync, complete the following process:
1. Create Dataflow
Open the Dataflows page in Gigya's Console. Make sure you are signed in and have selected the relevant site. The IdentitySync dashboard can also be accessed by clicking Settings in the upper menu and then Dataflows in the left menu.
In the dashboard, click Create Data Flow.
In the Create Data Flow window, select the data flow integration from the dropdown. If the flow you wish to create is not available in the dropdown, select any available flow; you will customize it in the next steps.
Select the data flow template. Note that at the bottom of this window, you can see an outline of the flow that will be created (e.g., Account > rename > dsv > gzip > sftp).
Click Continue. As a result, the IdentitySync Studio screen opens in the dashboard.
2. Edit the Data Flow
The data flow you created is built of the required steps for data transfer between Gigya and the selected vendor. Use the Component Repository to understand the structure and parameters required in each step.
Using IdentitySync Studio, you can:
- Specify passwords, IDs, API keys etc. required for accessing each system and customer database.
- Add the names of fields included in the data flow.
- Flatten fields, remove non-ASCII strings, specify the compression type, parse in JSON or DSV format, etc.
- Map fields and extract array data, for example using field.array.extract.
- Change the name of the data flow.
- Split a data flow, for example if you want to create two duplicate files and upload each file to a different destination. To do so, simply drag and drop the relevant step into the flow, and add connecting arrows as needed. In the code for the flow, the split is expressed in the next attribute, which will reference two next steps rather than just one (see the sample fragment following these instructions). For a sample dataflow that employs this method, see the Epsilon Dataflow.
- Add Custom Scripts.
- Write failed records, that did not complete the flow successfully, to a separate file for review.
To edit the data flow in IdentitySync Studio:
- If it's more convenient, you can work in full-screen mode by clicking the full-screen toggle in the top right corner.
- Double-click any of the steps to add or edit its parameters. Click OK when finished.
- To add a new step, start typing its name in the Search component box. Drag the step from the list of components into the canvas.
- Drag arrows from/to the new step and from/to existing steps, to include it in the correct place in the flow. Make sure the "Success path" arrow is selected under Connector Type.
- To add a custom step, locate the record.evaluate step in the list of components and drag it to the canvas.
- To split the data flow (for example, to write to two target platforms), add the relevant step (e.g., another "write" step) and draw the connecting arrows accordingly.
- Handling failed records: You can add steps after a "writer" step to write records that did not complete the flow successfully to a separate file (see the sample fragment following these instructions). To do so:
- Add the relevant components to the flow (for example, a file.format step to write the failed records to a file, and a writer to write that file to the relevant destination).
- Under Connector Type, select the "Error path" connector.
- Draw a connection from the original writer (to which successful records are written) to the first step that handles failed records (e.g., the file.format step).
- Under Connector Type, select the "Success path" connector again, and connect the remaining steps that handle the failed records (e.g., the file.format step to the writer).
- If necessary, click Source to review the data flow code, and edit the code as needed.
- Click Save.
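As referenced above, here is a hedged fragment showing how a split and an error path are expressed in the dataflow source. The step IDs are illustrative: the split appears as two IDs in the next array, and the error path appears as the error attribute on the writer, after which the failed-records steps are chained with regular next (success) links:

```json
{
  "id": "dsv",
  "type": "file.format.dsv",
  "next": ["sftp destination 1", "sftp destination 2"]
},
{
  "id": "sftp destination 1",
  "type": "datasource.write.sftp",
  "error": "format failed records"
},
{
  "id": "format failed records",
  "type": "file.format.dsv",
  "next": ["write failed records"]
},
{
  "id": "write failed records",
  "type": "datasource.write.sftp"
}
```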
Click the ellipsis for the Actions menu. The following actions are available:
|Action||Description|
|Edit||Opens the current data flow in IdentitySync Studio, where you can change any of its attributes, steps, and parameters.|
|Run Test||Runs the data flow once on 10 records for test purposes. If the test was successful, after refreshing the dashboard you will see the timestamp of the test run under Last Successful Run. Use the Status button to view the details of the run. See the Job History section on this page.|
|Duplicate||Useful for creating a new data flow based on a flow that has already been customized, if you wish to create a similar flow with slight variations.|
|Status||Displays the status of the current jobs running in your IdentitySync configuration. See the Job History section on this page.|
|Delete||Deletes this data flow.|
3. Schedule the Dataflow
- Under Actions, select Scheduler.
- Click Create Schedule.
- Configure the schedule:
- Enter a schedule name
- Change the start time as needed
- Choose the log level:
- Error: Only error logs will be displayed in the job trace
- Info: Info and error logs will be displayed in the trace
- Debug: Besides info and error logs, each record will be logged between every two steps. This should be used only if the dataflow is not working as expected, and is limited to a batch of 3 records.
- The log level does not affect the step metrics and errors (see Test and Monitor below).
- The job trace is limited to 1000 entries per job.
- Choose whether to run once or at scheduled intervals
- "Pull all records" should usually be selected only in the first run, when migrating records from one database to the other, and in any case should be used with caution. If the checkbox is not selected, and this is the first time this dataflow is run, records will be pulled according to the following logic:
- If the dataflow is set to run once, all records from the last 24 hours will be pulled.
- If the dataflow is recurring, records will be pulled according to the defined frequency. For example, if the dataflow is set to run once a week, the first time it is run, it will pull all records from the previous week.
- (Optional) Enter the email address(es) to be notified on success and failure of the dataflow run. Use commas to separate multiple addresses.
- (Optional) Limit to a specific number of records. This is usually used for test runs: when running a test from the dashboard, a one-time schedule is created which runs immediately for 10 records.
- Click Create, and, once you are back in the Schedule dashboard, click the Refresh button.
- The status of the scheduling is indicated in the Status column.
- You can stop a job mid-run by clicking the Stop icon under Actions.
Creating a Variable
You may create and manage variables that can be shared across dataflows within the same site, within the same data center, or across all partner sites. This is useful for credentials (for example, to an SFTP repository) or for any variable that is reused in different flows. It saves the hassle of retyping and minimizes manual errors, and it also lets you update a variable's value in a single location instead of manually updating each dataflow. To create and manage your variables:
- Open the Dataflows page in Gigya's Console. Make sure you are signed in and have selected the relevant site. The IdentitySync dashboard can also be accessed by clicking Settings in the upper menu and then Dataflows in the left menu.
- Click the Shared Variables tab.
- To create a new variable, click Create.
- Enter the name and value of the variable.
- From the dropdown, select the scope of the variable. The scope determines whether the variable may be reused across different flows within the same site, across different sites in the same data center, or across different data centers for the same partner.
- Click the plus button to enter the new value.
- Click Save.
Using Shared Variables in Dataflows
To use a shared variable in a dataflow, enclose the variable name in the following structure: <$VARIABLE_NAME$>. A field value can contain one or more shared variables, for example: test_<$USERNAME$>_<$ID$>.
For example, if the name of your variable is sftpUsername, and you are entering it in the username parameter of the datasource.write.sftp step, enter your value as follows: <$sftpUsername$>. In the source code of the dataflow, it will appear roughly as in the sketch below (the step ID, host, and remotePath values are illustrative):
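```json
{
  "id": "sftp",
  "type": "datasource.write.sftp",
  "params": {
    "host": "sftp.example.com",
    "username": "<$sftpUsername$>",
    "remotePath": "/exports"
  }
}
```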
Generally, a variable may be deleted only if it is not used in any dataflow. The exception: a variable that is in use may still be deleted if another variable with the same name is defined in a different scope, as long as the dataflow that uses the variable falls within that other scope.
For example, on site 'X', the variable "myVariable" is defined twice - once with a "site" scope and once with a "partner" scope. A dataflow defined for site X uses myVariable. The "myVariable" with the site scope may be deleted, since the "myVariable" with the partner scope will be used instead.
- Before deleting a variable, open the Actions menu and select Check Usage to check if the variable is used in a flow.
- You may delete a variable that is not in use.
- It may take a while for changes to appear on the Shared Variables page. Refresh the page to receive an updated view of your variables.
- Shared variables are case sensitive.
- The variable name must be unique within its scope, i.e., the same variable name may be defined for different scopes. In that case, the value will be used in the following order: site, then data center, then partner (the narrowest scope takes precedence, as in the deletion example above).
- Currently, only string variables are supported.
- The variable name cannot contain a space.
- Maximum number of shared variables per partner: 50
- Maximum length of variable name/value: 100
- The value of a shared variable cannot start with the [ character or end with the ] character.
- Shared variables cannot be used in a record.evaluate step (custom scripts).
Account Deletion Propagation
In most cases, accounts that are deleted from SAP Customer Data Cloud should also be deleted from downstream applications. A typical business flow consists of the following steps:
- Deactivating the account by calling accounts.setAccountInfo with isActive set to false. This prevents the user from logging in to their account.
- Flagging the account for deletion by creating a custom data field (e.g. data.deleteUser) and setting that field to true.
- Querying the SAP Customer Data Cloud database for accounts that are disabled and have been flagged for deletion, and passing them on to downstream applications to be handled there (by either deleting them or flagging them for deletion).
- Finalizing the deletion process by deleting the account from SAP Customer Data Cloud, e.g., by calling accounts.deleteAccount.
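For the third step, the read step's query might look something like the sketch below, assuming the flow reads flagged accounts with datasource.read.gigya.account. The select parameter name and the next step are assumptions (consult the Component Repository for the component's actual parameters), and data.deleteUser is the example field from the list above:

```json
{
  "id": "Read flagged accounts",
  "type": "datasource.read.gigya.account",
  "params": {
    "select": "SELECT UID FROM accounts WHERE isActive = false AND data.deleteUser = true"
  },
  "next": ["Write to downstream"]
}
```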
Account Deletion Orchestration
In other cases, if an account is deleted from SAP Customer Data Cloud but not from downstream systems, you should sync that information to trigger a downstream account deletion.
To do so, you have the following options:
Export a File
Account deletion is recorded to the Audit Log. Use the datasource.read.gigya.audit IdentitySync component to extract deleted records, and write them to a file. This can be imported into the downstream system and handled there. See below for a list of relevant endpoints to query in the Audit Log.
Use the Generic Writer
Account deletion is recorded to the Audit Log. Use the datasource.read.gigya.audit IdentitySync component to extract deleted records, then use the datasource.write.external.generic component to write data to an external endpoint. See below for a list of relevant endpoints to query in the Audit Log.
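Such a flow might pair the two components roughly as follows. The component names are from this page, but the where and url parameters, the endpoint filter, and the step IDs are assumptions; consult the Component Repository for the components' actual parameters:

```json
{
  "steps": [
    {
      "id": "Read audit log",
      "type": "datasource.read.gigya.audit",
      "params": { "where": "endpoint = 'accounts.deleteAccount'" },
      "next": ["Write to endpoint"]
    },
    {
      "id": "Write to endpoint",
      "type": "datasource.write.external.generic",
      "params": {
        "url": "https://downstream.example.com/users/delete",
        "method": "POST"
      }
    }
  ]
}
```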
Use Webhooks
The SAP Customer Data Cloud Webhooks solution supports an "account deleted" webhook, fired whenever an account is deleted. You may send the webhook notification directly to the downstream system from which the account is to be deleted, or to a middleware platform that handles the deletion.
Audit Log Deletion Endpoints
The following endpoints, which appear in the Audit Log, indicate an account deletion:
In addition, the following APIs may indicate an account deletion in certain scenarios (especially when two UIDs are merged into one). Depending on your site implementation (e.g., if sync is based on the UID), they may be used in a flow that syncs account deletion to downstream systems:
- accounts.login (loginMode = link)
Test and Monitor
Test the data flow by clicking Run Test under Actions. This creates an immediate one-time run for 10 records. If the run was successful, after refreshing the dashboard (with the Refresh button) you will see its timestamp under Last Successful Run.
When scheduling the dataflow, you can enter email addresses to which a success and/or failure notification will be sent. We recommend adding firstname.lastname@example.org to the list of failure notification email addresses, so that Gigya will receive feedback on system health.
You can monitor data flows by reviewing previous runs (jobs). The job history displays the status of each run, its start and end times, and the number of records for which the data flow was completed successfully (under Processed).
Under Actions, click the Status button to open the Job History screen.
For advanced monitoring and debugging, click the info icon for the relevant job under Details, and the Job Status Details screen opens.
Note the tabs that display the following detailed information:
- Trace: Contains a detailed trace of the job execution, including the log level and timestamp of each log message. The job trace is limited to 1000 records; the log level is defined when scheduling the job.
- Step metrics: Displays the following metrics for each step: Duration, Input, Output, and Errors. Using step metrics, you can identify the bottlenecks of a job that took a long time to run, review performance issues, and monitor the number of records that completed the flow.
- Errors: Displays details of the errors that occurred during the job execution.
Certain restrictions are imposed on the number and frequency of data flows each partner can run. By default, these are as follows:
- Maximum allowed number of data flows: 5
- Maximum allowed number of scheduled data flows: 5
- Frequency: The smallest frequency to which a dataflow can be set, i.e., the minimal time that must pass between scheduled runs of that flow. The default is 10 minutes.
- Full extract frequency: The smallest frequency to which a "full extract" dataflow can be set, i.e., the minimal time that must pass between each schedule for that flow. These are flows for which the "Pull all records" checkbox is flagged. Default: 7 days.
- Maximum allowed number of full extract scheduled flows: 2
Copying Accounts From One Site to Another
IdentitySync gives you the option of copying the account database from one Gigya site to another, using the read from Gigya and write to Gigya components. When doing so:
- The source and target sites should belong to the same data center.
- Make sure all the fields being written to the target site exist on that site's schema.
- Policies should be identical on both sites, with one exception: the email verification policy must be disabled on the target site for the import to work (this one policy may be configured differently on the source site).
- Schema and site configuration should be completed prior to the user import. An easy way to do this is using the Configuration Copy Tool.
- The job should be set up on the target site.
- On the source site, you should create an application, and use the credentials in the dataflow. We recommend using a high-rate application key. For more information, see Signing Requests to SAP Customer Data Cloud.
- When setting up the import job, update the "account" step to include the source API key, the user key, and the secret key.
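A sketch of how the two steps might be configured is shown below. Only the "account" step ID, the read component name, and the need for the source API key, user key, and secret are taken from the notes above; the write component name and all parameter names are assumptions, so consult the Component Repository for the exact definitions:

```json
{
  "steps": [
    {
      "id": "account",
      "type": "datasource.read.gigya.account",
      "params": {
        "apiKey": "<source site API key>",
        "userKey": "<application user key>",
        "secret": "<application secret>",
        "select": "UID, profile, data"
      },
      "next": ["Write to Gigya"]
    },
    {
      "id": "Write to Gigya",
      "type": "datasource.write.gigya.account"
    }
  ]
}
```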
Any operation involving importing accounts can be quite complex. If your use case does not fall within the parameters defined above, SAP strongly recommends contacting your Customer Engagement Executive to scope an engagement with SAP's digital services consulting team.
Depending on your networking policies, you may have to add the IPs of IdentitySync servers to a whitelist in order to allow IdentitySync to upload/pull information.
The full list of Gigya IPs is available here. IdentitySync uses the IPs listed under "NAT IPs".
Following is a list of common issues you may run into when using IdentitySync, and their solutions.
Files Not Uploaded
When files from IdentitySync are not uploaded to the target platform, such as S3, the reason is usually that the SAP Customer Data Cloud IP addresses are not whitelisted on that system. See above for the list of addresses.
Custom Scripts
When using Custom Scripts, note that they run on Nashorn, and not all JavaScript functionality is supported.
Each script is allowed a maximum of 10 logged lines. In production dataflows, make sure you do not log every record, but only errors and occasional messages.