Tuesday, July 31, 2018

Web-Services Best Practice: Using Parallel Queueing to Streamline Web-Service Data Loads


The key to moving data between applications that are in and out of the cloud is using web-service APIs.  This works great for individual transactions done in real time, but what happens when you have a large batch of transactions to process, say 100,000 transactions, where 90% of the transactions take three seconds and the other 10% take an average of 45 seconds?  Let's do some math:
90,000 transactions × 3 seconds = 270,000 seconds = 4,500 minutes = 75 hours
10,000 transactions × 45 seconds = 450,000 seconds = 7,500 minutes = 125 hours
Total: 200 hours, or about 8.3 days.
As a data integrator I worked at a healthcare provider, and as a team we found this completely unacceptable.  The customer experience for processing EDI 834 enrollment files on that project demanded a 24-hour turnaround time through to printing enrollment cards.
However, we knew that web-service transactions are, for the most part, independent of one another.  For example, when processing these EDI 834 enrollment files we needed to ensure that subscribers were processed before their dependents, but beyond that, transactions could be processed in parallel within limits.  The HealthEdge system we were loading into seemed to have a threshold of about 75 parallel queues.  I have run into smaller thresholds of 5, 10, and 20 in Oracle CX, Salesforce, and other cloud apps.  Some of these queue limits may also come from the source side: I was using SQL Server Express, which has a low threshold for open query sessions, while SQL Server Enterprise allows far more.  We used SQL Server to store the messages to be processed.  This gave us a lot of flexibility to manage the queues and assign priorities based upon multiple criteria, such as message type or subscriber/dependent.  The database model is agnostic, though; it will work in Oracle, MySQL, and Postgres as well as the aforementioned SQL Server.
The data model is quite simple:
1.    Load data tables – tables where each column is a data value identified by the column header.
2.    XML-Message views – views that translate the data from the load tables into a single XML message per row.
3.    XML-Message tables – tables that take the generated message from the view and store it in a message-request column, along with the URL to send the message to and the credentials.
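As a minimal sketch of that model in SQL Server (every table, column, and XML element name below is illustrative, not the actual project schema):

-- Load data table: one column per data value.
CREATE TABLE EnrollmentLoad (
    LoadId       INT IDENTITY PRIMARY KEY,
    SubscriberId VARCHAR(20),
    MemberType   VARCHAR(10),   -- 'SUBSCRIBER' or 'DEPENDENT'
    FirstName    VARCHAR(50),
    LastName     VARCHAR(50)
);
GO

-- XML-Message view: one XML request per load row; subscribers get a
-- higher priority so they are processed before their dependents.
CREATE VIEW EnrollmentMessageView AS
SELECT o.LoadId,
       (SELECT i.SubscriberId, i.FirstName, i.LastName
        FROM   EnrollmentLoad AS i
        WHERE  i.LoadId = o.LoadId
        FOR XML PATH('Enrollment')) AS MessageRequest,
       CASE o.MemberType WHEN 'SUBSCRIBER' THEN 1 ELSE 2 END AS Priority
FROM EnrollmentLoad AS o;
GO

-- XML-Message table: the generated request plus the endpoint, the
-- credentials, the eventual response, and a status the queues drive.
CREATE TABLE EnrollmentMessage (
    MessageId       INT IDENTITY PRIMARY KEY,
    LoadId          INT,
    MessageRequest  XML,
    MessageResponse XML NULL,
    TargetUrl       VARCHAR(500),
    Credentials     VARCHAR(200),
    Priority        INT,
    Status          VARCHAR(20) NOT NULL DEFAULT 'PENDING'
);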
From here there are multiple ways to send each message, get its response, and store the response back in the message table.
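Whatever transport you use, each of the N parallel workers needs to claim messages without colliding with the others.  One common SQL Server pattern for this, sketched here against the illustrative tables above rather than the project's actual code, uses the UPDLOCK and READPAST hints so concurrent workers skip rows that another queue has already locked:

-- Each parallel worker runs this to claim its next pending message.
WITH NextMessage AS (
    SELECT TOP (1) MessageId, MessageRequest, TargetUrl, Status
    FROM   EnrollmentMessage WITH (UPDLOCK, READPAST, ROWLOCK)
    WHERE  Status = 'PENDING'
    ORDER BY Priority, MessageId   -- subscribers drain before dependents
)
UPDATE NextMessage
SET    Status = 'IN_FLIGHT'
OUTPUT inserted.MessageId, inserted.MessageRequest, inserted.TargetUrl;

-- After calling the web service, the worker stores the result back:
-- UPDATE EnrollmentMessage
-- SET    Status = 'DONE', MessageResponse = @Response
-- WHERE  MessageId = @MessageId;

Because the workers pull work rather than being pre-assigned rows, the load self-balances: the 10% of transactions that take 45 seconds do not hold up the fast ones queued behind them.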
Applying the above scenario to each of those threshold limits gives roughly the following results (assuming the work divides evenly across the queues):
5 parallel queues:  200 / 5  = 40 hours
10 parallel queues: 200 / 10 = 20 hours
20 parallel queues: 200 / 20 = 10 hours
75 parallel queues: 200 / 75 = 2.7 hours
As you can see, the initial 200 hours is reduced to about 2.7 hours when using 75 parallel queues.
This strategy can be a critical tool for an organization that has to meet aggressive tactical objectives.
If you think this approach can help your company, please contact us to discuss your possibilities.


Tuesday, May 15, 2018

Actian DataConnect Breathes new life into Oracle CRM OnDemand Connector




by David Byrd

Recently one of our customers ran into a problem with a particular connection to a custom object in Oracle CRM OnDemand.   This process had been in place since it was originally developed under the Pervasive Data Integrator name.   After much analysis we determined it was a bug in the version the client had installed.  We installed a more current version that Actian provided, but that did not work either.

I made a few product suggestions to my Actian support rep, and those were run up the chain of command to the product manager, who agreed the suggestions were worthy.    So their engineering team got on with the changes, and we recently started testing with them.

First, though, it would be good to identify the issues I was running into when querying against a large custom object.

1.  The query would time out when you hit Establish Connection.

2.  The workaround for the query timeout was to set up a user that would only return records they owned (normally a small set of records).  This worked great unless you forgot to set the macro back, which would impact production.

3.  The current map in the process, which I rebuilt several times before Actian provided this latest version, would query for all records (around 19,000+) but only return 3,000+.

So I waited patiently for Actian to provide an upgraded version.

They made their changes and provided a build for me.   They had now added a timeout feature on the session screen.



On the first try I built a new dataset, and it queried all 19,000+ of the records.  Note that this took a long time, 20-plus minutes, but it did not time out.



I thought: awesome, this works.    So I saved and backed out to the map.

However, when I ran the map with the new dataset, it again retrieved only the 3,000+ records.

So the error/bug was in the map!

So I went through building the map again, using the dataset wizards to build it.  Upon building the new source, the dataset builder ran very quickly, retrieving only 25 records and building out the schema.   The 25 records were surprising, but everything looked good: the map dataset wizard now builds out the schema using just a sample of 25 records.

I did set the timeout to 0, if that makes a difference.

I then reset it to a larger value of 200 to see how that would go, and that returned 25 too.   Interesting.
So the bottom line is: if you click Build New Map and go through the wizards that walk you through building a dataset, then when you hit Establish Connection it works very quickly and returns 25 rows, no matter what value you put in the timeout field.
I like this!
Once I had all the mappings completed in this new map, I ran it, and it retrieved all 19,000+ records.   Yes, it works.







Just some notes: if you are in a newly created map and do Change Schema, and then create a new dataset, then when you hit Establish Connection it runs very slowly and returns all of the rows, around 19,000.   But the map still works.  The new changes definitely have some nuances, but ultimately they provide a much more enriched customer experience.








Friday, March 9, 2018

Reviews

Capterra
https://www.capterra.by/reviews/152298/dell-boomi

TrustRadius
https://www.trustradius.com/products/actian-dataconnect/reviews
https://www.trustradius.com/products/55d4b0d25c6c010e00a451a7/reviews
https://www.trustradius.com/products/55931b1f57e6291300b77ca4/reviews
https://www.trustradius.com/products/5061d969e1ff5d020000003a/reviews

G2Crowd
https://www.g2crowd.com/products/actian-dataconnect/reviews/actian-dataconnect-review-166019
https://www.g2crowd.com/products/oracle-content-marketing/reviews/oracle-content-marketing-review-538563
https://www.g2crowd.com/products/oracle-sales-cloud/reviews/oracle-sales-cloud-review-538629
https://www.g2crowd.com/products/microsoft-sql/reviews/microsoft-sql-review-538597
https://www.g2crowd.com/products/dell-boomi/reviews/dell-boomi-review-165908


Actian Integrations : Best Practice – Change Control Steps between Actian Server systems

Recently, I was reviewing a client's Data Integrator server to deploy an approved package from their staging to their production system.  I wanted to make sure that the full setup was documented as a best practice to be used with all of our clients using Actian.
1.    On your Staging server, connect your Production server as an option for deployment.
a.    Click on the Config tool (see the pink arrow in the picture below).
b.    Hit the Plus button (see #1), then add the URL of the server (#2) and the port (#3).
c.    Test the connection and then save.
2.    Once you have a package deployed to the Staging server that has been QA-approved, you are ready to deploy to Production.  First, click the Deploy button.
3.    Choose the Select Server radio button, and pick the appropriate server from the pulldown arrow.

4.    Next, choose the package version to deploy.
5.    Then press the DEPLOY button.
The package is now deployed.
Package Management
Now we want to set up Integration Manager to be able to flip between Staging and Production easily.
The first thing we want to make sure of is that each server is named appropriately.  We can do this in the Admin Designer: on the Settings tab, make sure the server name is descriptive.  In this case it is labelled "Data Integrator Server – Staging".
Do this for each server.


Now that that is taken care of, we need to log into Integration Manager in Staging first.
Go to the Server Groups tab.
This tab is similar to the one above.  Make sure each server is listed; if not, press the Add button, fill out the host name and port, and then test it.
Once all the servers are added, you can go to the Integrations tab.  From here you can choose the server you want to view.


Each user of Integration Manager has to do this, but it greatly improves the customer experience.

Actian DataConnect Best Practices: Clean up obsolete artifacts before you bring your server down!
by  David Byrd
Recently I had a server almost crash.  I quickly realized that its disk space was extremely low.   I looked at the data directories first, but the core data is stored on a data drive, not the system drive.   Please review the two best practices for data integrators below to see what you can do to prevent a system crash from running out of disk space.

Best Practice # 1
Actian integration processes can be deeply enriched by using the LogMessage function to record information about how your process is running.   It can be critical to have pertinent information posted just before a process crashes, which can otherwise ruin a good customer experience.
That is the good side of having robust logging.   Unfortunately, the downside of robust logging is a large log file.   Actian's Integration Manager does a good job of managing small logs; however, when a generated log exceeds a size threshold, it is saved in the server's local directory.
The problem: there is no cleanup of these logs.
The solution: build a bat file that deletes all files in all subdirectories, like the following.
Note: you will need to confirm the actual paths based on your installation.
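A minimal sketch of such a bat file; the directory shown is an assumption, so substitute the log location for your install:

@echo off
rem Recursively delete old engine logs: /s walks every subdirectory,
rem /q suppresses the confirmation prompt.
rem The path below is hypothetical; confirm it before running.
del /s /q "C:\ProgramData\Actian\DataConnect\logs\*.*"

Run it by hand the first time to confirm it only touches log files before scheduling it.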
Result:   Clearing these directories of old logs gave us back 45 GB.

Best Practice # 2
Actian integration processes can be run directly in the designer.  Every time you run one with a new configuration (i.e., you made a change to the process), the designer actually builds a djar file behind the scenes.   These are built in the same directory as the deployment djars, so be careful.    All the djars built from the designer have the word SNAPSHOT in the name.   See the picture below:
The problem: there is no cleanup of these djars.
The solution: clear the SNAPSHOT djars, as in the sketch below.
Note: you will need to confirm the actual paths based on your installation.
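Again a minimal sketch with a hypothetical path; the *SNAPSHOT* filter is the important part, since it leaves the deployed packages in the same tree alone:

@echo off
rem Recursively delete only designer-built SNAPSHOT djars;
rem deployment djars do not match the pattern and are kept.
rem The path below is hypothetical; confirm it before running.
del /s /q "C:\ProgramData\Actian\DataConnect\packages\*SNAPSHOT*.djar"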
Result:   Clearing these directories of old djars gave us back 4 GB.

Five ways you will benefit from StratusLite QuickBooks Edition





QuickBooks to Oracle Sales Cloud
By David Byrd

Do you have a small business using QuickBooks?  Are you looking to make your sales team more efficient? Are you considering the use of Oracle Sales Cloud (Oracle CX) and other Oracle applications to improve the customer experience? Do you want your data moved automatically from QuickBooks to the Oracle applications, and back when required, but you do not have a data integrator on staff? No problem.
If you answered yes to any of those questions, you should consider using the SFCG Stratus-lite QuickBooks Integration edition. This article will help you understand the capabilities of our integration and how it can help you.
One of the first things the integration does is build Accounts and/or Contacts off of the Customer object within QuickBooks. The integration gives you some limited options to control how it loads. For example, the first configurable option is the select statement for the Customer object in QuickBooks.
The select statement is very similar to standard SQL. We use the "*" option to bring back all columns of the data. This part of the select statement is not configurable. However, the next part, the where clause, is. The where clause is the place in the SQL-like code that lets you choose the conditions you are querying on.
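For example, a where clause that restricts the sync to active customers changed since a given date might look like this (a hypothetical filter; the exact field names depend on the QuickBooks interface you connect through):

SELECT *
FROM   Customer
WHERE  Active = true
  AND  TimeModified >= '2018-01-01'

Only the text after WHERE would be edited in the configuration; the "SELECT *" portion stays fixed.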
The second configurable piece is in the second step, the Set Properties step.
This step allows you to configure two options. The first is the customer name, which is used in the integration reporting.
The second option configurable in this step is the Load Orphans option. We give a firm the choice of loading all contacts, or just the contacts that have accounts.
The mappings for accounts and the mappings for Contacts consist of:
Lastly, this process has the ability to report the good and bad responses from each message. It combines these reports into a message which is sent via email. The email has a pre-configured dynamic subject line, so it will not be grouped into a thread the way some email systems do. It can be configured so that all notifications are sent by dc-notifications@sfcg.com. Notice the "To" option is david.byrd@sfcg.com; this could be anyone, or even a list.

So that about wraps up this offering. If you need more data brought over, speak with us to discuss an additional project for our Integration Services team.

Monday, November 20, 2017

Integration Computing Blog

An old blog post site of mine.

http://integrationcomputing.blogspot.com/

Enjoy!!

David Byrd