Yet another trendsetting, first-of-its-kind feature of CAPI is the ability to selectively refresh application data between your PeopleSoft environments. In any PeopleSoft installation, there may be a requirement to copy or refresh application data from your production or UAT environment into the downstream environments.
Consider that you have a production issue: say the pay cycle (in the Financials product suite, Accounts Payable module) is not processing a particular payment. In such a case, you may wish to pull the voucher data alone, as-is, from the production environment and load it into your development/test environment to debug the issue, since the issue pertains to that single voucher and not to the Pay Cycle process itself.
Consider yet another scenario where you need to get a copy of all the data for a particular BUSINESS_UNIT and/or SETID. Most of the time, in such a case, you would ask your DBA to restore a copy of the production database into one of your development/test databases.
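The selective copy described in these scenarios can be sketched as below. This is a minimal illustration, not CAPI's implementation: it assumes a single flattened `PS_VOUCHER` table with three columns, whereas a real PeopleSoft voucher spans many parent and child records, and it uses SQLite in place of the actual production and test databases.

```python
import sqlite3

def copy_by_business_unit(src, dst, table, business_unit):
    # Pull every row for the requested BUSINESS_UNIT from the source.
    rows = src.execute(
        f"SELECT BUSINESS_UNIT, VOUCHER_ID, GROSS_AMT FROM {table} "
        "WHERE BUSINESS_UNIT = ?", (business_unit,)).fetchall()
    # Remove any stale copies of the same key in the target, then load
    # the fresh rows so the target matches the source for that key.
    dst.execute(f"DELETE FROM {table} WHERE BUSINESS_UNIT = ?",
                (business_unit,))
    dst.executemany(f"INSERT INTO {table} VALUES (?, ?, ?)", rows)
    dst.commit()
    return len(rows)

# Demo with two in-memory databases standing in for production and test.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE PS_VOUCHER "
                 "(BUSINESS_UNIT TEXT, VOUCHER_ID TEXT, GROSS_AMT REAL)")
src.executemany("INSERT INTO PS_VOUCHER VALUES (?, ?, ?)",
                [("US001", "V0001", 100.0),
                 ("US001", "V0002", 250.0),
                 ("CA001", "V0003", 75.0)])
copied = copy_by_business_unit(src, dst, "PS_VOUCHER", "US001")
```

Copying only the rows for one key is what makes the refresh fast relative to a full database restore: the work is proportional to the data you actually need.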
In most PeopleSoft installations, when such a requirement arises, customers normally take a backup of the entire production database and restore it on top of the development/test environment. Very few PeopleSoft customers have developed a set of SQL/Data Mover scripts, or a custom bolt-on application, that exports the application data from the production database and imports it into the downstream environments. Since development/test environments are not usually secured like the production environment, either approach may breach the security setup or internal control procedures. From an auditing/regulatory standpoint, it is a clear violation for a development/test environment to look exactly like the production environment (say, right after a database refresh) while not being secured to the same standard.
Contact us now to watch a live demo of this unique Selective Data Refresh feature. The live demo will help you understand how this feature can streamline your data refresh procedure, eliminate those time-consuming database refreshes and, most importantly, drastically cut down your production issue response time.
Well, CAPI addresses this very important requirement by delivering a simple, easy-to-use, yet powerful feature with which you can selectively refresh application data between environments while keeping track of when the data was refreshed, what data was refreshed, who refreshed it and, most importantly, who approved the refresh.
With the introduction of this trendsetting feature, to resolve a data issue in the production database you NO LONGER HAVE TO PERFORM any of the following tasks, which you would normally do:
- Shut down the production database to take a cold backup
- Run an export dump of the production database, which can take many hours on a high-volume database
- Restore the database from the backup, or import from the data dump, which can also run for several hours
- Re-apply the security configuration
- Wait for the DBA's availability
Using CAPI's selective data refresh feature, you can do all of the following in no time:
- Refresh selected data, either in bulk (say, all vouchers in a business unit) or a single record (say, a particular voucher)
- Run the selective data refresh online
- Schedule the selective data refresh to run at a particular date and time
- Capture and store the manager's approval
- Track, for audit trail purposes, when the data was refreshed, who authorized the refresh and who performed it
- Let any valid user run the data refresh, without depending on the DBA's availability
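The approval and audit trail points above can be sketched as follows. This is an illustrative pattern only, not CAPI's internal design; the `RefreshAudit` record, `run_refresh` helper and the approver name are all hypothetical.

```python
import getpass
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RefreshAudit:
    # One record per refresh: what was refreshed, who approved it,
    # who performed it, and when it ran.
    refreshed_object: str
    approved_by: str
    performed_by: str = field(default_factory=getpass.getuser)
    refreshed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def run_refresh(refresh_fn, refreshed_object, approved_by, audit_log):
    # Refuse to run without a recorded approver, then execute the
    # refresh and append the audit record.
    if not approved_by:
        raise PermissionError("manager approval is required before refreshing")
    refresh_fn()
    audit_log.append(RefreshAudit(refreshed_object, approved_by))

log = []
run_refresh(lambda: None,
            "PS_VOUCHER where BUSINESS_UNIT = 'US001'",
            "jane.manager", log)
```

Storing the approver alongside the timestamp and the operator's ID is what lets an auditor answer all four questions (what, when, who, approved by whom) from a single record.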
Most importantly, if you normally run data scrambling scripts (say, after a database refresh of your HCM database), CAPI can now run those scripts automatically after the data refresh. This, obviously, helps keep your database refresh procedure SOX/regulatory compliant.
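The post-refresh hook idea can be sketched like this. The hook mechanism and the national-ID scrambling rule are assumptions for illustration; a real scrambling script would run against the database, not an in-memory list.

```python
def refresh_with_post_hooks(refresh_fn, post_hooks):
    # Run the refresh first, then each registered hook in order, so the
    # data is scrambled before anyone can read the copied rows.
    refresh_fn()
    for hook in post_hooks:
        hook()

# Hypothetical scrambling hook: mask all but the last four digits of a
# national-ID column in the freshly copied rows.
rows = []

def copy_rows():
    rows.append({"EMPLID": "KU0001", "NATIONAL_ID": "123-45-6789"})

def scramble_national_ids():
    for row in rows:
        row["NATIONAL_ID"] = "XXX-XX-" + row["NATIONAL_ID"][-4:]

refresh_with_post_hooks(copy_rows, [scramble_national_ids])
```

Chaining the scrambling step to the refresh itself, rather than leaving it as a separate manual task, is what closes the compliance gap: the copied data is never left unscrambled in the target environment.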
CAPI's selective data refresh feature has been meticulously designed to deliver high performance on large volumes of data. The built-in multi-threading process balances the load by distributing it evenly across threads, thereby delivering high throughput.
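The general multi-threading pattern can be sketched as below. CAPI's actual threading model is not documented here; this simply illustrates spreading per-key copy work across a fixed pool of worker threads, with a stand-in worker that pretends each voucher is three rows.

```python
from concurrent.futures import ThreadPoolExecutor

def refresh_in_parallel(keys, copy_one, num_threads=4):
    # Hand each key (e.g. one voucher ID) to the next available thread;
    # keeping every thread busy is what balances the load across them.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return sum(pool.map(copy_one, keys))

# Hypothetical per-key worker that would copy one voucher's rows and
# return how many it moved; here it just reports 3 rows per voucher.
total_rows = refresh_in_parallel([f"V{n:04d}" for n in range(8)],
                                 lambda voucher_id: 3)
```

Partitioning by key rather than by table keeps the threads independent: no two workers ever touch the same voucher, so no cross-thread locking is needed.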