I hit this issue while cloning a Vision instance I had set up on my laptop under VirtualBox to a new VM on my new ESXi host 🙂
If you search Metalink for this, you will get a direct hit: Oracle Note ID 1431581.1. The note explains that the issue is caused by “high row counts in Workflow tables” and, as a solution, recommends running the “Purge Obsolete Workflow Runtime Data” concurrent request.
Wait, what? I am doing adcfgclone, which means I have already copied terabytes of stuff onto my target node, and now you want me to run a concurrent program? That would mean running it on the source and then doing adpreclone.pl again, wouldn’t it?
Oracle is a giant with multiple arms, and it often happens that one doesn’t know what the other is doing. The solution was found in yet another Metalink note, on an unrelated issue.
Workflow Purge Data Collection Script (Doc ID 750497.1). This note provides a downloadable script, atg_supp_fnd_purge.sql. Run it as the apps user and it generates wf_purge.sql.
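Running the script is a one-liner from SQL*Plus. A minimal sketch, assuming you have downloaded the script to your current directory; the apps password here is a placeholder:

```sql
-- Connect as the APPS user (password is a placeholder, substitute your own)
-- and run the diagnostic script downloaded from Doc ID 750497.1.
CONNECT apps/apps_password
@atg_supp_fnd_purge.sql
-- On completion it generates the wf_purge.sql report.
```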
This HTML report lists the items eligible to be purged, along with the commands to do it. Click the hyperlinks that say Purge .. and each one takes you to a table of commands generated for you. There are 3 such sections. Run these commands as apps.
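For reference, the generated commands wrap the standard WF_PURGE API. Here is a hedged sketch of what one such command might look like — the item type below is made up for illustration; use the exact commands from your own report:

```sql
-- Illustrative only: the real commands come from the wf_purge.sql report.
-- WF_PURGE.TOTAL removes obsolete runtime data for one item type;
-- with only the item type passed, it covers all item keys up to the default end date.
BEGIN
  wf_purge.total('WFERROR');  -- 'WFERROR' is a hypothetical item type
END;
/
COMMIT;
```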
After running the purge commands, Oracle recommends that we run yet another concurrent program.
OK, so here I cheated. I looked up the concurrent program definition and wrote a quick anonymous block to run the stored procedure underlying it.
SET SERVEROUTPUT ON
DECLARE
  errbuf  VARCHAR2(400);  -- standard concurrent-program OUT parameters
  retcode VARCHAR2(400);
BEGIN
  wf_oam_metrics.workitemsstatconcurrent(errbuf, retcode);
  dbms_output.put_line(errbuf || ',' || retcode);
  COMMIT;
END;
/
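If you would rather not cheat, the same thing can be done by submitting the concurrent program properly through FND_REQUEST. A sketch under assumptions: the user ID, responsibility IDs, and the program short name below are placeholders I have not verified against the concurrent program definition — look them up on your own instance:

```sql
-- Hedged sketch: submit the concurrent program instead of calling the
-- underlying procedure directly. All IDs and the short name are
-- placeholders; take the real values from the concurrent program definition.
DECLARE
  l_request_id NUMBER;
BEGIN
  -- Initialize an apps session context (user, responsibility, application).
  fnd_global.apps_initialize(user_id => 0, resp_id => 20420, resp_appl_id => 1);
  l_request_id := fnd_request.submit_request(
                    application => 'FND',          -- owning application short name
                    program     => 'WFWITSTATCC',  -- hypothetical program short name
                    description => NULL,
                    start_time  => NULL,
                    sub_request => FALSE);
  COMMIT;  -- the request becomes visible to the manager only after commit
  dbms_output.put_line('Submitted request ' || l_request_id);
END;
/
```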
Now run atg_supp_fnd_purge.sql again and you will see that there are no more purgeable items.
After cleaning up, you can re-run your adcfgclone (if it timed out last time).