/appsutil/scripts folder missing and java.lang.UnsupportedClassVersionError: Bad version number in .class file

I’ve often seen that when we run admkappsutil.pl and copy appsutil.zip from the App node to the DB node, unzipping it does not give us the full set of folders.

In my case, the appsutil/scripts folder was missing entirely.

To fix this, Oracle recommends that we run adbldxml.pl to recreate the context file, followed by adconfig.sh. Unfortunately, running adbldxml.pl failed with java.lang.UnsupportedClassVersionError: Bad version number in .class file.

I saw that the script uses the JRE from appsutil/jre, but that folder didn’t exist at all. So I tried copying the JRE from $ORACLE_HOME/jdk/jre to appsutil/jre. No use.

After a bit of trial & error, I figured out that this needs the JRE from the 10.1.2 home. The rationale: since the script came from the App node, it expects the same version of JRE that the App node uses.
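For the record, the fix boiled down to copying that JRE across. A minimal sketch, assuming the apps tier is reachable over SSH; the host name and the 10.1.2 home path below are from my lab and purely illustrative:

# on the DB node; the source is the 10.1.2 tools Oracle Home on the apps tier (example path)
scp -r applmgr@appsnode:/u01/app/apps/tech_st/10.1.2/jre $ORACLE_HOME/appsutil/jre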

When I re-ran adbldxml.pl after this, the context file was created successfully.


Hands On Lab | EBS 12.2 : Hardware Requirement 1

Most people I know can’t afford, or do not wish, to spend much on hardware. However, in the case of EBS 12.2, there is a bare minimum you need to be able to do online patching and the like live on your lab instance. Here is what I recommend.

A laptop or desktop with the following spec:

Quad Core or Higher CPU  (Intel/AMD)

16 GB RAM

500 GB Free Space in HDD

Let me explain why we need this as a bare minimum:

CPU: We will be using VirtualBox, a type 2 hypervisor, for this lab. It is software that runs on your Windows machine and lets you create a Linux guest VM inside Windows. If you have a dual core system, you can only assign one core to the VM; if you assign more, your host machine (laptop) will suffer and you’ll get frustrated. Note that some steps are CPU bound, like adutlrecomp, and even a 2-core VM will take over 5 hours to run it. Trust me, I’ve been there.

RAM: In my experience, for online patching to work, you will need a minimum of 10 GB RAM for the application node. The DB will need 2 GB, so you need a minimum of 12 GB for the VM. Assuming that you will have Google Chrome running on the host with multiple tabs open (a big memory hog), it’s safe to set aside 3 GB for the host OS.

HDD: This one is straightforward: ~15 GB for the OS and software, 12 GB of swap, ~290 GB for the post-install application, and ~50 GB of stage. During online patching we also need additional space for the ADOP tasks. 500 GB will serve us well while providing a little bit of wiggle room.
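If you script the VM creation, the sizing above translates roughly as follows. A minimal VBoxManage sketch; the VM name EBS122 and the exact sizes are illustrative:

# carve out 3 of 4 cores, 12 GB RAM, and a ~450 GB dynamically allocated disk
VBoxManage createvm --name EBS122 --ostype Oracle_64 --register
VBoxManage modifyvm EBS122 --cpus 3 --memory 12288
VBoxManage createmedium disk --filename EBS122.vdi --size 460800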

 


Quick & Dirty rapidclone cleanup

If you are cloning an instance and it fails on the target, Oracle recommends that you clean up before retrying. Here’s my quick and dirty script that will clean up, assuming that you do not have any other instance in there.

rm -rf /u01/oraInventory/    # central inventory, per /etc/oraInst.loc
cd /u01/app/
rm -rf fs2 fs_ne             # patch file system and non-editioned file system
cd fs1                       # run file system
rm -rf FMW_Home inst         # middleware home and instance top

The above assumes that fs1 is your run FS and that /u01/oraInventory is the inventory location defined in /etc/oraInst.loc.


txkWfClone.sh/txkWfClone.sql slow while running adcfgclone.pl appsTier dualfs

I hit this issue while cloning a Vision instance I had set up on my laptop under VirtualBox to a new VM on my new ESXi host 🙂

If you search Metalink for this, you will get a direct hit: Oracle Note ID 1431581.1. The note explains that the issue is caused by “high row counts in Workflow tables” and, as a solution, recommends that we run the “Purge Obsolete Workflow Runtime Data” concurrent request.
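If you want to confirm the diagnosis before acting on it, a quick row-count check against the main Workflow runtime tables goes a long way. A sketch, run as apps, assuming the standard APPLSYS-owned tables:

-- high counts here line up with the note's "high row counts in Workflow tables"
SELECT COUNT(*) FROM applsys.wf_items;
SELECT COUNT(*) FROM applsys.wf_item_activity_statuses;
SELECT COUNT(*) FROM applsys.wf_item_activity_statuses_h;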

Wait, what? I am doing adcfgclone, which means that I have already copied terabytes of stuff onto my target node, and you now want me to run a concurrent program? This would mean that I run it on the source and then do adpreclone.pl again, wouldn’t it?

Oracle is a giant with multiple arms, and it often happens that one doesn’t know what the other is doing. The solution was found in yet another Metalink note, on an unrelated issue.

Solution:

Workflow Purge Data Collection Script (Doc ID 750497.1). This note gives you a downloadable script, atg_supp_fnd_purge.sql. Run it as apps and it will generate a report:

WF_PURGE.html

This HTML file has details of the items eligible to be purged, along with the commands to do it. Click the hyperlinks that say “Purge ..” and each one takes you to a table of commands generated for you. There are 3 such sections. Run these commands as apps.

After running the purge commands, Oracle recommends that we run yet another concurrent program.

OK, so here I cheated. I looked up the concurrent program definition and then wrote a quick anonymous PL/SQL block to run the stored procedure underlying the concurrent program.

SET SERVEROUTPUT ON
DECLARE
 a VARCHAR2(400);   -- errbuf
 b VARCHAR2(400);   -- retcode
BEGIN
 wf_oam_metrics.workitemsstatconcurrent(a,b);
 dbms_output.put_line(a|| ','|| b);
 commit;
END;
/

Now run atg_supp_fnd_purge.sql again and you will see that there are no more purgeable items.

You can now re-run your adcfgclone (if it had timed out last time) after cleaning up.


ORA-12709: error while loading create database character set

Well, it so happens that I started a recently upgraded 11.2.0.4 DB and got this error. It also meant that the DB wouldn’t mount.

It turns out that the ORA_NLS10 variable was set to a wrong value (I suspect EBS AutoConfig, but that research is still pending). Setting it to the correct value fixed the issue for me. I also updated the EBS environment-setting script with the correct value.

ORA_NLS10=$ORACLE_HOME/nls/data
export ORA_NLS10
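A quick sanity check either way; the variable must resolve to a directory that actually exists in the new home:

echo $ORA_NLS10     # inspect the current value
ls -d $ORA_NLS10    # if this fails, ORA-12709 at startup is the likely outcome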

 

Reference: ORA_NLS (ORA_NLS32, ORA_NLS33, ORA_NLS10) Environment Variables explained. (Doc ID 77442.1)


ORA-29024: Certificate validation failure

Often, as a DBA, you get dragged into a mess that a developer has brought upon themselves. It so happened that a critical project deliverable depended on the ability to fetch data using UTL_HTTP.REQUEST, and the target URL was a secure one (HTTPS). Now, HTTP and HTTPS are all the same, just with an extra S, aren’t they? 🙂

If you try to fetch the URL without doing anything, you will get an ORA-29024. This is expected, as the database does not have the means to validate the certificate presented by https://google.co.in.

Why is this error thrown?

“But it works from my browser when I try it locally,” you say? When you access https://google.co.in via your browser, the browser is able to traverse the certificate chain, pulling details from the web if needed.

In the case of https://www.google.co.in, the browser fetches “Google Internet Authority G2”, an intermediate certificate authority (CA), to validate *.google.com, and “GeoTrust Global CA”, a root CA, to validate the authenticity of “Google Internet Authority G2”.

A database, however, cannot go out to the internet and fetch the intermediate and root CAs needed to establish the chain of trust. For this, Oracle DB relies on wallets. Creating a wallet is straightforward.

Download the Certificates

First, we download the necessary certificates. In the above example, we need the GeoTrust Global CA and Google Internet Authority G2 certificates. I’ll explain this using Chrome as the example.

Open the site in Chrome, click the green padlock icon in the address bar, then click on the Details link.

 

A new pane opens on the right-hand side. Here, click on View Certificate.

On the new window that appears, click on the Certification Path tab.

Now double-click on GeoTrust Global CA and go to the Details tab on the window that opens.

Click on Copy to File and follow the prompts to save the certificate as Base-64 encoded. I saved mine as /tmp/GeoTrustGlobalCA.cer.

Repeat the same for the next certificate in the chain, which is “Google Internet Authority G2”. I saved it as /tmp/GoogleIAG2.cer.

Note that we do not need the actual site certificate.

Create Wallet & Import Certificate.

Login to DB node as the database owner. Source environment.

orapki wallet create -wallet /d01/wallet -pwd Welcome123 -auto_login

Here, /d01/wallet is on a mount point that holds the datafiles. This ensures that if there is a failover, the wallet travels with the datafiles and the functionality is not broken. The auto_login option enables the DB to open the wallet at each startup without us having to manually specify the password.

The next step is to add the certificates that we downloaded earlier to the wallet.

orapki wallet add -wallet /d01/wallet -trusted_cert -cert "/tmp/GeoTrustGlobalCA.cer" -pwd Welcome123
orapki wallet add -wallet /d01/wallet -trusted_cert -cert "/tmp/GoogleIAG2.cer" -pwd Welcome123
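To confirm that both certificates made it in, you can list the wallet contents with the same tool:

orapki wallet display -wallet /d01/wallet -pwd Welcome123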

Note: if your DB runs on Windows (say you have one running on your laptop), you could theoretically use the GUI tool to create the wallet, but I have never been able to get it to work. The orapki command is platform independent, and I always stick to it.

Now if you query, you will be able to fetch details from https://www.google.co.in.
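For example, here is a minimal test from SQL*Plus, using the wallet path from above. UTL_HTTP.REQUEST takes the wallet location as a parameter, and with an auto-login wallet no wallet password is needed:

SELECT utl_http.request('https://www.google.co.in/',
                        NULL,                 -- no proxy
                        'file:/d01/wallet')   -- the wallet we just created
  FROM dual;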

 


ORA-12705: Cannot access NLS data files or invalid environment

I faced this issue while debugging another problem and trying to log in using SQL*Plus. By that point, I had tried various steps to fix the original problem, and I was at a loss to figure out which step caused this.

From the error it was evident that the problem was some NLS issue, likely an invalid environment.

I unset the NLS_LANG environment variable and then attempted to log in. It worked perfectly.

unset NLS_LANG

PS: Do not forget to identify which script has this value incorrectly set and correct it.
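A simple grep goes a long way in hunting down the culprit; the paths below are examples, so adjust them to wherever your environment files live:

grep -rn NLS_LANG $ORACLE_HOME/*.env /etc/profile.d/ 2>/dev/null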


Enable AutoConfig on DB node post out of place DB upgrade

When we upgrade the database and it has to be an out of place upgrade, the existing AutoConfig setup becomes void (as it references the old $ORACLE_HOME). In this case we need to enable AutoConfig in the new DB Oracle Home.

As applmgr (application owner), source the RUN FS environment:

perl $AD_TOP/bin/admkappsutil.pl

This will create appsutil.zip in $INST_TOP/admin/out.
Copy the appsutil.zip to $ORACLE_HOME on the DB node.

As oracle (database owner)

cd $ORACLE_HOME
unzip appsutil.zip
cd appsutil/bin

The below command will create the context file.

perl adbldxml.pl appsuser=apps appspasswd=apps jtop=/u01/app/oracle/VIS/fs1/EBSapps/comn/util/jdk64

Run adconfig to create the scripts/<context_name> folder and others.

cd $ORACLE_HOME/appsutil/bin
sh adconfig.sh contextfile=/u01/db/oracle/VIS/ora11204/appsutil/VIS_ebs122.xml appspass=apps

Verify the AutoConfig log file

Now if you check in $ORACLE_HOME/appsutil, you will see that more folders are present than before (including scripts/<context_name>).


 

Finally, you can now run AutoConfig.
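On the DB tier that means adautocfg.sh, generated under the scripts/<context_name> folder created above (context name taken from my example):

cd $ORACLE_HOME/appsutil/scripts/VIS_ebs122
sh adautocfg.sh appspass=apps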

 


TIP: How to unzip multiple files in Linux in one go

When you want to unzip a single file in Linux, it’s straightforward: unzip file.zip.

What if you have multiple zip files to decompress? Intuition says that unzip *.zip should work. Right? Let’s see.. when I do that, unzip complains: caution: filename not matched.

So, how do you do it? You put the wildcard expression in single quotes so that the shell doesn’t expand it. Lo and behold, we have a winner here. 🙂
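The quoted form looks like this:

unzip '*.zip'    # quoted, so unzip (not the shell) expands the wildcard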

Why do we have to do this in the first place? When we run unzip *.zip unquoted, the shell expands the *.zip part and passes the result to unzip.

So your *.zip becomes 001.zip and 002.zip, which turns the command into unzip 001.zip 002.zip. If you look at the man page for unzip, that translates to “extract the file 002.zip from the archive 001.zip” 🙂 No wonder it said the filename was not matched.

 


Forward, Reverse Cross Edition Triggers and Editioning Views – An integral part of ADOP

I was perplexed for quite a while about how Oracle would handle TABLES in online patching, since TABLE is a non-editionable object type. This was a lingering question, but one I did not look into until we had a patch fail and I had to dig in to troubleshoot.

A base table is always defined in the product schema (say SCOTT, for example). The base table (SCOTT.EMPLOYEE) is wrapped in an Editioning View (EV). The EV has the same name as the base table, but with a # suffixed to it (SCOTT.EMPLOYEE#). The APPS schema has a SYNONYM with the same name as the base table (APPS.EMPLOYEE), but it points to the EV (SCOTT.EMPLOYEE#).

SCOTT.EMPLOYEE ----> SCOTT.EMPLOYEE# ---> APPS.EMPLOYEE
(base_table)         (editioning_view)     (synonym)

 

When we run the ADOP PREPARE phase, EBS creates a child edition in the database. Since TABLE is a non-editionable object type, there is only one copy of the table. An EV, on the other hand, is an editionable object, and you will see two instances of the EV: one in the run edition and the other in the patch edition (the child).

When we add a column to a table, the base table (SCOTT.EMPLOYEE) gets modified. Then the AD_ZD_TABLE.UPGRADE procedure is called in the patch edition, which recreates the patch edition EV so that it now shows the new columns as well. The run edition EV, however, is not modified, and currently connected users continue to see only the columns that existed before the addition.

At this point, if you connect to the run edition and do a “select * from scott.employee#”, you will see 2 columns, while the same query run from the patch edition will show 3 columns.
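You can see this for yourself from SQL*Plus. The patch edition name below is hypothetical; query DBA_EDITIONS for the real ones:

SELECT edition_name FROM dba_editions;        -- list the run and patch editions
ALTER SESSION SET EDITION = V_20230101_0100;  -- hypothetical patch edition name
DESCRIBE scott.employee#                      -- now shows the new columns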

Table Definition

CREATE TABLE EMPLOYEE (id       NUMBER,
                       name     VARCHAR2(40)
                      );

 

EV on RUN Edition

CREATE OR REPLACE EDITIONING VIEW SCOTT.EMPLOYEE# AS
SELECT ID AS ID,
       NAME AS NAME
  FROM EMPLOYEE;

 

EV on PATCH Edition

CREATE OR REPLACE EDITIONING VIEW SCOTT.EMPLOYEE# AS
SELECT ID AS ID,
       FIRST_NAME AS FIRST_NAME,
       LAST_NAME AS LAST_NAME
  FROM EMPLOYEE;

 

Forward Cross Edition Triggers:

These are special triggers defined to handle data entered in the run edition while an ADOP session is in progress. The trigger holds the logic to populate the newly added columns in the base table, which are not yet visible to the application and its users.

 

Let’s assume that we are adding two columns, first_name and last_name, to the EMPLOYEE table such that name = first_name || ' ' || last_name.

 

The forward cross edition trigger will look like:

CREATE OR REPLACE TRIGGER EMPLOYEE_F01
BEFORE INSERT OR UPDATE
ON EMPLOYEE
FOR EACH ROW
FORWARD CROSSEDITION
DISABLE
BEGIN
     :NEW.first_name := TRIM(substr(:NEW.name,1,INSTR(:NEW.name,' ')-1));
     :NEW.last_name  := TRIM(substr(:NEW.name,INSTR(:NEW.name,' ')+1));
END EMPLOYEE_F01;

 

Reverse Cross Edition Triggers:

They do the exact opposite of what forward cross edition triggers do: they handle data entered in the patch (child) edition while the ADOP session is in progress. If we had to create a reverse cross edition trigger for the above example, it would look like:

CREATE OR REPLACE TRIGGER EMPLOYEE_R01
BEFORE INSERT OR UPDATE
ON EMPLOYEE
FOR EACH ROW
REVERSE CROSSEDITION
DISABLE
BEGIN
     :NEW.name := :NEW.first_name || ' ' || :NEW.last_name;
END EMPLOYEE_R01;

 

After the patching, when we run CUTOVER, APPS.EMPLOYEE is dropped and recreated, and it will now point to the (formerly patch edition) EV, which has the new columns visible.

 

Existing Data:

The existing data can be handled in two ways. One is to write a data fix that combs through the data, splits the name field into two, and updates the first_name and last_name fields. Wait, isn’t that what the forward cross edition trigger does? Absolutely, but it fires only on inserts or updates. So what if we do a pseudo update, forcing the trigger to fire?

 

UPDATE EMPLOYEE
   SET ID = ID;
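One caveat for a hand-rolled example like this: both triggers above were created with the DISABLE keyword, so they would have to be validated and enabled before the pseudo update does anything. A minimal sketch:

ALTER TRIGGER scott.employee_f01 ENABLE;  -- it was created DISABLE above
UPDATE scott.employee SET id = id;        -- issue this from the run edition so the forward trigger fires
COMMIT;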

 

References:

https://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_editions.htm#ADFNS0202

https://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_editions.htm#ADFNS918