This was an odd error. The database is Oracle 12.1.0.2 (upgraded over the years through 9i, 10g, and 12c), and I had never seen it before during our migrations. Below are the steps I took to resolve this migration error to PostgreSQL 9.6.3.

I initially started with AWS Data Migration Service and saw this error for two of our tables in the error logs. I then installed Ora2pg to validate the error, and it occurred there as well when I tried to use the export data functionality. My third workaround was dumping the tables to CSV and importing them into PostgreSQL as CSV, which was pretty slow but worked (a rough sketch of that fallback follows).
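For reference, the CSV fallback looked roughly like the sketch below. It is only a sketch: it borrows the table and column names that appear later in this post (trouble_table, access_credential_id, expires), a real export would list every column, and quoting and NULL handling would need to match your data.

-- Oracle side (SQL*Plus): spool the table out as CSV
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET LINESIZE 32767
SET TRIMSPOOL ON
SPOOL trouble_table.csv
SELECT access_credential_id || ',' || TO_CHAR(expires, 'YYYY-MM-DD HH24:MI:SS')
FROM   trouble_table;
SPOOL OFF
-- PostgreSQL side (psql): load the spooled file, e.g.
-- \copy trouble_table (access_credential_id, expires) FROM 'trouble_table.csv' WITH (FORMAT csv)

With the data moved, I still wanted to understand the error itself; the steps below are how I tracked it down.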

  1. Compare the same tables in the test environment; the error is not duplicated.
  2. Load data from prod to test; the error is duplicated, so it is the data in the date columns.
  3. Consult the best repository, Google, and dump the date columns holding the bad data:
    1. I identified the bad data with "expires > to_date('99001231','YYYYMMDD')" and the 0000-00-00 values, so it looks like anything above the year 9900 returns an error on the column. I dumped one column:
    2. select access_credential_id, dump(expires), to_char(expires, 'HH24:MI:SS MM-DD-YYYY')
      from trouble_table where expires > to_date('99991231','YYYYMMDD');
      342051 Typ=12 Len=7: 199,199,12,31,24,60,1 23:59:00 12-31-9999

      The format of the DATE data type is:

      byte 1 - century (excess 100): 199 - 100 = 99
      byte 2 - year (excess 100): 199 - 100 = 99
      byte 3 - month = 12
      byte 4 - day = 31
      byte 5 - hour (excess 1): 24 - 1 = 23
      byte 6 - minute (excess 1): 60 - 1 = 59
      byte 7 - second (excess 1): 1 - 1 = 0

    3. The data appeared valid but still returned an error; any date above TO_DATE('9900-12-31 00:00:00', 'YYYY-MM-DD HH24:MI:SS') caused this issue for my tables.
  4. Contact support, reproduce the error, and provide a dump file:

sys> show parameter max_dump_file_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      1024M

sys> ALTER SESSION SET TRACEFILE_IDENTIFIER = "oraerr181";
Session altered.
sys> ALTER SYSTEM SET EVENTS '1801 trace name errorstack level 12';
System altered.
sys> ALTER SYSTEM SET MAX_DUMP_FILE_SIZE = unlimited;
System altered.
sys> ALTER SYSTEM SET TIMED_STATISTICS = true;
System altered.
sys> select * from trouble_table
  2  where TO_CHAR(expires, 'CC') NOT IN ('18','19','20','21');
select * from trouble_table
*
ERROR at line 1:
ORA-01801: date format is too long for internal buffer

sys> ALTER SYSTEM SET EVENTS '1801 trace name errorstack off';
System altered.
sys> ALTER SYSTEM SET MAX_DUMP_FILE_SIZE = '1024M';
System altered.
sys> show parameter max_dump_file_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      1024M
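To find the trace file to hand over, something along these lines works (the errorstack dump lands in the session's trace file, with the TRACEFILE_IDENTIFIER set above in its name; v$diag_info has been available since 11g):

-- Locate the trace directory and the current session's trace file
SELECT name, value
FROM   v$diag_info
WHERE  name IN ('Diag Trace', 'Default Trace File');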

  1. I provided the data that was producing the issue. Of course, the error would not replicate in their instance, which can be down to a lot of factors; i.e. if I did a clean install of a 12c database and loaded the same data with unchanged parameters, the data would probably be fine. (A spooled version of the query I used to collect the rows follows this list.)
  2. These cases are difficult to diagnose across different environments. One could also provide support with a test case, which is probably the best route to discovering your issue; Oracle has a document on how to do this: Doc ID 235444.1.
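One way to collect those rows for a service request is simply to spool the earlier dump query to a file; a sketch, reusing the table and column names from above (the spool file name is arbitrary):

SPOOL bad_expires_rows.txt
SELECT access_credential_id,
       DUMP(expires)                             AS internal_bytes,
       TO_CHAR(expires, 'HH24:MI:SS MM-DD-YYYY') AS display_value
FROM   trouble_table
WHERE  expires > TO_DATE('99001231', 'YYYYMMDD');
SPOOL OFF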

 

The quick fix in this scenario, if it matches yours, is to update the bad columns with tested values that do not break your application logic. Be sure to pass the change through the development lifecycle so it is QA'd; changes, no matter how small, can impact you down the road:

 

FIX:
Set EXPIRES to NULL for anything greater than TO_DATE('9900-12-31 00:00:00', 'YYYY-MM-DD HH24:MI:SS'), since you are essentially saying "don't expire." It's poor design, but it resolves the error and the bad data.
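As a minimal sketch, assuming the trouble_table/expires names used throughout this post:

-- Null out the dates beyond the 9900-12-31 threshold identified above
UPDATE trouble_table
SET    expires = NULL
WHERE  expires > TO_DATE('9900-12-31 00:00:00', 'YYYY-MM-DD HH24:MI:SS');
COMMIT;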

 
