
model abort due to dvice negative (moved from CICE issues)

dbailey

CSEG and Liaisons
Staff member
I was running UFS S2SWA and the model crashed with a negative dvice error. The crash did not occur during the initialization period; the model crashed after integrating for 15-20 days. I was trying to find the discussion page for CICE to post the issue there, but couldn't find it, so I opened this issue here in the hope of getting some help.

The CICE hash is from NOAA-EMC/CICE
commit 5840cd1 (HEAD, origin/emc/develop, origin/HEAD, emc/develop)
Merge: 6671e32 7df80ba
Author: Denise Worthen <denise.worthen@noaa.gov>
Date: Wed Mar 22 07:43:35 2023 -0400

The following is the error information.

(shift_ice)shift_ice: negative dvice
(shift_ice)boundary, donor cat: 2 3
(shift_ice)daice = 0.000000000000000E+000
(shift_ice)dvice = -5.200365076407644E-068
(shift_ice)puny = 9.999999999999999E-012
(shift_ice)vicen(nd) = -5.200365076407644E-068
(icepack_warnings_setabort) T :file icepack_itd.F90 :line 549
(shift_ice) shift_ice: negative dvice
(icepack_warnings_aborted) ... (shift_ice)
(icepack_warnings_aborted) ... (linear_itd)

The printouts of "puny" and "vicen(nd)" were added by me.
Do any of these values make sense?

I have attached the code below:

if (donor(n) > 0 .and. &
    dvice(n) <= -puny*vicen(nd)) then
   write(warnstr,*) ' '
   call icepack_warnings_add(warnstr)
   write(warnstr,*) subname, 'shift_ice: negative dvice'
   call icepack_warnings_add(warnstr)
   write(warnstr,*) subname, 'boundary, donor cat:', n, nd
   call icepack_warnings_add(warnstr)
   write(warnstr,*) subname, 'daice =', daice(n)
   call icepack_warnings_add(warnstr)
   write(warnstr,*) subname, 'dvice =', dvice(n)
   call icepack_warnings_add(warnstr)
   write(warnstr,*) subname, 'puny =', puny
   call icepack_warnings_add(warnstr)
   write(warnstr,*) subname, 'vicen(nd) =', vicen(nd)
   call icepack_warnings_add(warnstr)
   call icepack_warnings_setabort(.true.,__FILE__,__LINE__)
   call icepack_warnings_add(subname//' shift_ice: negative dvice')
endif
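
To sanity-check those numbers, here is a minimal standalone sketch (my own, not CICE code; double precision assumed) that plugs the printed values into the same test:

program check_dvice_test
  ! Standalone sketch: evaluate the abort test from icepack_itd.F90
  ! using the values from the abort message above.
  implicit none
  integer, parameter :: dbl = selected_real_kind(13)
  real(dbl), parameter :: puny = 1.0e-11_dbl
  real(dbl) :: dvice, vicen_nd

  dvice    = -5.200365076407644e-68_dbl   ! dvice from the abort message
  vicen_nd = -5.200365076407644e-68_dbl   ! vicen(nd) from the abort message

  ! Same test as in shift_ice (donor(n) > 0 assumed true here).
  ! Because vicen_nd is negative, -puny*vicen_nd is a tiny POSITIVE
  ! number, so any negative dvice satisfies the condition.
  if (dvice <= -puny*vicen_nd) then
     print *, 'abort condition true: -puny*vicen_nd =', -puny*vicen_nd
  else
     print *, 'abort condition false'
  end if
end program check_dvice_test

In other words, the abort here really reflects vicen(nd) itself being (very slightly) negative, not a large negative volume shift.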


Thanks,

Bing
 

dbailey

CSEG and Liaisons
Staff member
I think the question here is how vicen becomes negative. vicen is the volume of ice per unit area.

Bing
 

dbailey

CSEG and Liaisons
Staff member
I warm start the model with restart files from f03. The forecast length is set to 35 days, with a 300 s timestep.
The crash occurred after 15-20 days of model integration, not during the initialization period.

Bing
 

dbailey

CSEG and Liaisons
Staff member
f03 means a restart from forecast hour 03. I checked the ice restart file; the vicen values range from 0 to 8.36596.

Bing
 

dbailey

CSEG and Liaisons
Staff member
More details: we have a continuous coupled run that is nudged to ERA5 in the atmosphere and ORAS5 in the ocean. Sea-ice concentration, thickness, and snow depth are updated once a day at 9Z to the ORAS5 sea-ice using JEDI software.
What Bing uses for his initial conditions are restart files for 3Z the following day, so there have been 18 hours of model integration with no additional input to the sea ice or ocean, while the atmosphere model continues to be nudged with ERA5.
Bing then launches a free forecast, and the model crashes in the third week.

Phil Pegion
 

dbailey

CSEG and Liaisons
Staff member
"Sea-ice concentration, thickness, and snow depth are updated once a day at 9Z to the ORAS5 sea-ice using JEDI software."

This is likely the problem. You need to look at the sea-ice fields in the restart; they are likely messing up the subgrid-scale category distribution. As I said, you should make plots of aicen, vicen, and vicen/aicen.
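
As a minimal sketch of that kind of check (my own illustration, not CICE code; the array names aicen/vicen and the puny threshold follow the snippets in this thread, and the data here are synthetic), one can scan each category for negative volumes and print the implied mean thickness vicen/aicen:

program check_itd_state
  ! Sketch: per-point, per-category consistency checks on aicen/vicen.
  ! In practice these arrays would be read from the ice restart file;
  ! here they are filled with synthetic data, with one bad value
  ! injected to show the flagging.
  implicit none
  integer, parameter :: dbl = selected_real_kind(13)
  integer, parameter :: ncat = 5, npts = 4
  real(dbl), parameter :: puny = 1.0e-11_dbl
  real(dbl) :: aicen(npts,ncat), vicen(npts,ncat), hin
  integer :: i, n

  call random_number(aicen)
  call random_number(vicen)
  aicen = 0.2_dbl*aicen              ! keep concentrations plausible
  vicen(2,3) = -1.0e-30_dbl          ! injected bad value

  do n = 1, ncat
     do i = 1, npts
        if (vicen(i,n) < 0.0_dbl) then
           print *, 'negative vicen at point', i, 'cat', n, ':', vicen(i,n)
        else if (aicen(i,n) > puny) then
           hin = vicen(i,n)/aicen(i,n)   ! mean category thickness
           ! hin far outside the category's thickness bounds suggests
           ! the DA update has broken the subgrid-scale distribution
           print *, 'point', i, 'cat', n, 'hin =', hin
        end if
     end do
  end do
end program check_itd_state

If the DA update is the culprit, mapping where hin falls outside each category's bounds should line up with the crash locations.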
 

bingemc

Bing Fu
New Member
@dbailey Thanks for helping to move the post here.
Do you think a problem in the initial-condition fields can persist for more than two weeks of model integration? Note that the crash occurred after 15-20 days of free forecast.
I have the crash locations for one of the cases. It is an ensemble run, and 9 of 11 members crashed.
Here are the crash locations for the 9 members:
lon           lat
-32.62492371  -53.90536274
-16.12496229  -54.05238264
-22.12494826  -54.49033539
-18.12495762  -55.20995378
-23.12494592  -54.49033539
-26.37493832  -52.55871645
-24.374943    -54.92364639
-25.37494066  -52.71043954
-20.62495177  -55.3523403
 

dbailey

CSEG and Liaisons
Staff member
It has to be the initialization. Sometimes it takes a while for the ITD to adjust. It also depends on the time of year, i.e. whether the ice is melting or growing. What happens if you use an unmodified initial state? You can try reducing the timestep, but I suspect this is still an initialization problem.
 

npbarton

Neil Barton
New Member
@dbailey Is there a threshold for how similar the thicknesses (vicen/aicen) can or should be between categories in the IC?

I don't see any categories having the same thickness at a grid point, but the differences can be less than 0.001.

Thanks
 

dbailey

CSEG and Liaisons
Staff member
That is definitely a pretty small difference. I don't really have a good rule of thumb for this; the ITD will usually work itself out. The relevant code (CICE5) is in shift_ice:

if (dvice(ij,n) < c0) then
   if (dvice(ij,n) > -puny*vicen(i,j,nd)) then
      daice(ij,n) = c0   ! shift no ice
      dvice(ij,n) = c0
   else
      dvice_negative = .true.
   endif
endif

Where puny is 1.0e-11, so the tolerance is very small. However, if a category starts out too close to, say, its upper thickness boundary and there is thermodynamic growth, the growth can bump the thickness past the boundary. Reducing the timestep can sometimes resolve this as well. We tried to "fix" this, but it often comes up when doing data assimilation across categories.
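
As a made-up numeric illustration of that near-boundary case (the boundary value and growth rate below are assumptions, not model values): starting a category's mean thickness just below a boundary, a constant growth rate overshoots the boundary by an amount proportional to the timestep, which is why a smaller timestep sometimes avoids the problem.

program boundary_overshoot
  ! Made-up illustration: mean thickness just below an assumed category
  ! boundary, plus constant thermodynamic growth over one timestep.
  implicit none
  integer, parameter :: dbl = selected_real_kind(13)
  real(dbl), parameter :: hin_bound = 1.0_dbl      ! assumed boundary (m)
  real(dbl), parameter :: growth    = 2.0e-6_dbl   ! assumed growth rate (m/s)
  real(dbl) :: hin

  hin = hin_bound - 1.0e-4_dbl                     ! start just below the boundary

  ! Overshoot past the boundary after one step, for two timesteps:
  print *, 'dt = 1200 s: overshoot =', hin + growth*1200.0_dbl - hin_bound
  print *, 'dt =  300 s: overshoot =', hin + growth*300.0_dbl  - hin_bound
end program boundary_overshoot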
 