Installing and running Serpent

Compiling the source code

Serpent is written in the C programming language and compiled using the standard Make utility. The compilation (of version 2.2.0) should look something like:

~/src/serpent2$ make
gcc -Wall -ansi -ffast-math -O3 -Wunused -DOPEN_MP -fopenmp -Wno-unused-but-set-variable -Wno-deprecated-declarations -pedantic -fpic -c addbranching.c
gcc -Wall -ansi -ffast-math -O3 -Wunused -DOPEN_MP -fopenmp -Wno-unused-but-set-variable -Wno-deprecated-declarations -pedantic -fpic -c addbuf.c
gcc -Wall -ansi -ffast-math -O3 -Wunused -DOPEN_MP -fopenmp -Wno-unused-but-set-variable -Wno-deprecated-declarations -pedantic -fpic -c addbuf1d.c
...
gcc -Wall -ansi -ffast-math -O3 -Wunused -DOPEN_MP -fopenmp -Wno-unused-but-set-variable -Wno-deprecated-declarations -pedantic -fpic -c zaitoiso.c
gcc -Wall -ansi -ffast-math -O3 -Wunused -DOPEN_MP -fopenmp -Wno-unused-but-set-variable -Wno-deprecated-declarations -pedantic -fpic -c zdis.c
gcc -Wall -ansi -ffast-math -O3 -Wunused -DOPEN_MP -fopenmp -Wno-unused-but-set-variable -Wno-deprecated-declarations -pedantic -fpic -c zonecount.c
gcc addbranching.o addbuf.o addbuf1d.o ... zaitoiso.o zdis.o zonecount.o -lm -fopenmp -lgd -o sss2
Serpent 2 Compiled OK.
~/src/serpent2$

The compilation should not produce any error or warning messages, and it should produce an executable named "sss2".

Compiler options

There are a number of compiler options that can be invoked by editing the Makefile. When recompiling, it is recommended to run "make clean" before "make".

Specific compilation options for Linux/macOS are detailed in the Makefile.
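
For example, a typical rebuild after editing the Makefile looks like:

~/src/serpent2$ make clean
~/src/serpent2$ make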

GD Graphics library

If the GD open source graphics library is not available on the system, the line:

LDFLAGS += -lgd

must be commented out and the line:

CFLAGS += -DNO_GFX_MODE

added. For more information, see the GD Graphics library website. Also note that there are several threads addressing the installation of these libraries at the Serpent discussion forum.
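
A minimal sketch of how the edited Makefile section might look (the exact placement of these lines differs between Makefile versions):

# graphics output disabled because the GD library is not installed
CFLAGS  += -DNO_GFX_MODE
#LDFLAGS += -lgd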

Debug mode

The code can be compiled in debug mode by adding the line

CFLAGS += -DDEBUG

in the Makefile. This activates various pointer and value checks during the calculation. The code runs slower, but errors are more likely to be caught before they induce unexpected results. In case of a crash or any other unexpected behavior, the best way to proceed is to recompile the code in debug mode and repeat the calculation using the same random number seed (the "-replay" command line option). For more information, see the Pitfalls and troubleshooting section.
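
For example, assuming the problem occurred in a run of an input file named "input" (the file name is hypothetical), the debug-mode re-run might look like:

~/src/serpent2$ make clean
~/src/serpent2$ make
~/src/serpent2$ sss2 input -replay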

Parallel calculation using MPI

Parallel calculation using MPI also requires changes in the Makefile. The calculation mode is activated by adding the line:

CFLAGS += -DMPI

The compiler needs to know where to look for the associated libraries. In some installations this can be accomplished simply by changing the compiler from "gcc" to "mpicc".
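
A hedged sketch of the corresponding Makefile edits, assuming the compiler variable is named CC (the variable name may differ between Makefile versions):

CC       = mpicc
CFLAGS  += -DMPI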

Serpent 2.2.0 includes an additional flag for assisting debugging in parallel calculations by redirecting the MPI outputs to sss2_output_mpiid_[mpiid].txt:

CFLAGS += -DMPI_DEBUG

Making backups

A time-stamped backup can be made using the "bk" option:

~/src/serpent2$ make bk
zip "`date +'backup/serpent_2.2.0_%y%m%d%H%M.zip'`" *.c *.h Makefile versions.txt
  adding: addbranching.c (deflated 91%)
  adding: addbuf.c (deflated 73%)
  adding: addbuf1d.c (deflated 70%)
  ...
  adding: surface_types.h (deflated 74%)
  adding: Makefile (deflated 83%)
  adding: versions.txt (deflated 82%)
cp sss2 "`date +'backup/serpent_2.2.0_%y%m%d%H%M'`"
cp "`date +'backup/serpent_2.2.0_%y%m%d%H%M.zip'`" ./serpent2.zip
chmod a-w "`date +'backup/serpent_2.2.0_%y%m%d%H%M.zip'`"
chmod a-w "`date +'backup/serpent_2.2.0_%y%m%d%H%M'`"
~/src/serpent2$

The source code is zip-compressed into the "backup" subdirectory, which must exist in the source directory. The executable is copied into the same directory and renamed using the same time stamp, so that earlier versions can be run without re-compiling the source code.
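
If the subdirectory does not exist yet, it can simply be created before making the first backup:

~/src/serpent2$ mkdir backup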

Installing updates

Updates are sent to registered users in tar.gz-compressed format by e-mail. The installation of updates is carried out by overwriting the existing source files:

~/src/serpent2$ make bk
...
~/src/serpent2$ tar -xzvf sssup2.2.X.tar.gz
...
~/src/serpent2$ make clean
...
~/src/serpent2$ make
...
~/src/serpent2$

It is always good practice to make a backup of the source code before installing the update. It is also important to realize that any modifications made in the source code may be lost when the updated files are installed. The updates are cumulative in the sense that update "sssup2.2.X+1.tar.gz" contains all the modifications in update "sssup2.2.X.tar.gz" and earlier.

The code version can be checked after the compilation using the "-version" command line option:

~/src/serpent2$ sss2 -version

  _                   .-=-.           .-=-.          .-==-.       
 { }      __        .' O o '.       .' O o '.       /  -<' )--<   
 { }    .' O'.     / o .-. O \     / o .-. O \     /  .---`       
 { }   / .-. o\   /O  /   \  o\   /O  /   \  o\   /O /            
  \ `-` /   \ O`-'o  /     \  O`-'o  /     \  O`-`o /             
   `-.-`     '.____.'       `._____.'       `.____.'                        

Serpent 2.2

A Continuous-energy Monte Carlo Reactor Physics Burnup Calculation Code

 - Version 2.2.0 (May 5, 2022) -- Contact: serpent@vtt.fi

 - Reference: J. Leppanen, et al. "The Serpent Monte Carlo code: Status,
              development and applications in 2013." Ann. Nucl. Energy,
              82 (2015) 142-150.

 - Compiled May 5 2022 08:05:56

 - MPI Parallel calculation mode not available

 - OpenMP Parallel calculation mode available

 - Geometry and mesh plotting available

 - Default data path set to: "/XS"

 - Full command used to run Serpent:

   sss2 -version
 

Simulation completed.
------------------------------------------------------------
Thu Aug 18 19:14:10 2022  (errors: 0, warnings: 0, notes: 0)

~/src/serpent2$

This also prints information about the availability of parallel calculation, geometry and mesh plotting and the default data path.

Setting up the data libraries

Serpent reads continuous-energy cross sections from ACE format data libraries[1]. The directory file is different from the "xsdir" file used by MCNP, and the conversion between the two formats is made using the "xsdirconvert.pl" Perl script:

~/xsdata/jeff311$ xsdirconvert.pl sss_jeff311u.xsdir > sss_jeff311u.xsdata
~/xsdata/jeff311$

The output is written into a new ".xsdata" file, which is linked to the calculation using the set acelib input parameter.
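
A minimal sketch of the corresponding input card, with the path given only as a placeholder:

set acelib "/path/to/xsdata/jeff311/sss_jeff311u.xsdata"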

The conversion script checks that each ACE file listed in the original "xsdir" exists and is located according to the data path defined in the "xsdir" file. The data path (first line) in the "xsdir" file must therefore be edited before running the script. If all file paths are set correctly, the "xsdir" file should be directly usable by MCNP, and the conversion to Serpent format should work without problems. The "xsdir" file should look like:

datapath = [...]
atomic weight ratios
[...]
directory
[...]

The library directory includes two entries for each nuclide, one using the standard MCNP convention (ZA.id) and another one using the element symbol and the isotope mass (e.g. 92235.03c and U-235.03c for U-235). Either name can be used to identify the nuclide in the material compositions.

The script assumes that nuclides in isomeric states are identified by setting the third digit of the ZA to 3 (e.g. 61348 for Pm-148m or 95342 for Am-242m). If another convention is used, the isomeric state number (5th entry) must be set manually.

Radioactive decay and fission yield data are read in standard ENDF format[2], which requires no modifications. Photon transport calculations require additional physics data (photon_data.zip), which can also be downloaded separately. The file path for this data is set using the set pdatadir input option.
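
A minimal sketch of the corresponding input cards (the paths are placeholders; the decay and fission yield libraries are linked with the set declib and set nfylib cards referred to below):

set declib   "/path/to/sss_jeff311.dec"
set nfylib   "/path/to/sss_jeff311.nfy"
set pdatadir "/path/to/photon_data"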

The default search path for cross section, decay, fission yield and isomeric branching data can be defined by setting the environment variable SERPENT_DATA. This allows accessing files in this directory using the file name only. The default cross section library can be defined using the variable SERPENT_ACELIB.

Serpent 2.2.0 adds the possibility of defining default decay and induced-fission yield data libraries using the variables SERPENT_DECLIB and SERPENT_NFYLIB, respectively.

When any of these are set, the corresponding library path (set acelib / set declib / set nfylib) is not required for running the simulation. The environment variables can be set automatically in the ".bashrc" initialization file (or similar). For example:

SERPENT_DATA="/xs"
export SERPENT_DATA

SERPENT_ACELIB="sss_jeff311u.xsdata"
export SERPENT_ACELIB

SERPENT_DECLIB="sss_jeff311.dec"
export SERPENT_DECLIB

SERPENT_NFYLIB="sss_jeff311.nfy"
export SERPENT_NFYLIB

Distributed cross section and nuclear data libraries

Cross section and nuclear data libraries required for running Serpent are freely available online at VTT ShareFile.

Running Serpent

Serpent is run from the Linux command line interface. The general syntax is:

sss2 INPUT [ options ]

Where INPUT is the input file name and the available options are:

-casematrix  : run calculation with casematrix input, see detailed description
-checkstl N M  : check for holes and errors in STL geometries by sampling M directions in N points (see detailed description)
-checkvolumes N  : calculate Monte Carlo estimates for material volumes by sampling N random points in the geometry (see detailed description)
-coe  : run only restarts in coefficient calculation
-comp MAT [ ID ]  : print pre-defined composition of material MAT that can be copy-pasted into the input file (see detailed description)
-disperse  : generate random particle or pebble distribution files for HTGR calculations (see detailed description)
-elem SYM DENS [ ID ]  : decomposes natural element identified by symbol SYM at density DENS into its constituent isotopes (see detailed description)
-his  : run only burnup history in coefficient calculation
-input  : copy all inputs in a single file INPUT.input
-ip  : launch interactive (command-line) plotter
-matpos OPTIONS  : return material name at given position (see detailed description)
-mix  : decompose mixtures in file (see detailed description)
-mpi N  : run simulation in MPI mode using N parallel tasks
-nofatal  : ignore fatal errors
-noplot  : ignore geometry plots
-norun  : stop before running the transport simulation
-omp M  : run simulation in OpenMP mode using M parallel threads (the keyword "max" in place of M launches the run with the maximum number of threads available in the system)
-plot  : stop after geometry plot
-port N  : port to connect to for coupled calculations
-qp  : quick plot mode (ignore overlaps)
-rdep [ N ]  : read binary depletion file(s) from previous calculation and print new output according to inventory list. N restart files from domain decomposition.
-replay  : run simulation using random number seed from a previous run
-trackfile N  : write particle tracks in file
-tracks N  : draw N particle tracks in the geometry plots or invoke track plot animation
-version  : print version information and exit

Most of the input options are self-explanatory; the rest are described below.

Running parallel calculations

Serpent supports both MPI and OpenMP technologies for parallel computing. When a single computer (or a single calculation node in an HPC cluster) is used, building with MPI support is optional. OpenMP starts multiple calculation threads under a single process, and the memory reserved by the process is shared between these threads. OpenMP parallelization is therefore limited to a single computer, since a single process cannot be divided across multiple computers. MPI parallelization relies on multiple calculation processes, which may or may not run on the same calculation node and which communicate with each other.

Each MPI process reserves its own memory, which in practice means that every process holds its own copy of the nuclear data and all other data tables. The nuclear data is the most problematic, because a large amount of memory may be needed for each copy. The size of the other data tables is largely determined by the neutron population used: doubling the number of MPI processes roughly halves this type of memory usage per process, so the net effect is small. These facts suggest some best practices.

For small cases, a single computer with OpenMP parallelization is enough. For larger cases, an MPI+OpenMP hybrid approach is used: typically a single MPI process is started on each calculation node, with as many OpenMP threads as there are CPU cores available on the node. If the very last bit of performance needs to be squeezed out, then, depending on the node hardware, one MPI process can be started per CPU socket. In this case the number of OpenMP threads should match the number of CPU cores available in a single processor, and the threads should be pinned to the corresponding socket to keep them from migrating between processors, which would lead to unnecessary data transfers. The drawback of this approach is that each process holds a copy of the same nuclear data, so some memory is wasted.

Simultaneous multithreading (such as Intel Hyper-Threading) should generally increase performance. The delta-tracking routine is memory-intensive, and because it needs different cross sections throughout the simulation, the CPU suffers frequent cache misses; additional hardware threads help hide this latency, so computing performance should increase.

Basic parallel run with OpenMP

sss2 -omp <amount of threads> input

For example, assume a computer with two sockets (two physical CPUs) and 12 CPU cores in each socket (as reported by the operating system), for a total of 24 cores:

sss2 -omp 24 input

Parallel run with MPI+OpenMP hybrid

mpirun -np <amount of processes> sss2 -omp <amount of threads per process> input

An example with 2 computers, each with 2 sockets and 12 cores per socket (2 MPI processes with 24 OpenMP threads each, 48 cores in total):

mpirun -np 2 sss2 -omp 24 input

Possibly higher performance, at the expense of memory, can be obtained with one MPI process per socket (4 MPI processes with 12 OpenMP threads each, 48 cores in total):

mpirun -np 4 --bind-to socket sss2 -omp 12 input

Parallel run with HPC job schedulers

HPC clusters and supercomputers run job schedulers. Below is a basic job script example for SLURM using 16 calculation nodes with 16 cores each, for a total of 256 cores. One MPI process is started on each socket (32 MPI processes with 8 OpenMP threads each). Adapt this SLURM example to your needs:

#!/bin/csh
###
### SLURM job script example for parallel Serpent run
###

## name of your job
#SBATCH -J exampleinput

## system error message output file
#SBATCH -e exampleinput.std.err.%j

## system message output file
#SBATCH -o exampleinput.std.out.%j

## send mail after job is finished
#SBATCH --mail-type=end
#SBATCH --mail-user=example.user@domain.end

## how long a job takes, wallclock time d-hh:mm:ss
#SBATCH -t 1-00:00:00

## name of queue 
#SBATCH -p phase2

## the number of processes (number of cores)
## number of nodes
#SBATCH -N 16

## number of cores, select most appropriate option
##SBATCH --ntasks-per-node=1
#SBATCH --ntasks-per-socket=1

## how many OpenMP threads are used by each MPI process
#SBATCH --cpus-per-task=8

## a per-process (soft) memory limit
## limit is specified in MB
## example: 1 GB is 1000
## amount of memory per CPU core
#SBATCH --mem-per-cpu=7900

## load modules
## select Serpent version to be used
module load serpent/openmpi-3.1.2-gcc/gcc-4.8.5/2.1.30

## change directory
cd /home/exampleuser/example

## run my MPI executable
## only name of input should require modification
srun sss2 -omp $SLURM_CPUS_PER_TASK input
#mpirun -np $SLURM_NTASKS --bind-to socket sss2 -omp $SLURM_CPUS_PER_TASK input
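
The script is then submitted to the scheduler; for example, assuming it was saved as run_serpent.sh (the file name is hypothetical):

sbatch run_serpent.sh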

Running casematrix calculations

This section refers to input given using the casematrix input card.

The general syntax is

sss2 -casematrix <case_name> <his_idx> <coe_idx> <input>

where <case_name> refers to the CASE_NAME input on the casematrix card. Only the casematrix card with this name is run with this input.

<his_idx> indicates which history variation <HIS_BRhis_idx> should be run.

The possible settings for <his_idx> and <coe_idx> are:

<his_idx> = -1, <coe_idx> > 0  : The momentary variation (restart) <coe_idx> is calculated. This is only applicable for zero-burnup variation branches and does not require a restart file to be available.
<his_idx> > 0, <coe_idx> = -1  : Only the burnup calculation is run to create a restart file.
<his_idx> > 0, <coe_idx> = 0   : The burnup calculation is run to create a restart file, and all momentary variations (restarts) are calculated.
<his_idx> > 0, <coe_idx> = 0, with the additional -coe command line option  : All momentary variations (restarts) are calculated. In this case, the burnup calculation must already have been run so that a restart file is available.
<his_idx> > 0, <coe_idx> > 0   : Only the momentary variation (restart) <coe_idx> is calculated. In this case, the burnup calculation must already have been run so that a restart file is available.
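
For example, a hedged sketch of a run that creates the restart file for history variation 1 and calculates all of its momentary variations (the case and input file names are hypothetical):

sss2 -casematrix mycase 1 0 input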

Miscellaneous input options

Monte Carlo volume calculation routine

Incorrect material volumes can lead to a number of problems in burnup calculation and the normalization of reaction rates. To deal with these issues Serpent provides a Monte Carlo based routine for checking and calculating the volumes of complicated material zones. The volumes are evaluated by sampling a large number of random points or tracks in the geometry, and the estimate represents the exact volumes seen by the particles during the transport simulation. Checking the material volumes is also a good means to confirm that the geometry was properly set up.

The Monte Carlo based volume calculation routine is invoked by the command line option -checkvolumes, followed by the number of random samples and the name of the input file. Two algorithms are available, based on random points (default, positive number of samples) or tracks (invoked by entering the number of samples as a negative value). The track-based algorithm runs slower, but may result in better statistics for optically thin regions. The calculation also works in OpenMP parallel mode.

The usage is best illustrated by an example:

sss2 -omp 24 -checkvolumes 10000000 bwr

...

Calculating material volumes by Monte Carlo...

Estimated calculation time: 0:00:09
Realized calculation time:  0:00:08

Volumes (2D problem, the values are in cm2) :

Material fuel1         : 5.9038E-01 5.9015E-01 (0.00562) : -0.00038   (100.0 % den.)
Material fuel2         : 2.3615E+00 2.3635E+00 (0.00366) :  0.00084   (100.0 % den.)
Material fuel3         : 3.5423E+00 3.5525E+00 (0.00281) :  0.00288   (100.0 % den.)
Material fuel4         : 1.1808E+00 1.1795E+00 (0.00436) : -0.00108   (100.0 % den.)
Material fuel5         : 1.1808E+01 1.1798E+01 (0.00154) : -0.00081   (100.0 % den.)
Material fuel6         : 2.8338E+01 2.8316E+01 (0.00090) : -0.00078   (100.0 % den.)
Material fuel7         : 5.9038E+00 5.9094E+00 (0.00202) :  0.00096   (100.0 % den.)
Material clad          : 1.6336E+01 1.6340E+01 (0.00111) :  0.00026   (100.0 % den.)
Material box           : 0.0000E+00 1.3505E+01 (0.00128) :      N/A * (100.0 % den.)
Material cool          :        N/A 9.5257E+01 (0.00036) :      N/A * (100.0 % den.)
Material moder         : 0.0000E+00 5.5451E+01 (0.00055) :      N/A * (100.0 % den.)

Volumes written in file "bwr.mvol"

The volumes are printed separately for each material. If automated depletion zone division is used, the volumes are also printed for each sub-zone. The first column of results shows the volume actually used in the calculation and the next column the Monte Carlo estimate together with the associated relative statistical error. The next column gives the difference between the used and estimated volume, accompanied by '*' if the difference is suspiciously large compared to statistical accuracy (note that the results are random variables so it is possible that the estimate is off by chance). The last column shows the relative density compared to the input value in the material card. The value can be below 100% if the multi-physics interface is used. It should also be noted that in 2D geometries the calculated volumes are actually cross-sectional areas in cm2.

The volume calculation routine also prints an output file [input].mvol, which for the previous example is:

% --- Material volumes:

% Produced Fri Sep 16 09:18:39 2016 by MC volume calculation routine by
% sampling 10000000 random points in the geometry.

set mvol

fuel1           0 5.90149E-01 % (0.006)
fuel2           0 2.36348E+00 % (0.004)
fuel3           0 3.55245E+00 % (0.003)
fuel4           0 1.17947E+00 % (0.004)
fuel5           0 1.17979E+01 % (0.002)
fuel6           0 2.83158E+01 % (0.001)
fuel7           0 5.90941E+00 % (0.002)
clad            0 1.63403E+01 % (0.001)
box             0 1.35046E+01 % (0.001)
cool            0 9.52565E+01 % (0.000)
moder           0 5.54508E+01 % (0.001)

The set mvol card is one of the options for defining material volumes; the output from the volume checker can be copy-pasted into the input, or the entire file can be linked using the include command. The volume calculation routine can also be run automatically using the set mcvol input option.
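
For example, a minimal sketch of linking the file produced above into the main input (the file name follows the example output):

include "bwr.mvol"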

Checking for holes in STL geometries

Some consistency checks can be performed on STL type geometries using the -checkstl command line option. A number of random points are sampled in the universes containing the STL solids, and a number of rays are started from each point. If the surfaces are intact, each ray should give the same result. For example, the test performed for the Stanford critical bunny gives:

sss2 -checkstl 10000 1000 bunny
...
Testing STL geometries...

Testing universe 1 by sampling 10000 directions in 1000 points:

Consistency test passed in all random points.

If the success rate is not 100%, it is possible that the model still works, but a complete failure most likely means that the solid boundary is not water-tight.

Particle disperser routine

The command line option -disperse launches the automated particle disperser routine, which prompts the user with several questions and produces a particle distribution file that can be used with the pbed input card. For example:

sss2 -disperse

Random particle distribution file generator launched...

Enter volume type: 1 = sphere
                   2 = cylinder
                   3 = cube
                   4 = annular cylinder
                   5 = cuboid
                   6 = parallelepiped

1

Enter sphere radius (cm): 2.5
Enter number of particles (> 1) or packing fraction (< 1): 0.1
Enter particle radius (cm): 0.0455
Enter particle universe: 1

More particles? (y/n): n

Enter file name: part.inp

Use grow and shake algorithm? (y/n): n

Randomizing 16588 particles for initial sampling...

Overlapping particles:   4747 / 16588 pf = 0.07138 / 0.10000
Overlapping particles:   2541 / 16588 pf = 0.08468 / 0.10000
Overlapping particles:   1462 / 16588 pf = 0.09119 / 0.10000
Overlapping particles:    886 / 16588 pf = 0.09466 / 0.10000
Overlapping particles:    549 / 16588 pf = 0.09669 / 0.10000
Overlapping particles:    334 / 16588 pf = 0.09799 / 0.10000
Overlapping particles:    200 / 16588 pf = 0.09880 / 0.10000
Overlapping particles:    139 / 16588 pf = 0.09916 / 0.10000
Overlapping particles:    101 / 16588 pf = 0.09939 / 0.10000
Overlapping particles:     68 / 16588 pf = 0.09959 / 0.10000
Overlapping particles:     45 / 16588 pf = 0.09973 / 0.10000
Overlapping particles:     35 / 16588 pf = 0.09979 / 0.10000
Overlapping particles:     27 / 16588 pf = 0.09984 / 0.10000
Overlapping particles:     18 / 16588 pf = 0.09989 / 0.10000
Overlapping particles:     10 / 16588 pf = 0.09994 / 0.10000
Overlapping particles:      3 / 16588 pf = 0.09998 / 0.10000
Overlapping particles:      2 / 16588 pf = 0.09999 / 0.10000
Overlapping particles:      1 / 16588 pf = 0.10000 / 0.10000

Writing final distribution to file "part.inp"...

16588 particles, packing fraction = 0.10000

The "grow and shake" algorithm[3] is intended for high packing fractions.
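
The resulting distribution file (here "part.inp") is linked to the geometry using the pbed input card mentioned above. A hedged sketch, with purely hypothetical universe numbers (check the pbed card documentation for the exact syntax):

pbed 10 20 "part.inp"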

Standard compositions

Serpent has a built-in list of more than 350 pre-defined material compositions. These materials cannot be used directly in the input, but their compositions can be printed and copy-pasted into the input file. The materials are numbered, and the full list is printed with the option -comp list:

sss2 -comp list

List of available material compositions:

  1  "A-150 Tissue-Equivalent Plastic (A150TEP)"
  2  "Acetone"
  3  "Acetylene"
  4  "Air (Dry, Near Sea Level)"
  5  "Alanine"
  6  "Aluminum"
  7  "Aluminum Oxide"
  8  "Aluminum, Alloy 2024-O"
...
353  "800H"

The output is printed in Serpent material card format with the mass density included. The usage of the -comp MAT option is illustrated by an example:

sss2 -comp 4

% --- "Air (Dry, Near Sea Level)" [PNNL-15870, Rev. 1]

mat m4 -1.20500E-03

 6000  -1.24000E-04
 7000  -7.55268E-01
 8000  -2.31781E-01
18000  -1.28270E-02

 6012  -1.22564E-04
 6013  -1.43645E-06
 7014  -7.52324E-01
 7015  -2.94416E-03
 8016  -2.31153E-01
 8017  -9.35803E-05
 8018  -5.34540E-04
18036  -3.88624E-05
18038  -7.70386E-06
18040  -1.27804E-02

The output includes both elemental compositions for photon transport and isotopic compositions for neutron transport calculations. An optional second parameter is the library ID, which is printed after each nuclide ZA:

sss2 -comp 269 06c

% --- "Steel, Boron Stainless" [PNNL-15870, Rev. 1]

mat m269 -7.87000E+00

 5000.06c  -1.00000E-02
 6000.06c  -3.96000E-04
14000.06c  -4.95000E-03
15000.06c  -2.28000E-04
16000.06c  -1.49000E-04
24000.06c  -1.88100E-01
25000.06c  -9.90000E-03
26000.06c  -6.94713E-01
28000.06c  -9.15750E-02

 5010.06c  -1.84309E-03
 5011.06c  -8.15691E-03
 6012.06c  -3.91413E-04
 6013.06c  -4.58738E-06
14028.06c  -4.54739E-03
14029.06c  -2.39265E-04
14030.06c  -1.63344E-04
15031.06c  -2.28000E-04
16032.06c  -1.41126E-04
16033.06c  -1.14910E-06
16034.06c  -6.70834E-06
16036.06c  -1.67133E-08
24050.06c  -7.85070E-03
24052.06c  -1.57439E-01
24053.06c  -1.81960E-02
24054.06c  -4.61478E-03
25055.06c  -9.90000E-03
26054.06c  -3.92204E-02
26056.06c  -6.38452E-01
26057.06c  -1.50084E-02
26058.06c  -2.03234E-03
28058.06c  -6.15363E-02
28060.06c  -2.45201E-02
28061.06c  -1.08366E-03
28062.06c  -3.51174E-03
28064.06c  -9.23214E-04

Note that data may not be available for all nuclides in the Serpent cross section libraries. The complete list of pre-defined material compositions is available here.

Elemental decomposition

The -elem command line option can be used to decompose natural elements in material cards into individual isotopes. The parameters include the element symbol and the density or fraction (positive values for atomic and negative values for mass densities / fractions). An optional third parameter is the library ID, which is printed after each nuclide ZA. The usage is illustrated below with examples.

Decomposing natural zirconium with 97.5% mass fraction:

sss2 -elem Zr -0.975 06c

Isotopic composition for natural zirconium:

 40090.06c  -4.94385E-01
 40091.06c  -1.09014E-01
 40092.06c  -1.68461E-01
 40094.06c  -1.74438E-01
 40096.06c  -2.87019E-02

The isotopic fractions sum up to -0.975. Note that data may not be available for all nuclides in the Serpent cross section libraries. Similarly, decomposition of natural boron into atomic fractions:

sss2 -elem B 1.0 

Isotopic composition for natural boron:

  5010  1.99000E-01
  5011  8.01000E-01

Since no third parameter was given, the library ID is omitted from the output.

Material decomposition

The -mix command line option processes all mixtures defined in the input and decomposes them into standard material compositions. The compositions are written in material input card format in file [input].mix.

For example, coolant can be defined as a mixture of two materials:

% --- Water: 

mat water  -0.76973  moder lwtr 1001
 1001.03c   0.66667
 8016.03c   0.33333

therm lwtr lwe7.10t

% --- Natural boron:

mat boron   1.00000  tmp 550
 5010.03c   0.19900
 5011.03c   0.80100

% --- Coolant:

mix cool
water      -0.99950
boron      -500E-6

Serpent decomposes mixture cool into a conventional material before running the transport simulation. The -mix command line option prints out the decomposed material composition:

% Material cool is a mixture of 2 components:

% -----------------------------------------
% Material             v. frac      m. frac
% -----------------------------------------
% water            9.99979E-01  9.99500E-01
% boron            2.14484E-05  5.00000E-04
% -----------------------------------------

mat cool  7.72309E-02 moder lwtr 1001

  1001.03c  5.14732E-02
  8016.03c  2.57362E-02
  5010.03c  4.26822E-06
  5011.03c  1.71801E-05

Material at given position

The -matpos command line option returns material, cell and universe information for a given point or points:

  • If the OPTIONS parameter is given last, it is followed by a list of coordinate triplets.
  • If the OPTIONS parameter is given before the input file, it is immediately followed by a file name from which the list of coordinates is read (see the second example below).

For example:

sss2 bwr -matpos 0.0 0.0 0.0   2.5 0.0 0.0   4.5 5.5 0.0

(...)

Printing materials at given positions....

           x            y            z               universe                  cell             material
 0.00000E+00  0.00000E+00  0.00000E+00                      0                     1                moder  
 2.50000E+00  0.00000E+00  0.00000E+00                      9                nst9c1                 cool  
 4.50000E+00  5.50000E+00  0.00000E+00                      5                nst5c1                fuel5  

OK.
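
A hedged sketch of the file-based form, with the file name hypothetical and the file containing the same list of coordinate triplets:

sss2 -matpos points.txt bwr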

References

  1. ^ Conlin, J. L. and Romano, P. "A Compact ENDF (ACE) Format Specification" Report LA-UR-19-29016 (2019)
  2. ^ Trkov, A., Herman, M. and Brown, D. A. "ENDF-6 Formats Manual." CSEWG Document ENDF-102 / BNL-90365-2009 Rev. 2 (2018)
  3. ^ Tobochnik, J. and Chapin, P. M., "Monte Carlo Simulation of Hard Spheres Near Random Closest Packing Using Spherical Boundary Conditions", Journal of Chemical Physics, 88, 5824 (1988)