Models#
An essential part of a numerical simulation is defining the models and conditions it runs with. Several features of Nassu require this kind of configuration.
These configurations are gathered in the models field and define the behavior of the simulation in multiple ways.
Some of the features available in it are:
Numerical models
Boundary conditions
Initialization
Devices and precision to use
Note
For further details on the theoretical aspects of the models presented here, check the theory documentation.
Numerical models#
When running a simulation, one of the most important aspects is the set of numerical models used. Nassu combines the LBM with other models to achieve its capabilities, and each model has its own configuration.
simulations:
  - name: example
    models:
      # Multiblock options
      multiblock:
        # Number of overlap nodes in fine-to-coarse communication
        # Defaults to 1
        overlap_F2C: 2
        # Custom F2C overlaps for specific levels
        # Use as [lvl_from]: overlap to use
        # Defaults to {} (no custom overlaps)
        custom_overlap_F2C:
          1: 3
          2: 3
          5: 1
        # Filter size for the stress tensor average in fine-to-coarse communication
        # Defaults to 0 (no filter)
        stress_filter_F2C: 1
      # Large eddy simulation options
      LES:
        # Model to use (only Smagorinsky currently)
        model: Smagorinsky
        # Subgrid Smagorinsky constant
        sgs_cte: 0.17
      # LBM options
      LBM:
        # Global force to use
        F: { x: 0, y: 0, z: 0 }
        # Collision operator
        # Options are: RBGK, RRBGK, HRRBGK
        coll_oper: RRBGK
        coll_oper_params:
          # Constant sigma to use for HRRBGK, between 0.95 and 1
          # Defaults to 0.99
          sigma_hrrbgk: 0.99
          # Mode for HRRBGK
          # Options are: dynamic, constant
          # Defaults to constant
          mode_hrrbgk: dynamic
        # True to activate the thermal model and variable theta
        thermal_model: true
        # Relaxation time (tau) value
        tau: 0.5000008125
        # Velocity set
        # Options are: D2Q9 for 2D; D3Q15, D3Q19, D3Q27 for 3D
        vel_set: D3Q27
      # Immersed boundary method options
      IBM:
        # Dirac delta to use
        # Options are: 3_points, 4_points
        # Defaults to 3_points
        dirac_delta: 3_points
        # For all IBM interpolation or spreading kernels, the minimum Dirac sum
        # required for the operation to be performed
        # Use a high value to avoid running the IBM on domain borders or multiblock transitions
        # Use a low value to allow the IBM to run everywhere
        # Defaults to 0.99
        min_dirac_sum: 0.99
        # Number of time steps used for force accommodation
        # The applied force increases linearly from 0% to 100% over this interval
        # Defaults to 0
        forces_accomodate_time: 500
        # Limiter for force spreading in the IBM iterations; this is the limit
        # on the force change between two iterations
        # A reasonable limiting value is 1e-3 (force ~ rho*u^2, so u~0.03 yields this)
        # Defaults to 1000 (in practice, no limit)
        forces_spread_limit: 1e-3
        # Whether to reset forces between time steps or use the force from the
        # previous time step as reference
        # Defaults to true
        reset_forces: true
        # IBM configurations for bodies
        body_cfgs:
          default: {}
          # Configuration name to reference
          building_cfg:
            # Number of iterations
            # Defaults to 5
            n_iterations: 5
            # Force factor to use for force spreading
            # The force is multiplied by this factor before being applied
            # When `kinetic_energy_correction` is true, it's rescaled to the level, dividing by 2**lvl
            # Defaults to 1
            forces_factor: 1
            # Use kinetic energy correction instead of the traditional IBM velocity correction
            # Used for modeling porous media (to enforce a drag coefficient over a velocity)
            # It makes `forces_factor` be rescaled to the level
            # Cannot be used together with a wall model
            # Defaults to false
            kinetic_energy_correction: false
            # Wall model to use
            wall_model:
              # Name of the wall model
              # Available ones are: EqTBL, EqLog
              name: EqTBL
              # Reference distance for tangential velocity interpolation
              dist_ref: 2
              # Shell distance for spreading forces
              dist_shell: 0.25
              # Step at which to start applying the wall model
              # Before this step, conventional IBM is applied
              # Defaults to 1000
              start_step: 1000
              # Parameters required by EqTBL
              params:
                # Roughness length
                z0: 0.0001
                # Max error for TDMA
                TDMA_max_error: 5e-06
                # Max iterations for TDMA
                TDMA_max_iters: 50
                # Number of divisions in TDMA
                # Must be an odd number
                TDMA_n_div: 25
          terrain_cfg:
            n_iterations: 3
            forces_factor: 1
            wall_model:
              name: EqLog
              dist_ref: 2.5
              dist_shell: 0.5
              # Parameters required by EqLog
              params:
                z0: 0.0001
Note
Configuring these models requires an understanding of how the solver works under the hood, so be careful when deviating from the guideline values.
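As a starting point, the sketch below shows a compact models block that only sets the LBM fields and leaves multiblock, LES and IBM out entirely. It assumes that omitted sub-blocks simply fall back to their documented defaults; the simulation name and values are purely illustrative.
simulations:
  - name: minimal_example
    models:
      LBM:
        # Collision operator (one of the options listed above)
        coll_oper: RRBGK
        # No global body force
        F: { x: 0, y: 0, z: 0 }
        # Relaxation time (must be above 0.5)
        tau: 0.500005
        # 3D velocity set
        vel_set: D3Q19
        thermal_model: false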
Boundary conditions#
For CFD simulations, boundary conditions (BCs) deserve special attention: they can be the difference between a great result and a diverging simulation.
Here is how to configure the BCs for a simulation in Nassu:
simulations:
  - name: example
    models:
      BC:
        # Periodic dimensions in the domain (x, y, z)
        periodic_dims: [false, false, false]
        # Wall model configurations
        WM_cfg:
          # Max error for TDMA
          TDMA_max_error: 5e-06
          # Max iterations for TDMA
          TDMA_max_iters: 50
          # Number of divisions in TDMA
          # Must be an odd number
          TDMA_n_div: 25
        # Map of boundary conditions to use
        BC_map:
          # The boundary conditions are added from first to last,
          # so if two BCs with the same order conflict on the same nodes,
          # the last one listed is the one that stands
          - # Boundary condition name
            # Available ones are:
            #   HWBB
            #   RegularizedHWBB
            #   VelocityBounceBack
            #   UniformFlow
            #   Neumann
            #   RegularizedNeumannSlip
            #   RegularizedNeumannOutlet
            BC: RegularizedNeumannOutlet
            # Order in which the BC is applied
            # 0 is applied first, then 1, then 2 and so on
            order: 2
            # Position of the BC (x, y, z)
            # Options are (N is the domain limit):
            #   E: at (N, _, _)
            #   W: at (0, _, _)
            #   N: at (_, N, _)
            #   S: at (_, 0, _)
            #   F: at (_, _, N)
            #   B: at (_, _, 0)
            # It's possible to combine these positions, as in NW: at (0, N, _)
            pos: E
            # Wall normal to consider in the BC
            # It points to the outside of the domain, so for a wall at the top (_, N, _)
            # you would use N
            # It's also possible to combine these directions, as with the position
            # Some BCs, such as UniformFlow, don't require a wall normal
            wall_normal: E
            # Kwargs for the boundary condition; they depend on the BC
            rho: 1.0
          # Other BCs to apply
          - BC: RegularizedNeumannSlip
            order: 1
            pos: F
            wall_normal: F
          - BC: RegularizedHWBB
            order: 1
            pos: B
            wall_normal: B
          - BC: RegularizedNeumannSlip
            order: 0
            pos: N
            wall_normal: N
          - BC: RegularizedNeumannSlip
            order: 0
            pos: S
            wall_normal: S
          - BC: Neumann
            order: 0
            # It's also possible to combine positions and normals,
            # restricting the BC to the edge shared by both faces
            pos: NF
            wall_normal: N
          - BC: Neumann
            order: 0
            pos: SF
            wall_normal: S
For details on what each boundary condition applies and how, check its specific documentation.
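As a compact illustration, here is a sketch of a channel-like setup that is periodic in x and y, with a no-slip ground and a slip top boundary; the simulation name is a placeholder, and any BC-specific kwargs (for example the velocity of a UniformFlow inlet) should be taken from the documentation of each BC.
simulations:
  - name: channel_example
    models:
      BC:
        # Periodic in x and y, walls only in z
        periodic_dims: [true, true, false]
        BC_map:
          # No-slip ground at (_, _, 0), normal pointing out of the domain
          - BC: RegularizedHWBB
            order: 0
            pos: B
            wall_normal: B
          # Slip top boundary at (_, _, N)
          - BC: RegularizedNeumannSlip
            order: 0
            pos: F
            wall_normal: F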
SEM#
One special and important BC is the SEM (synthetic eddy method), which applies a velocity profile with synthetic eddies to produce turbulence at the inlet. It enforces this condition on all nodes at the start of the domain, that is, at x=0. The profile height must be given in the z direction and its "width" in the y direction.
simulations:
  - name: example
    models:
      # Synthetic eddy method configuration
      SEM:
        # Specifications for SEM eddy generation
        eddies:
          # Lengthscale to use; different values may be specified for each dimension
          lengthscale: {x: 14, y: 14, z: 14}
          # Volume density of eddies (n_eddies = eddies_vol_density*(SEM_volume/eddy_volume))
          eddies_vol_density: 15
          # Seed for the random numbers used to generate the eddies
          seed_rand: 0
          # All eddies are generated within these limits in the yz plane (x extent is 2*lengthscale.x)
          domain_limits_yz:
            start: [16, 0]
            end: [48, 96]
        profile:
          # Profile with velocity and R (Reynolds stress) for SEM
          csv_profile_data: "fixture/SEM/example/real_profile.csv"
          # Offset summed to the profile z values
          z_offset: 0
          # K multiplication factor
          K: 1
          # When the profile height varies along the y direction, it can be specified in this csv
          csv_y_height: "fixture/SEM/example/y_heights.csv"
          # Multiplier for profile lengths
          # Scales z (z*length_mul) for csv_profile_data and [y, z_height] for csv_y_height
          length_mul: 1
          # Multiplier for velocity values
          # Scales velocities (u*vel_mul) and the Reynolds stress tensor (R*vel_mul^2)
          # for csv_profile_data
          vel_mul: 1
Note
For more information on SEM, check its specific documentation.
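To make the two multipliers concrete, here is a hedged sketch of the profile part only. It assumes the CSV lists z together with the velocity and Reynolds stress values, and that R scales with the square of vel_mul as a velocity-squared quantity; the values are purely illustrative.
simulations:
  - name: example
    models:
      SEM:
        profile:
          csv_profile_data: "fixture/SEM/example/real_profile.csv"
          # A profile row at z=10 is applied at z=20 (z*length_mul)
          length_mul: 2
          # A velocity ux=8 becomes ux=4 (ux*vel_mul), and R entries
          # are multiplied by 0.25 (vel_mul^2)
          vel_mul: 0.5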
Initialization#
To initialize the simulation, it's possible to start with a constant velocity and density field, load a custom field, or start from a checkpoint.
simulations:
  - name: example
    models:
      initialization:
        # .vtm file to load fields from; it uses linear interpolation for refinement
        # The multiblock must have rho and u fields and at least the domain size
        # The S field is generated using finite differences of the u field
        # Defaults to None (no load from file, use constant field)
        vtm_filename: ./fixture/vtms/macrs72x48x24.vtm
        # For constant initialization
        # Rho initial value
        rho: 1
        # Velocity initial value
        u: { x: 0, y: 0, z: 0 }
      checkpoint:
        load:
          # Start the simulation from a checkpoint. Defaults to false
          checkpoint_start: true
          # Reset the simulation time step, starting at 0 instead of the checkpoint time step
          # Defaults to false
          reset_time_step: false
          # Path to the folder to load the checkpoint from. If not specified, tries to load
          # the last checkpoint saved in the simulation output folder
          # Defaults to null
          folderpath: null
Important
For checkpoint details, check the data section.
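To restart a finished or interrupted run, a minimal sketch looks like the following; the folder path is a placeholder, and it assumes a checkpoint was previously written by the same simulation.
simulations:
  - name: example
    models:
      checkpoint:
        load:
          checkpoint_start: true
          # Keep counting from the checkpoint time step
          reset_time_step: false
          # Placeholder path; leave as null to load the last checkpoint
          # from the simulation output folder
          folderpath: ./output/example/checkpoint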
Devices and precision to use#
Nassu is configured to run on GPU devices using the NVIDIA CUDA library. Some options are currently available for setting up the devices.
It's also required to specify the floating-point precision to use for the simulation.
simulations:
  - name: example
    models:
      # Engine options
      engine:
        # Device numbers to use
        # Defaults to none; devices are chosen automatically in ascending order
        devices_numbers: [0]
        # Number of devices to use
        # Defaults to one (no support for multi-device execution currently)
        n_devices: 1
        # Engine to use
        # Options are: CUDA (OpenCL was also supported previously)
        name: CUDA
      # Numerical precision options
      precision:
        # The default precision, must be specified
        # Options are: single | double
        default: single
        # All options below are: single | double | default
        # Defaults to: default
        # Precision used to perform calculations
        calculations: default
        # Precision used to store the macroscopics fields
        macroscopics: default
        # Precision used for the populations in shared memory
        populations: default
Note
The precision used for the populations impacts the block size that can be used, because they are allocated in GPU shared memory, which is very limited.
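For example, one way to relieve shared-memory pressure while keeping accuracy elsewhere is to run with double precision as the default but store the populations in single precision; this is only a sketch of that trade-off, and whether it is acceptable depends on the case.
simulations:
  - name: example
    models:
      precision:
        default: double
        calculations: default
        macroscopics: default
        # Single-precision populations take half the shared memory of double,
        # allowing larger block sizes
        populations: single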
Important
Currently there is no support for multi-device execution.