
The aiida-lsmo plugin for AiiDA¶
Getting started¶
This plugin is a collection of work chains and calculation functions that combine the use of multiple codes (e.g., CP2K, DDEC, Raspa, Zeo++, …) to achieve advanced automated tasks.
Installation¶
Use the following commands to install the plugin:
git clone https://github.com/lsmo-epfl/aiida-lsmo
cd aiida-lsmo
pip install -e .
Note
This will also install the related plugins (e.g., aiida-cp2k, aiida-raspa, …) if they are not already present, but the codes themselves (e.g., CP2K, RASPA, …) need to be set up before using these work chains.
Usage¶
For each work chain, at least one example is provided in the examples directory. These examples are usually quick and you can run them on your localhost in a couple of minutes.
A quick demo on how to submit a work chain:
verdi daemon start # make sure the daemon is running
cd examples
verdi run run_IsothermWorkChain_HKUST-1.py raspa@localhost zeopp@localhost
Note that in the running script, the work chain is imported using the WorkflowFactory:
from aiida.plugins import WorkflowFactory
IsothermWorkChain = WorkflowFactory('lsmo.isotherm')
while a calculation function is imported with the CalculationFactory:
from aiida.plugins import CalculationFactory
FFBuilder = CalculationFactory('lsmo.ff_builder')
After you run the work chain you can inspect the log, for example:
$ verdi process report 266248
2019-11-22 16:54:52 [90962 | REPORT]: [266248|Cp2kMultistageWorkChain|setup_multistage]: Unit cell was NOT resized
2019-11-22 16:54:52 [90963 | REPORT]: [266248|Cp2kMultistageWorkChain|run_stage]: submitted Cp2kBaseWorkChain for stage_0/settings_0
2019-11-22 16:54:52 [90964 | REPORT]: [266252|Cp2kBaseWorkChain|run_calculation]: launching Cp2kCalculation<266253> iteration #1
2019-11-22 16:55:13 [90965 | REPORT]: [266252|Cp2kBaseWorkChain|inspect_calculation]: Cp2kCalculation<266253> completed successfully
2019-11-22 16:55:13 [90966 | REPORT]: [266252|Cp2kBaseWorkChain|results]: work chain completed after 1 iterations
2019-11-22 16:55:14 [90967 | REPORT]: [266252|Cp2kBaseWorkChain|on_terminated]: remote folders will not be cleaned
2019-11-22 16:55:14 [90968 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_settings_stage0]: Bandgaps spin1/spin2: -0.058 and -0.058 ev
2019-11-22 16:55:14 [90969 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_settings_stage0]: BAD SETTINGS: band gap is < 0.100 eV
2019-11-22 16:55:14 [90970 | REPORT]: [266248|Cp2kMultistageWorkChain|run_stage]: submitted Cp2kBaseWorkChain for stage_0/settings_1
2019-11-22 16:55:15 [90971 | REPORT]: [266259|Cp2kBaseWorkChain|run_calculation]: launching Cp2kCalculation<266260> iteration #1
2019-11-22 16:55:34 [90972 | REPORT]: [266259|Cp2kBaseWorkChain|inspect_calculation]: Cp2kCalculation<266260> completed successfully
2019-11-22 16:55:34 [90973 | REPORT]: [266259|Cp2kBaseWorkChain|results]: work chain completed after 1 iterations
2019-11-22 16:55:34 [90974 | REPORT]: [266259|Cp2kBaseWorkChain|on_terminated]: remote folders will not be cleaned
2019-11-22 16:55:35 [90975 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_settings_stage0]: Bandgaps spin1/spin2: 0.000 and 0.000 ev
2019-11-22 16:55:35 [90976 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_stage]: Structure updated for next stage
2019-11-22 16:55:35 [90977 | REPORT]: [266248|Cp2kMultistageWorkChain|run_stage]: submitted Cp2kBaseWorkChain for stage_1/settings_1
2019-11-22 16:55:35 [90978 | REPORT]: [266266|Cp2kBaseWorkChain|run_calculation]: launching Cp2kCalculation<266267> iteration #1
2019-11-22 16:55:53 [90979 | REPORT]: [266266|Cp2kBaseWorkChain|inspect_calculation]: Cp2kCalculation<266267> completed successfully
2019-11-22 16:55:53 [90980 | REPORT]: [266266|Cp2kBaseWorkChain|results]: work chain completed after 1 iterations
2019-11-22 16:55:54 [90981 | REPORT]: [266266|Cp2kBaseWorkChain|on_terminated]: remote folders will not be cleaned
2019-11-22 16:55:54 [90982 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_stage]: Structure updated for next stage
2019-11-22 16:55:54 [90983 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_stage]: All stages computed, finishing...
2019-11-22 16:55:55 [90984 | REPORT]: [266248|Cp2kMultistageWorkChain|results]: Outputs: Dict<266273> and StructureData<266271>
You can also inspect the inputs/outputs at a glance with verdi node show, for example:
$ verdi node show 266248
Property Value
----------- ------------------------------------
type Cp2kMultistageWorkChain
state Finished [0]
pk 266248
uuid f707d727-f7c2-4232-a90c-d9e2711e5fe6
label
description
ctime 2019-11-22 16:54:51.692140+00:00
mtime 2019-11-22 16:55:55.239555+00:00
computer [21] localhost
Inputs PK Type
--------------------- ------ -------------
cp2k_base
clean_workdir 266246 Bool
max_iterations 266245 Int
cp2k
code 265588 Code
parameters 266244 Dict
min_cell_size 266247 Float
protocol_modify 266243 Dict
protocol_tag 266241 Str
starting_settings_idx 266242 Int
structure 266240 StructureData
Outputs PK Type
--------------------- ------ -------------
last_input_parameters 266265 Dict
output_parameters 266273 Dict
output_structure 266271 StructureData
remote_folder 266268 RemoteData
Called PK Type
---------------------- ------ ----------------
CALL 266272 CalcFunctionNode
run_stage_1_settings_1 266266 WorkChainNode
run_stage_0_settings_1 266259 WorkChainNode
run_stage_0_settings_0 266252 WorkChainNode
CALL 266249 CalcFunctionNode
Log messages
----------------------------------------------
There are 11 log messages for this calculation
Run 'verdi process report 266248' to see them
Another good idea is to print the graph of your workflow with verdi node graph generate, to inspect all its internal steps:

LSMO calc functions and work chains¶
In the following section all the calc functions and work chains of the aiida-lsmo plugin are listed and documented.
Force Field Builder¶
The ff_builder() calculation function combines the force field parameters (typically for a Lennard-Jones potential) of a framework and of the molecule(s), giving as output the .def files required by Raspa. To see the list of available parameterizations for the frameworks and the available molecules, take a look at the file ff_data.yaml.
What it can do:
Switch settings that are written in Raspa's .def files, such as tail corrections, truncation/shifting and mixing rules.
Separate the interactions, so that framework-molecule and molecule-molecule interactions are parametrized differently (e.g., TraPPE for molecule-molecule interactions and UFF for framework-molecule interactions, instead of mixed UFF/TraPPE).
What it currently can not do:
Deal with flexible molecules.
Take parameters from other files (e.g., YAML).
Generate .def files for a molecule, given just the geometry: it has to be included in the ff_data.yaml file.
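The two mixing rules offered (Lorentz-Berthelot: arithmetic mean of the sigmas, geometric mean of the epsilons; Jorgensen: geometric mean of both) reduce to a few lines of arithmetic. A minimal sketch; the function name is illustrative and not part of the plugin API, and the example parameter values are approximate textbook values:

```python
import math

def mix_lj(eps_i, sig_i, eps_j, sig_j, rule="Lorentz-Berthelot"):
    """Combine pure-species Lennard-Jones parameters (eps, sigma) into
    cross terms. Illustrative helper, not part of the aiida-lsmo API."""
    if rule == "Lorentz-Berthelot":
        # geometric mean for epsilon, arithmetic mean for sigma
        return math.sqrt(eps_i * eps_j), 0.5 * (sig_i + sig_j)
    if rule == "Jorgensen":
        # geometric mean for both parameters
        return math.sqrt(eps_i * eps_j), math.sqrt(sig_i * sig_j)
    raise ValueError(f"Unknown mixing rule: {rule}")

# e.g. UFF carbon (eps/kB ~ 52.8 K, sigma ~ 3.43 A) with TraPPE CO2 oxygen
eps, sig = mix_lj(52.8, 3.43, 79.0, 3.05)
```

With separate_interactions enabled, cross terms like this are not mixed at all for framework-molecule pairs: the framework's own force field is used instead.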
Inputs details
Parameters Dict:
PARAMS_EXAMPLE = Dict(dict={
    'ff_framework': 'UFF',              # See force fields available in ff_data.yaml as framework.keys()
    'ff_molecules': {                   # See molecules available in ff_data.yaml as ff_data.keys()
        'CO2': 'TraPPE',                # See force fields available in ff_data.yaml as {molecule}.keys()
        'N2': 'TraPPE'
    },
    'shifted': True,                    # If True shift dispersion interactions, if False simply truncate them
    'tail_corrections': False,          # If True apply tail corrections based on homogeneous-liquid assumption
    'mixing_rule': 'Lorentz-Berthelot', # Options: 'Lorentz-Berthelot' or 'Jorgensen'
    'separate_interactions': True       # If True use framework's force field for framework-molecule interactions
})
Outputs details
Dictionary containing the .def files as SinglefileData. This output dictionary is ready to be used as the files input of a RaspaCalculation: you can find an example of the usage of this calculation function in the IsothermWorkChain, or a minimal test usage in the examples.
Selectivity calculators¶
The calc_selectivity() calculation function computes the selectivity of two gases in a material, as the ratio between their Henry coefficients. In the future, this module will also host other metrics to assess selectivity for specific applications.
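In code, this metric is just a ratio of the two Henry coefficients reported by the Isotherm work chain. A minimal sketch; the function name is illustrative, not the plugin's API:

```python
def henry_selectivity(kh_a, kh_b):
    """Ideal selectivity of component A over component B as the ratio of
    their Henry coefficients (same units, e.g. mol/kg/Pa).
    Illustrative helper, not part of the aiida-lsmo API."""
    if kh_b <= 0:
        raise ValueError("Henry coefficient of component B must be positive")
    return kh_a / kh_b

# e.g. kH(CO2) = 6.7e-6 and kH(N2) = 6.7e-7 mol/kg/Pa give a selectivity of 10
s = henry_selectivity(6.7e-6, 6.7e-7)
```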
Working Capacity calculators¶
The module calcfunctions/working_cap.py contains a collection of calculation functions to compute working capacities for different compounds (e.g., CH4, H2) at industrially relevant reference conditions. The working capacity is the usable amount of a stored adsorbed compound between the loading and discharging temperature and pressure. These are post-processing calculations based on the output_parameters of the Isotherm or IsothermMultiTemp work chains, which need to be run at specific conditions: see the header of each calc function to know them. Their inner workings are very simple, but they are collected in this repository to be used as a reference in our group. If you are investigating a different gas storage application, consider including a similar script here.
An example is calc_ch4_working_cap()
for methane storage.
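Conceptually, such a calc function boils down to subtracting the loading at the discharge conditions from the loading at the charge conditions. A sketch, assuming the commonly used methane-storage benchmark of 5.8 to 65 bar at 298 K (check the header of the actual calc function for the conditions it expects; the function name and example values here are illustrative):

```python
def working_capacity(loading_at_charge, loading_at_discharge):
    """Usable (deliverable) capacity between the charge and discharge
    conditions, in the same units as the inputs (e.g. mol/kg).
    Illustrative helper, not part of the aiida-lsmo API."""
    return loading_at_charge - loading_at_discharge

# e.g. CH4 loadings of 17.3 mol/kg at 65 bar and 4.1 mol/kg at 5.8 bar, 298 K
wc = working_capacity(17.3, 4.1)
```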
Isotherm work chain¶
The IsothermWorkChain() work chain computes a single-component isotherm in a framework from a few settings.
What it does, in order:
Run a geometry calculation (Zeo++) to assess the accessible probe-occupiable pore volume and the needed blocking spheres.
Stop if the structure is non-porous, i.e., not permeable to the molecule.
Get the parameters of the force field using the FFBuilder.
Get the number of unit cell replicas needed to have correct periodic boundary conditions at the given cutoff.
Compute the adsorption at zero loading (e.g., the Henry coefficient, kH) from a Widom insertion calculation using Raspa.
Stop if the kH is not above a certain user-defined threshold: this can be used for screening purposes, or to intentionally compute only the kH with this work chain.
Given a min/max range, propose a list of pressures that samples the isotherm uniformly. Alternatively, the user can specify an explicit list of pressures and skip this automatic selection.
Compute the isotherm using Grand Canonical Monte Carlo (GCMC) sampling in series, and restarting each system from the previous one for a short and efficient equilibration.
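The unit-cell replication step above follows from the minimum-image convention: each perpendicular width of the simulation box must exceed twice the interaction cutoff. For an orthorhombic cell this reduces to a one-line formula (a sketch with an illustrative function name; the work chain handles the general triclinic case via the perpendicular widths):

```python
import math

def ucell_replicas(cell_lengths, cutoff=12.0):
    """Number of replicas along each cell vector so that every box edge
    exceeds twice the cutoff (orthorhombic sketch, lengths and cutoff in
    Angstrom). Illustrative helper, not part of the aiida-lsmo API."""
    return [max(1, math.ceil(2.0 * cutoff / length)) for length in cell_lengths]

# e.g. a 26 x 13 x 7 A cell with a 12 A cutoff needs a 1 x 2 x 4 supercell
reps = ucell_replicas([26.0, 13.0, 7.0])
```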
What it can not do:
Compute isotherms at different temperatures (see IsothermMultiTemp work chain for this).
Compute multi-component isotherms, as this would greatly complicate the input, output and logic, and it is not trivial to assign the bulk-gas mixture composition at different pressures when studying a real case.
Tune Monte Carlo probabilities and other advanced settings in Raspa: this is not currently possible.
Sample the isotherm uniformly in the case of “type II” isotherms, i.e., isotherms with significant cooperative insertion, as for water.
Run the different pressures in parallel: this would be less efficient, because each GCMC calculation could not restart from the previous configuration, and not necessarily much faster, since equilibrating the highest-pressure calculation is the bottleneck anyway.
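The idea behind the automatic pressure selection can be pictured with a rough Langmuir estimate built from the Henry coefficient and the estimated saturation loading: pressure points are spaced so that successive loading increments are roughly equal, capped by a maximum pressure step. This is only an illustration of the idea, not the work chain's actual algorithm; all names and units (kH in mol/kg/bar here) are assumptions of this sketch:

```python
def propose_pressures(kh, qsat, pmin=0.001, pmax=10.0, precision=0.1, maxstep=5.0):
    """Propose pressure points (bar) sampling an estimated Langmuir isotherm
    roughly uniformly in loading. Illustrative sketch only.

    kh: Henry coefficient (mol/kg/bar); qsat: saturation loading (mol/kg)."""
    b = kh / qsat                                  # Langmuir affinity, from kH = qsat * b
    points = [pmin]
    while points[-1] < pmax:
        p = points[-1]
        dqdp = qsat * b / (1.0 + b * p) ** 2       # slope of the Langmuir estimate at p
        step = min(precision * qsat / dqdp, maxstep)  # aim for ~precision*qsat per step
        points.append(min(p + step, pmax))
    return points
```

Steep isotherms (large kH) thus get closely spaced low-pressure points, while the flat high-pressure tail is sampled coarsely, up to the maxstep cap.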
- workchain aiida_lsmo.workchains.IsothermWorkChain
Work chain that computes volpo and blocking spheres: if the accessible volpo > 0, it also runs a Raspa Widom calculation for the Henry coefficient.
Inputs:
- geometric, Dict, optional – [Only used by IsothermMultiTempWorkChain] Already computed geometric properties
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- molecule, (Str, Dict), required – Adsorbate molecule: settings to be read from the YAML. Advanced: input a Dict for non-standard settings.
- parameters, Dict, required – Parameters for the Isotherm workchain (see workchain.schema for default values).
- raspa_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- raspa, Namespace
Namespace Ports
- block_pocket, Namespace – Zeo++ block pocket file
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input file(s)
- framework, Namespace – Input framework(s)
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use in for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use in for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing, should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parent_folder, RemoteData, optional – Remote folder used to continue the same simulation, starting from the binary restarts.
- retrieved_parent_folder, FolderData, optional – To use an old calculation as a starting point for a new one.
- settings, Dict, optional – Additional input parameters
- structure, CifData, required – Adsorbent framework CIF.
- zeopp, Namespace
Namespace Ports
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use in for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use in for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, optional, non_db
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing, should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
Outputs:
- block, SinglefileData, optional – Blocked pockets output file.
- output_parameters, Dict, required – Results of the single-temperature work chain: keys can vary depending on the is_porous and is_kh_enough booleans.
Outline:
setup (Initialize the parameters)
run_zeopp (Perform Zeo++ block and VOLPO calculations.)
if (should_run_widom):
    run_raspa_widom (Run a Widom calculation in Raspa.)
if (should_run_gcmc):
    init_raspa_gcmc (Choose the pressures we want to sample, report some details, and update settings for GCMC.)
    while (should_run_another_gcmc):
        run_raspa_gcmc (Run a GCMC calculation in Raspa at T, P.)
return_output_parameters (Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.)
Inputs details
structure (CifData) is the framework with partial charges (provided as the _atom_site_charge column in the CIF file).
molecule can be provided as either a Str or a Dict. It contains information about the molecule's force field and the approximated spherical-probe radius for the geometry calculation. If provided as a string (e.g., co2, n2), the work chain looks up the corresponding dictionary in isotherm_data/isotherm_molecules.yaml. The input dictionary reads, for example:

co2:
  name: CO2          # Raspa's MoleculeName
  forcefield: TraPPE # Raspa's MoleculeDefinition
  molsatdens: 21.2   # Density of the liquid phase of the molecule in (mol/l). Typically I run a simulation at 300K/200bar
  proberad: 1.525    # radius used for computing VOLPO and Block (Angs). Typically FF's sigma/2
  singlebead: False  # if true: RotationProbability=0
  charged: True      # if true: ChargeMethod=Ewald
parameters (Dict) modifies the default parameters:

parameters = {
    "ff_framework": "UFF",                 # (str) Forcefield of the structure.
    "ff_separate_interactions": False,     # (bool) Use "separate_interactions" in the FF builder.
    "ff_mixing_rule": "Lorentz-Berthelot", # (str) Choose 'Lorentz-Berthelot' or 'Jorgensen'.
    "ff_tail_corrections": True,           # (bool) Apply tail corrections.
    "ff_shifted": False,                   # (bool) Shift or truncate the potential at cutoff.
    "ff_cutoff": 12.0,                     # (float) CutOff truncation for the VdW interactions (Angstrom).
    "temperature": 300,                    # (float) Temperature of the simulation.
    "temperature_list": None,              # (list) To be used by IsothermMultiTempWorkChain.
    "zeopp_probe_scaling": 1.0,            # (float) Scaling of the probe's diameter: molecular_rad * scaling.
    "zeopp_volpo_samples": int(1e5),       # (int) Number of samples for VOLPO calculation (per UC volume).
    "zeopp_block_samples": int(100),       # (int) Number of samples for BLOCK calculation (per A^3).
    "raspa_minKh": 1e-10,                  # (float) If Henry coefficient < raspa_minKh do not run the isotherm (mol/kg/Pa).
    "raspa_verbosity": 10,                 # (int) Print stats every: number of cycles / raspa_verbosity.
    "raspa_widom_cycles": int(1e5),        # (int) Number of Widom cycles.
    "raspa_gcmc_init_cycles": int(1e3),    # (int) Number of GCMC initialization cycles.
    "raspa_gcmc_prod_cycles": int(1e4),    # (int) Number of GCMC production cycles.
    "pressure_precision": 0.1,             # (float) Precision in the sampling of the isotherm: 0.1 ok, 0.05 for high resolution.
    "pressure_maxstep": 5,                 # (float) Max distance between pressure points (bar).
    "pressure_min": 0.001,                 # (float) Lower pressure to sample (bar).
    "pressure_max": 10                     # (float) Upper pressure to sample (bar).
}
To skip the automatic pressure selection of the work chain, provide a list of pressure points (bar) using the pressure_list keyword (the other pressure inputs are then ignored).
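For example, a minimal parameters fragment selecting explicit pressure points (the values are illustrative):

```python
parameters = {
    "temperature": 298,                     # (float) Temperature of the simulation (K).
    "pressure_list": [0.1, 1.0, 5.0, 10.0], # (list) Pressure points to compute (bar).
}
```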
Note that you can scale the probe radius to empirically account for some framework flexibility and avoid overblocking. Setting zeopp_probe_scaling to zero (or a small value) essentially skips the permeability check and the calculation of blocking spheres.
geometric is not meant to be used directly by the user, but by the IsothermMultiTemp work chain.
Outputs details
output_parameters (Dict): its content depends on whether is_porous is True (if not, only the geometric outputs are reported in the dictionary) and on whether is_kh_enough is True (if False, only the output of the Widom calculation is reported; otherwise the isotherm data are included as well). This is an example of a full isotherm with is_porous=True and is_kh_enough=True, for 6 pressure points at 298 K:

{
    "Density": 0.385817,
    "Density_unit": "g/cm^3",
    "Estimated_saturation_loading": 51.586704,
    "Estimated_saturation_loading_unit": "mol/kg",
    "Input_block": [1.865, 100],
    "Input_ha": "DEF",
    "Input_structure_filename": "19366N2.cif",
    "Input_volpo": [1.865, 1.865, 100000],
    "Number_of_blocking_spheres": 0,
    "POAV_A^3": 8626.94,
    "POAV_A^3_unit": "A^3",
    "POAV_Volume_fraction": 0.73173,
    "POAV_Volume_fraction_unit": null,
    "POAV_cm^3/g": 1.89657,
    "POAV_cm^3/g_unit": "cm^3/g",
    "PONAV_A^3": 0.0,
    "PONAV_A^3_unit": "A^3",
    "PONAV_Volume_fraction": 0.0,
    "PONAV_Volume_fraction_unit": null,
    "PONAV_cm^3/g": 0.0,
    "PONAV_cm^3/g_unit": "cm^3/g",
    "Unitcell_volume": 11789.8,
    "Unitcell_volume_unit": "A^3",
    "adsorption_energy_widom_average": -9.7886451805,
    "adsorption_energy_widom_dev": 0.0204010566,
    "adsorption_energy_widom_unit": "kJ/mol",
    "conversion_factor_molec_uc_to_cm3stp_cm3": 3.1569089445,
    "conversion_factor_molec_uc_to_gr_gr": 5.8556741651,
    "conversion_factor_molec_uc_to_mol_kg": 0.3650669679,
    "henry_coefficient_average": 6.72787e-06,
    "henry_coefficient_dev": 3.94078e-08,
    "henry_coefficient_unit": "mol/kg/Pa",
    "is_kh_enough": true,
    "is_porous": true,
    "isotherm": {
        "enthalpy_of_adsorption_average": [-12.309803364014, ..., -9.6064899852835],
        "enthalpy_of_adsorption_dev": [0.34443269062882, ..., 0.2598580313121],
        "enthalpy_of_adsorption_unit": "kJ/mol",
        "loading_absolute_average": [0.65880897694654, ..., 17.302504097082],
        "loading_absolute_dev": [0.041847687204507, ..., 0.14638828764266],
        "loading_absolute_unit": "mol/kg",
        "pressure": [1.0, ..., 65],
        "pressure_unit": "bar"
    },
    "temperature": 298,
    "temperature_unit": "K"
}
block (SinglefileData): this file is output only if blocking spheres are found and used for the isotherm; it is therefore ready to be used for a new, consistent Raspa calculation.
IsothermMultiTemp work chain¶
The IsothermMultiTempWorkChain() work chain runs the Isotherm work chain at different temperatures in parallel. Since the initial geometry calculation, which obtains the pore volume and blocking spheres, does not depend on temperature, it is run only once. Inputs and outputs are very similar to those of the Isotherm work chain.
What it can do:
Compute the kH at every temperature and, for each temperature, guess the pressure points needed for a uniform sampling of the isotherm.
What it can not do:
Select specific pressure points (as pressure_list) that differ between temperatures.
Run an isobar curve (same pressure, different temperatures), restarting each GCMC calculation from the previous system.
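Temperatures are requested through the temperature_list key of the Isotherm parameters Dict. A minimal fragment with illustrative values:

```python
parameters = {
    "temperature": 298,                   # (float) Used when temperature_list is None.
    "temperature_list": [273, 298, 323],  # (list) Temperatures (K) to run in parallel.
}
```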
- workchain aiida_lsmo.workchains.IsothermMultiTempWorkChain
Run IsothermWorkChain for multiple temperatures: first compute the geometric properties, then submit Widom+GCMC at different temperatures in parallel.
Inputs:
- geometric, Dict, optional – [Only used by IsothermMultiTempWorkChain] Already computed geometric properties
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- molecule, (Str, Dict), required – Adsorbate molecule: settings to be read from the YAML. Advanced: input a Dict for non-standard settings.
- parameters, Dict, required – Parameters for the Isotherm workchain (see workchain.schema for default values).
- raspa_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- raspa, Namespace
Namespace Ports
- block_pocket, Namespace – Zeo++ block pocket file
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input file(s)
- framework, Namespace – Input framework(s)
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use in for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parent_folder, RemoteData, optional – Remote folder used to continue the same simulation, starting from the binary restarts.
- retrieved_parent_folder, FolderData, optional – To use an old calculation as a starting point for a new one.
- settings, Dict, optional – Additional input parameters
- structure, CifData, required – Adsorbent framework CIF.
- zeopp, Namespace
Namespace Ports
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, optional, non_db
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
Outputs:
- block, SinglefileData, optional – Blocked pockets output file.
- output_parameters, Dict, required – Results of isotherms run at different temperatures.
Outline:
run_geometric(Perform Zeo++ block and VOLPO calculation with IsothermWC.)
if(should_continue)
run_isotherms(Compute isotherms at different temperatures.)
collect_isotherms(Collect all the results in one Dict)
Inputs details
parameters (Dict), compared to the input of the Isotherm work chain, contains the key temperature_list and omits the key temperature:
"temperature_list": [278, 298.15, 318.0],
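In a run script, such a parameters dictionary can be assembled as a plain dict before being wrapped in an AiiDA Dict node. This is only a sketch; any keys beyond temperature_list would follow the Isotherm work chain inputs:

```python
# Plain-Python sketch; in an actual run script this dictionary would be
# wrapped as aiida.orm.Dict(dict=parameters) and passed to the work chain.
parameters = {
    # One isotherm is computed per temperature entry (in K).
    "temperature_list": [278, 298.15, 318.0],
}

# The multi-temperature work chain reads `temperature_list` in place of the
# single `temperature` key used by the Isotherm work chain.
assert "temperature" not in parameters
```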
Outputs details
output_parameters (Dict) contains temperature and isotherm as lists. In this example, isotherms with three pressure points each are computed at three temperatures, 77 K, 198 K and 298 K:
{ "Density": 0.731022, "Density_unit": "g/cm^3", "Estimated_saturation_loading": 22.1095656, "Estimated_saturation_loading_unit": "mol/kg", "Input_block": [ 1.48, 100 ], "Input_ha": "DEF", "Input_structure_filename": "tmpQD_OdI.cif", "Input_volpo": [ 1.48, 1.48, 100000 ], "Number_of_blocking_spheres": 0, "POAV_A^3": 1579.69, "POAV_A^3_unit": "A^3", "POAV_Volume_fraction": 0.45657, "POAV_Volume_fraction_unit": null, "POAV_cm^3/g": 0.624564, "POAV_cm^3/g_unit": "cm^3/g", "PONAV_A^3": 0.0, "PONAV_A^3_unit": "A^3", "PONAV_Volume_fraction": 0.0, "PONAV_Volume_fraction_unit": null, "PONAV_cm^3/g": 0.0, "PONAV_cm^3/g_unit": "cm^3/g", "Unitcell_volume": 3459.91, "Unitcell_volume_unit": "A^3", "adsorption_energy_widom_average": [ -6.501026119, -3.7417828535, -2.9538187687 ], "adsorption_energy_widom_dev": [ 0.0131402719, 0.0109470973, 0.009493264 ], "adsorption_energy_widom_unit": "kJ/mol", "conversion_factor_molec_uc_to_cm3stp_cm3": 10.757306634, "conversion_factor_molec_uc_to_gr_gr": 1.3130795208, "conversion_factor_molec_uc_to_mol_kg": 0.6565397604, "henry_coefficient_average": [ 0.000590302, 1.36478e-06, 4.59353e-07 ], "henry_coefficient_dev": [ 6.20272e-06, 2.92729e-09, 1.3813e-09 ], "henry_coefficient_unit": "mol/kg/Pa", "is_kh_enough": [ true, true, true ], "is_porous": true, "isotherm": [ { "enthalpy_of_adsorption_average": [ -4.8763191239929, -4.071414615084, -3.8884980003825 ], "enthalpy_of_adsorption_dev": [ 0.27048724983995, 0.17838206413742, 0.30520201541493 ], "enthalpy_of_adsorption_unit": "kJ/mol", "loading_absolute_average": [ 8.8763231830174, 13.809017193987, 24.592736102413 ], "loading_absolute_dev": [ 0.10377880404968, 0.057485479697981, 0.1444399097573 ], "loading_absolute_unit": "mol/kg", "pressure": [ 1.0, 5.0, 100 ], "pressure_unit": "bar" }, { "enthalpy_of_adsorption_average": [ -5.3762452088166, -5.304498349588, -5.1469837785704 ], "enthalpy_of_adsorption_dev": [ 
0.16413676386221, 0.23624406142692, 0.16877234291986 ], "enthalpy_of_adsorption_unit": "kJ/mol", "loading_absolute_average": [ 0.13688033329639, 0.64822632568393, 8.2218063857542 ], "loading_absolute_dev": [ 0.0022470007645714, 0.015908634630445, 0.063314699465606 ], "loading_absolute_unit": "mol/kg", "pressure": [ 1.0, 5.0, 100 ], "pressure_unit": "bar" }, { "enthalpy_of_adsorption_average": [ -5.3995609987279, -5.5404431584811, -5.410077906097 ], "enthalpy_of_adsorption_dev": [ 0.095159861315507, 0.081469905963932, 0.1393537452296 ], "enthalpy_of_adsorption_unit": "kJ/mol", "loading_absolute_average": [ 0.04589212925196, 0.22723251444794, 3.8118903657499 ], "loading_absolute_dev": [ 0.0018452227888317, 0.0031557689853122, 0.047824194130595 ], "loading_absolute_unit": "mol/kg", "pressure": [ 1.0, 5.0, 100 ], "pressure_unit": "bar" } ], "temperature": [ 77, 198, 298 ], "temperature_unit": "K" }
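Because output_parameters stores the isotherms as parallel lists, a small helper makes it easy to look up a loading value. The sketch below uses a trimmed copy of the dictionary above; in practice you would start from node.outputs.output_parameters.get_dict():

```python
# Trimmed copy of the output_parameters dictionary shown above.
results = {
    "temperature": [77, 198, 298],  # K
    "isotherm": [
        {"pressure": [1.0, 5.0, 100],
         "loading_absolute_average": [8.8763231830174, 13.809017193987, 24.592736102413]},
        {"pressure": [1.0, 5.0, 100],
         "loading_absolute_average": [0.13688033329639, 0.64822632568393, 8.2218063857542]},
        {"pressure": [1.0, 5.0, 100],
         "loading_absolute_average": [0.04589212925196, 0.22723251444794, 3.8118903657499]},
    ],
}

def loading_at(results, temperature, pressure):
    """Return the absolute loading (mol/kg) at a given T (K) and p (bar)."""
    iso = results["isotherm"][results["temperature"].index(temperature)]
    return iso["loading_absolute_average"][iso["pressure"].index(pressure)]

print(loading_at(results, 198, 5.0))  # 0.64822632568393
```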
IsothermCalcPE work chain¶
The IsothermCalcPEWorkChain work chain takes as input a structure with partial charges, computes the isotherms for CO2 and N2 at ambient temperature, and models the process of carbon capture and compression for geological sequestration. The final outcome reports the performance of the adsorbent for this application, including the CO2 parasitic energy, i.e., the energy required to separate and compress one kilogram of CO2 using that material. The default input mixture is coal post-combustion flue gas, but natural gas post-combustion and air mixtures are also available.
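To make the parasitic-energy figure of merit concrete, a deliberately simplified estimate is sketched below. The function, its argument names, and the 0.75 heat-to-work factor are illustrative assumptions, not the actual calc_pe implementation:

```python
def parasitic_energy(q_thermal, w_compression, m_co2, heat_to_work=0.75):
    """Hypothetical, simplified parasitic energy in MJ per kg of CO2 captured.

    q_thermal: heat needed for sorbent regeneration (MJ), converted to an
        equivalent electric work via the assumed `heat_to_work` factor;
    w_compression: work to compress the captured CO2 (MJ);
    m_co2: mass of CO2 captured (kg).
    """
    return (heat_to_work * q_thermal + w_compression) / m_co2

# Made-up example numbers: 2 MJ of regeneration heat and 1 MJ of
# compression work to capture 3 kg of CO2.
pe = parasitic_energy(q_thermal=2.0, w_compression=1.0, m_co2=3.0)
```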
- workchain: aiida_lsmo.workchains.IsothermCalcPEWorkChain
Compute the CO2 parasitic energy (PE) after running IsothermWorkChain for CO2 and N2 at 300 K.
Inputs:
- geometric, Dict, optional – [Only used by IsothermMultiTempWorkChain] Already computed geometric properties
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, optional – Parameters for Isotherm work chain
- pe_parameters, Dict, optional – Parameters for PE process modelling
- raspa_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- raspa, Namespace
Namespace Ports
- block_pocket, Namespace – Zeo++ block pocket file
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input file(s)
- framework, Namespace – Input framework(s)
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parent_folder, RemoteData, optional – Remote folder used to continue the same simulation, starting from the binary restarts.
- retrieved_parent_folder, FolderData, optional – To use an old calculation as a starting point for a new one.
- settings, Dict, optional – Additional input parameters
- structure, CifData, required – Adsorbent framework CIF.
- zeopp, Namespace
Namespace Ports
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, optional, non_db
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
Outputs:
- co2, Namespace
Namespace Ports
- block, SinglefileData, optional – Blocked pockets output file.
- output_parameters, Dict, required – Results of the single-temperature work chain: keys can vary depending on the is_porous and is_kh_enough booleans.
- n2, Namespace
Namespace Ports
- block, SinglefileData, optional – Blocked pockets output file.
- output_parameters, Dict, required – Results of the single-temperature work chain: keys can vary depending on the is_porous and is_kh_enough booleans.
- output_parameters, Dict, required – Output parameters of the calc_PE calculation.
Outline:
run_isotherms(Run Isotherm work chain for CO2 and N2.)
run_calcpe(Expose isotherm outputs, prepare calc_pe, run it and return the output.)
CP2K multistage work chain¶
The Cp2kMultistageWorkChain work chain is meant to automate DFT optimizations in CP2K and to guess good simulation parameters, but it is written in such a versatile fashion that it can be used for many other purposes.
What it can do:
- Given a protocol YAML with different settings, the work chain iterates until the SCF calculation converges. The idea is to use general options for settings_0 and increasingly robust ones for the subsequent settings.
- The protocol YAML also contains a number of stages, i.e., different MOTION settings, that are executed one after the other, each restarting from the previous calculation. During the first stage, stage_0, different settings are tested until the SCF converges at the last step of stage_0. If this does not happen, the work chain stops. Otherwise it continues running stage_1 and all the other stages included in the protocol. These stages can be used to run a robust cell optimization (first some MD steps to escape metastable geometries, then the final optimization) or ab initio MD (first equilibrating the system with a shorter time constant for the thermostat, then collecting statistics in the second stage).
- Some default protocols are provided in workchains/multistage_protocols and can be selected with simple tags such as test, default and robust_conv. Otherwise, users can take inspiration from these to write their own protocol and pass it to the work chain.
- Compute the band gap.
- You can restart from a previous calculation, e.g., from an already computed wavefunction.
What it cannot do:
- Run CP2K calculations with k-points.
- Run advanced CP2K calculations, i.e., other than ENERGY, GEO_OPT, CELL_OPT and MD.
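The settings/stage logic described above can be sketched in plain Python (pseudologic only, not the actual AiiDA implementation):

```python
def run_multistage(stages, settings_list, scf_converges):
    """Try settings until stage 0's SCF converges, then run all stages.

    `scf_converges` is a stand-in for running a Cp2kBaseWorkChain and
    checking SCF convergence at the last step of the stage.
    """
    chosen = next((s for s in settings_list if scf_converges(stages[0], s)), None)
    if chosen is None:
        return None  # no settings converged stage_0: the work chain stops
    # The remaining stages reuse the chosen settings, each restarting from
    # the previous stage's geometry and wavefunction.
    return chosen, list(stages)

# Hypothetical convergence oracle: only settings_1 converges.
result = run_multistage(
    ["stage_0", "stage_1"],
    ["settings_0", "settings_1"],
    lambda stage, settings: settings == "settings_1",
)
print(result)  # ('settings_1', ['stage_0', 'stage_1'])
```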
- workchain: aiida_lsmo.workchains.Cp2kMultistageWorkChain
Submits Cp2kBase work chains for ENERGY, GEO_OPT, CELL_OPT and MD jobs iteratively. The protocol_yaml file contains a series of settings_x and stage_x: the work chain starts by running the settings_0/stage_0 calculation and, in case of failure, changes the settings until the SCF of stage_0 converges. Then it uses the same settings to run the next stages (i.e., stage_1, etc.).
Inputs:
- cp2k_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- cp2k, Namespace
Namespace Ports
- basissets, Namespace – A dictionary of basissets to be used in the calculations: key is the atomic symbol, value is either a single basisset or a list of basissets. If multiple basissets for a single symbol are passed, it is mandatory to specify a KIND section with a BASIS_SET keyword matching the names (or aliases) of the basissets.
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input files.
- kpoints, KpointsData, optional – Input kpoint mesh.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db
- parser_name, str, optional, non_db – Parser of the calculation: the default is cp2k_advanced_parser to get the necessary info
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, optional – Specify custom CP2K settings to overwrite the input dictionary just before submitting the CalcJob
- parent_calc_folder, RemoteData, optional – Working directory of a previously ran calculation to restart from.
- pseudos, Namespace – A dictionary of pseudopotentials to be used in the calculations: key is the atomic symbol, value is either a single pseudopotential or a list of pseudopotentials. If multiple pseudos for a single symbol are passed, it is mandatory to specify a KIND section with a PSEUDOPOTENTIAL keyword matching the names (or aliases) of the pseudopotentials.
- settings, Dict, optional – Optional input parameters.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- min_cell_size, Float, optional – To avoid using k-points, extend the cell so that min(perp_width)>min_cell_size
- parent_calc_folder, RemoteData, optional – Provide an initial parent folder that contains the wavefunction for restart
- protocol_modify, Dict, optional – Specify custom settings that overwrite the yaml settings
- protocol_tag, Str, optional – The tag of the protocol to be read from {tag}.yaml unless protocol_yaml input is specified
- protocol_yaml, SinglefileData, optional – Specify a custom yaml file with the multistage settings (and ignore protocol_tag)
- starting_settings_idx, Int, optional – If idx>0 is chosen, jumps directly to overwrite settings_0 with settings_{idx}
- structure, StructureData, optional – Input structure
Outputs:
- last_input_parameters, Dict, optional – CP2K input parameters (possibly working ones) used in the last stage
- output_parameters, Dict, optional – Output CP2K parameters of all the stages, merged together
- output_structure, StructureData, optional – Processed structure (missing if only ENERGY calculation is performed)
- remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
Outline:
setup_multistage (Setup initial parameters.)
while (should_run_stage0):
    run_stage (Check for restart, prepare input, submit and direct output to context.)
    inspect_and_update_settings_stage0 (Inspect the stage0/settings_{idx} calculation and check whether the settings need to be updated and the calculation resubmitted.)
inspect_and_update_stage (Update geometry, parent folder and the new &MOTION settings.)
while (should_run_stage):
    run_stage (Check for restart, prepare input, submit and direct output to context.)
    inspect_and_update_stage (Update geometry, parent folder and the new &MOTION settings.)
results (Gather final outputs of the work chain.)
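The outline can be condensed into a toy control-flow sketch (plain Python stand-ins, no AiiDA calls; note that the real work chain actually submits the failing settings before discarding them, while this sketch only checks them):

```python
# Toy sketch of the Multistage outline: stage 0 cycles through the available
# settings until the SCF check passes, then all stages reuse the working ones.
def run_multistage(stages, settings, scf_ok):
    idx = 0
    while not scf_ok(settings[idx]):  # inspect_and_update_settings_stage0
        idx += 1                      # discard these settings, try the next
    # run every stage with the validated settings (the run_stage loop)
    return [(stage, settings[idx]) for stage in stages]

plan = run_multistage(
    stages=['stage_0', 'stage_1'],
    settings=['settings_0', 'settings_1'],
    scf_ok=lambda s: s == 'settings_1',  # pretend settings_0 gives a bad band gap
)
print(plan)  # [('stage_0', 'settings_1'), ('stage_1', 'settings_1')]
```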
Inputs details
structure (StructureData, NOTE: this is not a CifData) is the system to investigate. It can also be a molecule in a box, not necessarily a 2D/3D framework.
protocol_tag (Str) calls a default protocol. Currently available:
- Main choice: uses PBE-D3(BJ) with a 600 Ry cutoff, DZVP basis set and GTH pseudopotentials. The first settings use OT; if that does not work, it switches to diagonalization and smearing. As for the stages, it runs a cell optimization, a short NPT MD and again a cell optimization.
- Quick protocol for testing purposes.
- Similar to …
- Same settings as …
protocol_yaml (SinglefileData) is used to specify a custom protocol through a YAML file. See the default YAML file as an example. Note that the dictionary needs to contain the following keys:
- A user-friendly description of the protocol.
- Initial magnetization for metal atoms (affects the spin multiplicity of the calculation). Use …
- Dictionary of …
- Dictionary of …
- Any …
- Settings updated in …
- CP2K settings that are updated at every stage.
Other keys may be added in the future to introduce new functionalities to the Multistage work chain.
starting_settings_idx (Int) is used to start from a custom index of the settings. If, for example, you know that the material is conductive and requires smearing, you can use Int(1) to jump directly to settings_1, which applies electron smearing: this is the case for the default protocol.
min_cell_size (Float) is used to extend the unit cell so that the minimum perpendicular width of the cell is bigger than a specified value. This is needed when a cell length is too narrow and the plane-wave auxiliary basis set is not accurate enough at the Gamma point only. It may also be needed for hybrid range-separated potentials, which require a sufficiently large non-overlapping cutoff.
Note
Need to explain it further in Technicalities.
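As a purely illustrative sketch (not the plugin's internal code) of how min_cell_size translates into a supercell: the perpendicular width along a lattice vector is the cell volume divided by the area of the opposite face, and each direction is repeated until that width exceeds the threshold. The cell vectors and threshold below are made-up values.

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def perp_widths(cell):
    """Perpendicular widths of the cell: volume / area of each face."""
    a, b, c = cell
    volume = abs(dot(a, cross(b, c)))
    return [volume / norm(cross(b, c)),
            volume / norm(cross(a, c)),
            volume / norm(cross(a, b))]

def multipliers(cell, min_cell_size):
    """Repetitions needed along each vector so min(perp_width) > min_cell_size."""
    return [math.ceil(min_cell_size / w) for w in perp_widths(cell)]

cell = [(4.05, 0.0, 0.0), (0.0, 4.05, 0.0), (0.0, 0.0, 4.05)]  # cubic cell, Angstrom
print(multipliers(cell, 10.0))  # a 4.05 A cube needs 3 repetitions per axis
```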
parent_calc_folder (RemoteData) is used to restart from a previously computed wave function.
cp2k_base.cp2k.parameters (Dict) can be used to specify CP2K parameters that will always be overwritten just before submitting every calculation.
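To illustrate the "always overwritten" behaviour, here is a toy recursive merge (a hypothetical helper, not the plugin's internal function) showing how such a Dict could take precedence over the protocol-generated CP2K input; the keyword values are illustrative, not recommendations:

```python
def deep_merge(base, override):
    """Recursively merge `override` into `base`, overriding leaf values."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# input generated from the protocol (made-up fragment)
protocol_input = {'FORCE_EVAL': {'DFT': {
    'MGRID': {'CUTOFF': 600},
    'XC': {'XC_FUNCTIONAL': {'_': 'PBE'}},
}}}
# user-supplied cp2k_base.cp2k.parameters content
user_params = {'FORCE_EVAL': {'DFT': {'MGRID': {'CUTOFF': 800}}}}

final = deep_merge(protocol_input, user_params)
assert final['FORCE_EVAL']['DFT']['MGRID']['CUTOFF'] == 800       # overwritten
assert final['FORCE_EVAL']['DFT']['XC']['XC_FUNCTIONAL']['_'] == 'PBE'  # kept
```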
Outputs details
output_structure (StructureData) is the final structure at the end of the last stage. It is not output in the case of a single-point calculation, since that does not update the geometry of the system.
output_parameters (Dict): here is an example for aluminum, where the settings_0 calculation is discarded because of a negative band gap; the work chain therefore switches to settings_1, which makes the SCF converge and is used for both stages:

{
  "cell_resized": "1x1x1",
  "dft_type": "RKS",
  "final_bandgap_spin1_au": 6.1299999999931e-06,
  "final_bandgap_spin2_au": 6.1299999999931e-06,
  "last_tag": "stage_1_settings_1_valid",
  "natoms": 4,
  "nsettings_discarded": 1,
  "nstages_valid": 2,
  "stage_info": {
    "bandgap_spin1_au": [0.0, 6.1299999999931e-06],
    "bandgap_spin2_au": [0.0, 6.1299999999931e-06],
    "final_edens_rspace": [-3e-09, -3e-09],
    "nsteps": [1, 2],
    "opt_converged": [true, false]
  },
  "step_info": {
    "cell_a_angs": [4.05, 4.05, 4.05, 4.05],
    "cell_alp_deg": [90.0, 90.0, 90.0, 90.0],
    "cell_b_angs": [4.05, 4.05, 4.05, 4.05],
    "cell_bet_deg": [90.0, 90.0, 90.0, 90.0],
    "cell_c_angs": [4.05, 4.05, 4.05, 4.05],
    "cell_gam_deg": [90.0, 90.0, 90.0, 90.0],
    "cell_vol_angs3": [66.409, 66.409, 66.409, 66.409],
    "dispersion_energy_au": [-0.04894693184602, -0.04894693184602, -0.04894696543385, -0.04894705992872],
    "energy_au": [-8.0811276714482, -8.0811276714483, -8.0811249649336, -8.0811173120933],
    "max_grad_au": [null, 0.0, null, null],
    "max_step_au": [null, 0.0, null, null],
    "pressure_bar": [null, null, 58260.2982324, 58201.2710544],
    "rms_grad_au": [null, 0.0, null, null],
    "rms_step_au": [null, 0.0, null, null],
    "scf_converged": [true, true, true, true],
    "step": [0, 1, 1, 2]
  }
}

last_input_parameters (Dict) reports the inputs that were used for the last CP2K calculation. These are possibly the ones that make the SCF converge, so the user can inspect them and reuse them for other direct CP2K calculations in AiiDA.
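A short sketch of post-processing such an output_parameters dictionary per stage (shown here on a plain dictionary with the values from the aluminum example; on an AiiDA node you would first call node.get_dict()):

```python
# Summarize band gap and geometry convergence per valid stage.
output = {
    "nstages_valid": 2,
    "stage_info": {
        "bandgap_spin1_au": [0.0, 6.13e-06],
        "opt_converged": [True, False],
    },
}
summary = {
    f"stage_{i}": {"bandgap_au": gap, "opt_converged": conv}
    for i, (gap, conv) in enumerate(
        zip(output["stage_info"]["bandgap_spin1_au"],
            output["stage_info"]["opt_converged"]))
}
print(summary)
```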
Usage
See the examples provided with the plugin. The report gives very useful insight into what happened during the run. Here is the report for the aluminum example:
2019-11-22 16:54:52 [90962 | REPORT]: [266248|Cp2kMultistageWorkChain|setup_multistage]: Unit cell was NOT resized
2019-11-22 16:54:52 [90963 | REPORT]: [266248|Cp2kMultistageWorkChain|run_stage]: submitted Cp2kBaseWorkChain for stage_0/settings_0
2019-11-22 16:54:52 [90964 | REPORT]: [266252|Cp2kBaseWorkChain|run_calculation]: launching Cp2kCalculation<266253> iteration #1
2019-11-22 16:55:13 [90965 | REPORT]: [266252|Cp2kBaseWorkChain|inspect_calculation]: Cp2kCalculation<266253> completed successfully
2019-11-22 16:55:13 [90966 | REPORT]: [266252|Cp2kBaseWorkChain|results]: work chain completed after 1 iterations
2019-11-22 16:55:14 [90967 | REPORT]: [266252|Cp2kBaseWorkChain|on_terminated]: remote folders will not be cleaned
2019-11-22 16:55:14 [90968 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_settings_stage0]: Bandgaps spin1/spin2: -0.058 and -0.058 ev
2019-11-22 16:55:14 [90969 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_settings_stage0]: BAD SETTINGS: band gap is < 0.100 eV
2019-11-22 16:55:14 [90970 | REPORT]: [266248|Cp2kMultistageWorkChain|run_stage]: submitted Cp2kBaseWorkChain for stage_0/settings_1
2019-11-22 16:55:15 [90971 | REPORT]: [266259|Cp2kBaseWorkChain|run_calculation]: launching Cp2kCalculation<266260> iteration #1
2019-11-22 16:55:34 [90972 | REPORT]: [266259|Cp2kBaseWorkChain|inspect_calculation]: Cp2kCalculation<266260> completed successfully
2019-11-22 16:55:34 [90973 | REPORT]: [266259|Cp2kBaseWorkChain|results]: work chain completed after 1 iterations
2019-11-22 16:55:34 [90974 | REPORT]: [266259|Cp2kBaseWorkChain|on_terminated]: remote folders will not be cleaned
2019-11-22 16:55:35 [90975 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_settings_stage0]: Bandgaps spin1/spin2: 0.000 and 0.000 ev
2019-11-22 16:55:35 [90976 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_stage]: Structure updated for next stage
2019-11-22 16:55:35 [90977 | REPORT]: [266248|Cp2kMultistageWorkChain|run_stage]: submitted Cp2kBaseWorkChain for stage_1/settings_1
2019-11-22 16:55:35 [90978 | REPORT]: [266266|Cp2kBaseWorkChain|run_calculation]: launching Cp2kCalculation<266267> iteration #1
2019-11-22 16:55:53 [90979 | REPORT]: [266266|Cp2kBaseWorkChain|inspect_calculation]: Cp2kCalculation<266267> completed successfully
2019-11-22 16:55:53 [90980 | REPORT]: [266266|Cp2kBaseWorkChain|results]: work chain completed after 1 iterations
2019-11-22 16:55:54 [90981 | REPORT]: [266266|Cp2kBaseWorkChain|on_terminated]: remote folders will not be cleaned
2019-11-22 16:55:54 [90982 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_stage]: Structure updated for next stage
2019-11-22 16:55:54 [90983 | REPORT]: [266248|Cp2kMultistageWorkChain|inspect_and_update_stage]: All stages computed, finishing...
2019-11-22 16:55:55 [90984 | REPORT]: [266248|Cp2kMultistageWorkChain|results]: Outputs: Dict<266273> and StructureData<266271>
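The stage/settings sequence can be recovered programmatically from such a report; the regex below targets the "submitted ... for stage_X/settings_Y" lines, with the log format assumed from the example above:

```python
import re

# Three representative lines from the report above.
report = """
[266248|Cp2kMultistageWorkChain|run_stage]: submitted Cp2kBaseWorkChain for stage_0/settings_0
[266248|Cp2kMultistageWorkChain|run_stage]: submitted Cp2kBaseWorkChain for stage_0/settings_1
[266248|Cp2kMultistageWorkChain|run_stage]: submitted Cp2kBaseWorkChain for stage_1/settings_1
"""
stages = re.findall(r"submitted Cp2kBaseWorkChain for (stage_\d+/settings_\d+)", report)
print(stages)  # ['stage_0/settings_0', 'stage_0/settings_1', 'stage_1/settings_1']
```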
Cp2kMultistageDdec work chain¶
The Cp2kMultistageDdecWorkChain() work chain combines the CP2K Multistage work chain and the DDEC calculation, with the aim of optimizing the geometry of a structure and computing its partial charges using the DDEC protocol.
- workchainaiida_lsmo.workchains.Cp2kMultistageDdecWorkChain
A workchain that combines: Cp2kMultistageWorkChain + Cp2kDdecWorkChain
Inputs:
- cp2k_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- cp2k, Namespace
Namespace Ports
- basissets, Namespace – A dictionary of basissets to be used in the calculations: key is the atomic symbol, value is either a single basisset or a list of basissets. If multiple basissets for a single symbol are passed, it is mandatory to specify a KIND section with a BASIS_SET keyword matching the names (or aliases) of the basissets.
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input files.
- kpoints, KpointsData, optional – Input kpoint mesh.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db
- parser_name, str, optional, non_db – Parser of the calculation: the default is cp2k_advanced_parser to get the necessary info
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, optional – Specify custom CP2K settings to overwrite the input dictionary just before submitting the CalcJob
- parent_calc_folder, RemoteData, optional – Working directory of a previously run calculation to restart from.
- pseudos, Namespace – A dictionary of pseudopotentials to be used in the calculations: key is the atomic symbol, value is either a single pseudopotential or a list of pseudopotentials. If multiple pseudos for a single symbol are passed, it is mandatory to specify a KIND section with a PSEUDOPOTENTIAL keyword matching the names (or aliases) of the pseudopotentials.
- settings, Dict, optional – Optional input parameters.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- ddec, Namespace
Namespace Ports
- charge_density_folder, RemoteData, optional – Use a remote folder (for restarts and similar)
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, required – Input parameters such as net charge, protocol, atomic densities path, …
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- min_cell_size, Float, optional – To avoid using k-points, extend the cell so that min(perp_width)>min_cell_size
- parent_calc_folder, RemoteData, optional – Provide an initial parent folder that contains the wavefunction for restart
- protocol_modify, Dict, optional – Specify custom settings that overwrite the yaml settings
- protocol_tag, Str, optional – The tag of the protocol to be read from {tag}.yaml unless protocol_yaml input is specified
- protocol_yaml, SinglefileData, optional – Specify a custom yaml file with the multistage settings (and ignore protocol_tag)
- starting_settings_idx, Int, optional – If idx>0 is chosen, jumps directly to overwrite settings_0 with settings_{idx}
- structure, StructureData, optional – Input structure
Outputs:
- last_input_parameters, Dict, optional – CP2K input parameters (possibly working ones) used in the last stage
- output_parameters, Dict, optional – Output CP2K parameters of all the stages, merged together
- remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
- structure_ddec, CifData, required – structure with DDEC charges
Outline:
run_cp2kmultistage (Run CP2K-Multistage)
run_cp2kddec (Pass the Cp2kMultistageWorkChain outputs as inputs for Cp2kDdecWorkChain: cp2k_base (metadata), cp2k_params, structure and WFN.)
return_results (Return exposed outputs and print the pk of the CifData w/DDEC)
ZeoppMultistageDdec work chain¶
The ZeoppMultistageDdecWorkChain() work chain is similar to Cp2kMultistageDdec, but it additionally runs a geometry characterization of the structure using Zeo++ (NetworkCalculation) before and after, with the aim of assessing the structural changes due to the cell/geometry optimization.
- workchainaiida_lsmo.workchains.ZeoppMultistageDdecWorkChain
A workchain that combines: Zeopp + Cp2kMultistageWorkChain + Cp2kDdecWorkChain + Zeopp
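A toy post-processing sketch of the kind of before/after comparison this work chain enables; the dictionary keys only mimic Zeo++ geometric properties (density, accessible surface area, accessible volume) and all numbers are made up:

```python
# Hypothetical Zeo++ properties on the input vs. the optimized structure.
before = {'Density': 0.88, 'ASA_m^2/g': 1850.0, 'AV_cm^3/g': 0.74}
after = {'Density': 0.91, 'ASA_m^2/g': 1790.0, 'AV_cm^3/g': 0.70}

# Percent change of each property due to the cell/geometry optimization.
changes = {k: round(100 * (after[k] - before[k]) / before[k], 1) for k in before}
print(changes)  # {'Density': 3.4, 'ASA_m^2/g': -3.2, 'AV_cm^3/g': -5.4}
```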
Inputs:
- cp2k_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- cp2k, Namespace
Namespace Ports
- basissets, Namespace – A dictionary of basissets to be used in the calculations: key is the atomic symbol, value is either a single basisset or a list of basissets. If multiple basissets for a single symbol are passed, it is mandatory to specify a KIND section with a BASIS_SET keyword matching the names (or aliases) of the basissets.
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input files.
- kpoints, KpointsData, optional – Input kpoint mesh.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db
- parser_name, str, optional, non_db – Parser of the calculation: the default is cp2k_advanced_parser to get the necessary info
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, optional – Specify custom CP2K settings to overwrite the input dictionary just before submitting the CalcJob
- parent_calc_folder, RemoteData, optional – Working directory of a previously run calculation to restart from.
- pseudos, Namespace – A dictionary of pseudopotentials to be used in the calculations: key is the atomic symbol, value is either a single pseudopotential or a list of pseudopotentials. If multiple pseudos for a single symbol are passed, it is mandatory to specify a KIND section with a PSEUDOPOTENTIAL keyword matching the names (or aliases) of the pseudopotentials.
- settings, Dict, optional – Optional input parameters.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- ddec, Namespace
Namespace Ports
- charge_density_folder, RemoteData, optional – Use a remote folder (for restarts and similar)
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, required – Input parameters such as net charge, protocol, atomic densities path, …
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- min_cell_size, Float, optional – To avoid using k-points, extend the cell so that min(perp_width)>min_cell_size
- parent_calc_folder, RemoteData, optional – Provide an initial parent folder that contains the wavefunction for restart
- protocol_modify, Dict, optional – Specify custom settings that overwrite the yaml settings
- protocol_tag, Str, optional – The tag of the protocol to be read from {tag}.yaml unless protocol_yaml input is specified
- protocol_yaml, SinglefileData, optional – Specify a custom yaml file with the multistage settings (and ignore protocol_tag)
- starting_settings_idx, Int, optional – If idx>0 is chosen, jumps directly to overwrite settings_0 with settings_{idx}
- structure, CifData, required – input structure
- zeopp, Namespace
Namespace Ports
- atomic_radii, SinglefileData, optional – atomic radii file
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, optional, non_db
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, NetworkParameters, optional – command line parameters for zeo++
Outputs:
- last_input_parameters, Dict, optional – CP2K input parameters used (and possibly working) used in the last stage
- output_parameters, Dict, optional – Output CP2K parameters of all the stages, merged together
- remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
- structure_ddec, CifData, required – structure with DDEC charges
- zeopp_after_opt, Namespace
Namespace Ports
- output_parameters, Dict, required – key-value pairs parsed from zeo++ output file(s).
- zeopp_before_opt, Namespace
Namespace Ports
- output_parameters, Dict, required – key-value pairs parsed from zeo++ output file(s).
Outline:
run_zeopp_before(Run Zeo++ for the original structure) run_multistageddec(Run MultistageDdec work chain) run_zeopp_after(Run Zeo++ for the optimized structure) return_results(Return exposed outputs)
SimAnnealing work chain¶
The SimAnnealingWorkChain()
work chain
is designed to find the minimum energy configuration for a given number of gas molecules in the pores of a framework.
It runs several NVT Monte-Carlo simulations in RASPA at decreasing temperature in order to let the system move to its global minimum (simulated annealing),
and then performs a geometry optimization for the final fine tuning of the adsorbate position(s).
- workchainaiida_lsmo.workchains.SimAnnealingWorkChain
A work chain to compute the minimum energy geometry of a molecule inside a framework, using simulated annealing, i.e., decreasing the temperature of a Monte Carlo simulation and finally running an energy minimization step.
Inputs:
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- molecule, (Str, Dict), required – Adsorbate molecule: settings to be read from the yaml. Advanced: input a Dict for non-standard settings.
- parameters, Dict, required – Parameters for the SimAnnealing workchain: will be merged with default ones.
- raspa_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- raspa, Namespace
Namespace Ports
- block_pocket, Namespace – Zeo++ block pocket file
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input file(s)
- framework, Namespace – Input framework(s)
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parent_folder, RemoteData, optional – Remote folder used to continue the same simulation, starting from the binary restarts.
- retrieved_parent_folder, FolderData, optional – To use an old calculation as a starting point for a new one.
- settings, Dict, optional – Additional input parameters
- structure, CifData, required – Adsorbent framework CIF.
Outputs:
- loaded_molecule, CifData, required – CIF containing the final position of the molecule.
- loaded_structure, CifData, required – CIF containing the loaded structure.
- output_parameters, Dict, optional – Information about the final configuration.
Outline:
setup(Initialize the parameters) while(should_run_nvt) run_raspa_nvt(Run an NVT calculation in Raspa.) run_raspa_min(Run an energy minimization in Raspa.) return_results(Return molecule position and energy info.)
Inputs details
parameters (Dict) modifies the default parameters:

PARAMETERS_DEFAULT = {
    "ff_framework": "UFF",                  # (str) Forcefield of the structure.
    "ff_separate_interactions": False,      # (bool) Use "separate_interactions" in the FF builder.
    "ff_mixing_rule": "Lorentz-Berthelot",  # (string) Choose 'Lorentz-Berthelot' or 'Jorgensen'.
    "ff_tail_corrections": True,            # (bool) Apply tail corrections.
    "ff_shifted": False,                    # (bool) Shift or truncate the potential at cutoff.
    "ff_cutoff": 12.0,                      # (float) CutOff truncation for the VdW interactions (Angstrom).
    "temperature_list": [300, 250, 200, 250, 100, 50],  # (list) List of decreasing temperatures for the annealing.
    "mc_steps": int(1e3),                   # (int) Number of MC cycles.
    "number_of_molecules": 1                # (int) Number of molecules loaded in the framework.
}
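Only the keys you want to change need to be supplied; the work chain merges your input over the defaults. A minimal sketch of that merge in plain Python (the override values, e.g. "GAFF", are hypothetical illustrations, not recommendations):

```python
# Defaults as documented above (abridged to the keys used in this sketch).
PARAMETERS_DEFAULT = {
    "ff_framework": "UFF",
    "ff_cutoff": 12.0,
    "mc_steps": int(1e3),
    "number_of_molecules": 1,
}

# User-supplied Dict content: only the keys to change.
user_parameters = {"ff_framework": "GAFF", "number_of_molecules": 3}

# Shallow merge: user values win over the defaults.
merged = {**PARAMETERS_DEFAULT, **user_parameters}

print(merged["ff_framework"], merged["number_of_molecules"], merged["mc_steps"])
# GAFF 3 1000
```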
Outputs details
output_parameters (Dict), example:

{
    "description": [
        "NVT simulation at 300 K",
        "NVT simulation at 250 K",
        "NVT simulation at 200 K",
        "NVT simulation at 250 K",
        "NVT simulation at 100 K",
        "NVT simulation at 50 K",
        "Final energy minimization"
    ],
    "energy_ads/ads_coulomb_final": [ -0.00095657162276787, ... 3.5423777787399e-06 ],
    "energy_ads/ads_tot_final": [ -0.00095657162276787, ... 3.5423777787399e-06 ],
    "energy_ads/ads_vdw_final": [ 0.0, ... 0.0 ],
    "energy_host/ads_coulomb_final": [ -12.696035310164, ... -15.592788991158 ],
    "energy_host/ads_tot_final": [ -30.545798720022, ... -36.132005060753 ],
    "energy_host/ads_vdw_final": [ -17.849763409859, ... -20.539216069678 ],
    "energy_unit": "kJ/mol",
    "number_of_molecules": 1
}
In particular:
- host/ads describes the interaction between host and (all) adsorbates, ads/ads the interaction between adsorbates
- The binding energy is the final value of energy_host/ads_tot_final (you still need to divide by number_of_molecules)
- energy_ads/ads_coulomb_final and energy_ads/ads_tot_final may be non-zero even for a single molecule due to rounding errors in Ewald summation
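As an illustration, the final host/adsorbate total interaction energy per molecule can be extracted from the example output above (a sketch working on a plain dict; in practice you would read the stored Dict node, e.g. via node.outputs.output_parameters.get_dict()):

```python
# Abridged output_parameters from the example above.
output = {
    "energy_host/ads_tot_final": [-30.545798720022, -36.132005060753],
    "energy_unit": "kJ/mol",
    "number_of_molecules": 1,
}

# The last entry corresponds to the final energy minimization step.
binding_energy = output["energy_host/ads_tot_final"][-1] / output["number_of_molecules"]
print(f"{binding_energy:.3f} {output['energy_unit']}")  # -36.132 kJ/mol
```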
Cp2kBindingEnergy work chain¶
The Cp2kBindingEnergyWorkChain()
work chain
takes as an input a CIF structure and the initial position of a molecule in its pore,
optimizes the molecule's geometry keeping the framework rigid, and computes the BSSE-corrected interaction energy.
The work chain is similar to CP2K's MultistageWorkChain in reading the settings from a YAML protocol
and resubmitting the calculation with updated settings in case of failure,
but the only step is a hard-coded GEO_OPT
simulation with 200 max steps.
NOTE:
It is better to start with the settings of a previously working MultistageWorkChain, if already available. Otherwise, the work chain may run for 200 steps before realizing that the settings are not good and switching to new ones.
No restart is allowed, since the number of atoms changes for the BSSE calculation: therefore, the wave function is recomputed 5 times from scratch. This needs to be fixed in the future.
If
structure
andmolecule
StructureData
do not have the same size for the unit cell, the work chain will complain and stop.
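The cell-size check mentioned above can be sketched in plain Python, assuming 3x3 cell matrices in Angstrom (a hypothetical helper for illustration, not the work chain's actual code; the cell values are made up):

```python
def cells_match(cell_a, cell_b, tol=1e-6):
    """Return True if two 3x3 cell matrices (lists of lists, Angstrom) agree within tol."""
    return all(
        abs(a - b) < tol
        for row_a, row_b in zip(cell_a, cell_b)
        for a, b in zip(row_a, row_b)
    )

# Hypothetical cubic cells: the molecule must be placed in the framework's unit cell.
framework_cell = [[25.83, 0.0, 0.0], [0.0, 25.83, 0.0], [0.0, 0.0, 25.83]]
molecule_cell = [[25.83, 0.0, 0.0], [0.0, 25.83, 0.0], [0.0, 0.0, 25.83]]

# If the cells differ, the work chain complains and stops.
assert cells_match(framework_cell, molecule_cell)
```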
- workchainaiida_lsmo.workchains.Cp2kBindingEnergyWorkChain
Submits Cp2kBase work chain for the structure + molecule system, first optimizing the geometry of the molecule and later computing the BSSE-corrected interaction energy. This work chain is inspired by Cp2kMultistage and shares some logic and data with it.
Inputs:
- cp2k_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- cp2k, Namespace
Namespace Ports
- basissets, Namespace – A dictionary of basissets to be used in the calculations: key is the atomic symbol, value is either a single basisset or a list of basissets. If multiple basissets for a single symbol are passed, it is mandatory to specify a KIND section with a BASIS_SET keyword matching the names (or aliases) of the basissets.
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input files.
- kpoints, KpointsData, optional – Input kpoint mesh.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, optional – Specify custom CP2K settings to overwrite the input dictionary just before submitting the CalcJob
- parent_calc_folder, RemoteData, optional – Working directory of a previously run calculation to restart from.
- pseudos, Namespace – A dictionary of pseudopotentials to be used in the calculations: key is the atomic symbol, value is either a single pseudopotential or a list of pseudopotentials. If multiple pseudos for a single symbol are passed, it is mandatory to specify a KIND section with a PSEUDOPOTENTIAL keyword matching the names (or aliases) of the pseudopotentials.
- settings, Dict, optional – Optional input parameters.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- molecule, StructureData, required – Input molecule in the unit cell of the structure.
- protocol_modify, Dict, optional – Specify custom settings that overwrite the yaml settings
- protocol_tag, Str, optional – The tag of the protocol to be read from {tag}.yaml. NOTE: only the settings are read, stage is set to GEO_OPT.
- protocol_yaml, SinglefileData, optional – Specify a custom yaml file. NOTE: only the settings are read, stage is set to GEO_OPT.
- starting_settings_idx, Int, optional – If idx>0 is chosen, jumps directly to overwrite settings_0 with settings_{idx}
- structure, StructureData, required – Input structure that contains the molecule.
Outputs:
- loaded_molecule, StructureData, required – Molecule geometry in the unit cell.
- loaded_structure, StructureData, required – Geometry of the system with both fragments.
- output_parameters, Dict, required – Info regarding the binding energy of the system.
- remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
Outline:
setup(Setup initial parameters.) while(should_run_geo_opt) run_geo_opt(Prepare inputs, submit and direct output to context.) inspect_and_update_settings_geo_opt(Inspect the settings_{idx} calculation and check whether the settings need to be updated and the calculation resubmitted.) run_bsse(Update parameters and run BSSE calculation. BSSE assumes that the molecule has no charge and unit multiplicity: this can be customized from builder.cp2k_base.cp2k.parameters.) results(Gather final outputs of the workchain.)
Inputs details
Look at the inputs details of the Multistage work chain for more information about the choice of the protocol (i.e., DFT settings).
Outputs details
output_parameters (Dict), example:

{
    "binding_energy_bsse": -1.7922110202537,
    "binding_energy_corr": -23.072114381515,
    "binding_energy_dispersion": -18.318476834858,
    "binding_energy_raw": -24.864325401768,
    "binding_energy_unit": "kJ/mol",
    "motion_opt_converged": false,
    "motion_step_info": {
        "dispersion_energy_au": [ -0.1611999344803, ... -0.16105256797101 ],
        "energy_au": [ -829.9150365907, ... -829.91870835924 ],
        "max_grad_au": [ null, 0.0082746554, ... 0.0030823925 ],
        "max_step_au": [ null, 0.0604411557, ... 0.0215865148 ],
        "rms_grad_au": [ null, 0.000915767, ... 0.0003886735 ],
        "rms_step_au": [ null, 0.0071240711, ... 0.0026174255 ],
        "scf_converged": [ true, ... true ]
    }
}
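In this example the corrected value equals the raw interaction energy minus the BSSE term, which can be checked directly (an observation about the numbers above, not a documented guarantee of the output format):

```python
# Values taken from the example output_parameters above (kJ/mol).
binding_energy_raw = -24.864325401768
binding_energy_bsse = -1.7922110202537
binding_energy_corr = -23.072114381515

# The BSSE-corrected binding energy is the raw value minus the BSSE term.
assert abs((binding_energy_raw - binding_energy_bsse) - binding_energy_corr) < 1e-9
```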
BindingSite work chain¶
The BindingSiteWorkChain()
work chain
simply combines SimAnnealingWorkChain()
and Cp2kBindingEnergyWorkChain()
.
The outputs from the two work chains are collected under the ff
and dft
namespaces, respectively.
- workchainaiida_lsmo.workchains.BindingSiteWorkChain
A workchain that combines SimAnnealing & Cp2kBindingEnergy
Inputs:
- cp2k_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- cp2k, Namespace
Namespace Ports
- basissets, Namespace – A dictionary of basissets to be used in the calculations: key is the atomic symbol, value is either a single basisset or a list of basissets. If multiple basissets for a single symbol are passed, it is mandatory to specify a KIND section with a BASIS_SET keyword matching the names (or aliases) of the basissets.
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input files.
- kpoints, KpointsData, optional – Input kpoint mesh.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, optional – Specify custom CP2K settings to overwrite the input dictionary just before submitting the CalcJob
- parent_calc_folder, RemoteData, optional – Working directory of a previously run calculation to restart from.
- pseudos, Namespace – A dictionary of pseudopotentials to be used in the calculations: key is the atomic symbol, value is either a single pseudopotential or a list of pseudopotentials. If multiple pseudos for a single symbol are passed, it is mandatory to specify a KIND section with a PSEUDOPOTENTIAL keyword matching the names (or aliases) of the pseudopotentials.
- settings, Dict, optional – Optional input parameters.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- molecule, (Str, Dict), required – Adsorbate molecule: settings to be read from the yaml. Advanced: input a Dict for non-standard settings.
- parameters, Dict, required – Parameters for the SimAnnealing workchain: will be merged with default ones.
- protocol_modify, Dict, optional – Specify custom settings that overwrite the yaml settings
- protocol_tag, Str, optional – The tag of the protocol yaml file. NOTE: only the settings are read, stage is set to GEO_OPT.
- protocol_yaml, SinglefileData, optional – Specify a custom yaml file. NOTE: only the settings are read, stage is set to GEO_OPT.
- raspa_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- raspa, Namespace
Namespace Ports
- block_pocket, Namespace – Zeo++ block pocket file
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input file(s)
- framework, Namespace – Input framework(s)
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parent_folder, RemoteData, optional – Remote folder used to continue the same simulation, starting from the binary restarts.
- retrieved_parent_folder, FolderData, optional – To use an old calculation as a starting point for a new one.
- settings, Dict, optional – Additional input parameters
- starting_settings_idx, Int, optional – If idx>0 is chosen, jumps directly to overwrite settings_0 with settings_{idx}
- structure, CifData, required – Adsorbent framework CIF.
Outputs:
- dft, Namespace
Namespace Ports
- loaded_molecule, StructureData, required – Molecule geometry in the unit cell.
- loaded_structure, StructureData, required – Geometry of the system with both fragments.
- output_parameters, Dict, required – Info regarding the binding energy of the system.
- remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
- ff, Namespace
Namespace Ports
- loaded_molecule, CifData, required – CIF containing the final position of the molecule.
- loaded_structure, CifData, required – CIF containing the loaded structure.
- output_parameters, Dict, optional – Information about the final configuration.
Outline:
run_sim_annealing(Run SimAnnealing) run_cp2k_binding_energy(Pass the output molecule's geometry to Cp2kBindingEnergy.) return_results(Return exposed outputs and info.)
SinglecompWidom work chain¶
The SinglecompWidomWorkChain()
work chain
allows one to compute the Henry coefficient of a molecule via the Widom insertion algorithm.
The user can specify a list of temperatures at which to perform these calculations, and the results in the output_parameters
Dict are presented as lists as well, one entry per temperature.
Blocking spheres are computed for the molecule before the calculation.
- workchain aiida_lsmo.workchains.SinglecompWidomWorkChain
Computes Widom insertion for a framework/box at different temperatures.
Inputs:
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- molecule, (Str, Dict, CifData), required – Adsorbate molecule: settings to be read from the yaml. Advanced: input a Dict for non-standard settings.
- parameters, Dict, required – Main parameters and settings for the calculations, to overwrite PARAMETERS_DEFAULT.
- raspa_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- raspa, Namespace
Namespace Ports
- block_pocket, Namespace – Zeo++ block pocket file
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input file(s)
- framework, Namespace – Input framework(s)
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parent_folder, RemoteData, optional – Remote folder used to continue the same simulation, starting from the binary restarts.
- retrieved_parent_folder, FolderData, optional – To use an old calculation as a starting point for a new one.
- settings, Dict, optional – Additional input parameters
- structure, CifData, optional – Adsorbent framework CIF or None for a box.
- zeopp, Namespace
Namespace Ports
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, optional, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
Outputs:
- block, SinglefileData, optional – Blocked pockets output file.
- output_parameters, Dict, required – Main results of the work chain.
Outline:
setup(Initialize parameters) if(should_run_zeopp) run_zeopp(Performs the full zeo++ calculation for all components.) inspect_zeopp_calc(Asserts whether all zeo++ calculations finished ok and exposes the block file.) run_raspa_widom(Run parallel Widom calculations in RASPA at all temperatures specified in the conditions setting.) return_output_parameters(Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.)
Inputs details
structure
(CifData): if missing, the calculation is performed for an empty box, which is convenient to get the widom_rosenbluth_factor_average for flexible molecules.
molecule
(see IsothermWorkChain)
parameters
(Dict) modifies the default parameters:
"ff_framework": "UFF",                 # str, Forcefield of the structure (used also as a definition of ff.rad for zeopp)
"ff_shifted": False,                   # bool, Shift or truncate at cutoff
"ff_tail_corrections": True,           # bool, Apply tail corrections
"ff_mixing_rule": 'Lorentz-Berthelot', # str, Mixing rule for the forcefield
"ff_separate_interactions": False,     # bool, if true use only ff_framework for framework-molecule interactions
"ff_cutoff": 12.0,                     # float, CutOff truncation for the VdW interactions (Angstrom)
"zeopp_probe_scaling": 1.0,            # float, scaling probe's diameter: use 0.0 for skipping block calc
"zeopp_block_samples": int(1000),      # int, Number of samples for BLOCK calculation (per A^3)
"raspa_verbosity": 10,                 # int, Print stats every: number of cycles / raspa_verbosity
"raspa_widom_cycles": int(1e5),        # int, Number of widom cycles
"temperatures": [300, 400]             # list, Temperatures (K)
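As an illustration of the "overwrite the defaults" semantics, a minimal sketch in plain Python (in a real run the merged settings travel as an AiiDA Dict node; the PARAMETERS_DEFAULT excerpt below is copied from the listing above, and the merge mimics the documented behaviour rather than calling the work chain's internals):

```python
# Excerpt of the documented defaults for SinglecompWidomWorkChain.
PARAMETERS_DEFAULT = {
    "ff_framework": "UFF",
    "ff_cutoff": 12.0,
    "raspa_widom_cycles": int(1e5),
    "temperatures": [300, 400],
}

# User-supplied `parameters` override matching keys and keep the rest.
user_parameters = {
    "temperatures": [273, 293, 313],
    "raspa_widom_cycles": int(1e6),
}
merged = {**PARAMETERS_DEFAULT, **user_parameters}
```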
Outputs details
output_parameters
(Dict), example:
{
    "adsorption_energy_widom_average": [-34.9698999639, -34.8262538296, -34.6772296828],
    "adsorption_energy_widom_dev": [0.0166320673, 0.0129078639, 0.015868202],
    "adsorption_energy_widom_unit": "kJ/mol",
    "henry_coefficient_average": [0.00783847, 0.0025542, 0.000964045],
    "henry_coefficient_dev": [0.000100367, 1.78042e-05, 4.69145e-06],
    "henry_coefficient_unit": "mol/kg/Pa",
    "temperatures": [273, 293, 313],
    "temperatures_unit": "K",
    "widom_rosenbluth_factor_average": [21180.0, 7407.21, 2986.58],
    "widom_rosenbluth_factor_dev": [271.198648, 51.63243, 14.533953],
    "widom_rosenbluth_factor_unit": "-"
}
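The per-temperature lists can be post-processed by pairing entries index by index. As a sketch, the snippet below back-computes the framework density implied by each temperature, assuming the common Widom relation K_H = ⟨W⟩/(ρ R T); that relation (and hence the density interpretation) is an assumption about the simulation conventions, used here only to illustrate that the lists line up one entry per temperature:

```python
R = 8.314  # gas constant, J/(mol K)

# Values copied from the example output_parameters above.
temperatures = [273, 293, 313]                 # K
kh_avg = [0.00783847, 0.0025542, 0.000964045]  # mol/kg/Pa
w_avg = [21180.0, 7407.21, 2986.58]            # dimensionless

# Assuming K_H = <W> / (rho * R * T), back out the implied framework
# density (kg/m^3) at each temperature; consistent values across the
# three temperatures confirm the element-wise pairing of the lists.
densities = [w / (kh * R * t) for t, kh, w in zip(temperatures, kh_avg, w_avg)]
```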
MulticompGcmc work chain¶
The MulticompGcmcWorkChain()
work chain
performs GCMC calculations in parallel, at all the temperature and pressure conditions specified,
for a given mixture of molecules.
Blocking spheres are computed for each molecule before the calculation.
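A plain-Python sketch of how a mixture and its T/P grid might be laid out before being wrapped in the required conditions Dict node; note that the exact schema of the conditions Dict is defined by the work chain, and the keys below ("molfraction", "temp_press") are illustrative assumptions, not a confirmed API:

```python
# Hypothetical shape of the `conditions` input for a CO2/N2 mixture;
# key names are assumptions for illustration only.
conditions = {
    "molfraction": {"co2": 0.15, "n2": 0.85},  # mixture composition
    "temp_press": [                            # (K, Pa) points, run in parallel
        [298, 1.0e5],
        [298, 10.0e5],
        [313, 1.0e5],
    ],
}

# Basic sanity check before submission: mole fractions must sum to one.
assert abs(sum(conditions["molfraction"].values()) - 1.0) < 1e-9
```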
- workchain aiida_lsmo.workchains.MulticompGcmcWorkChain
Computes multicomponent GCMC in crystalline materials (or an empty box), for a mixture of components at specific temperature/pressure conditions.
Inputs:
- conditions, Dict, required – Composition of the mixture, list of temperature and pressure conditions.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, required – Main parameters and settings for the calculations, to overwrite PARAMETERS_DEFAULT.
- raspa_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- raspa, Namespace
Namespace Ports
- block_pocket, Namespace – Zeo++ block pocket file
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input file(s)
- framework, Namespace – Input framework(s)
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of `aiida.common.datastructures.StashMode`.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parent_folder, RemoteData, optional – Remote folder used to continue the same simulation, starting from the binary restarts.
- retrieved_parent_folder, FolderData, optional – To use an old calculation as a starting point for a new one.
- settings, Dict, optional – Additional input parameters
- structure, CifData, optional – Adsorbent framework CIF.
- zeopp, Namespace
Namespace Ports
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, optional, non_db
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
Outputs:
- block_files, Namespace – Generated block pocket files.
- output_parameters, Dict, required – Main results of the work chain.
Outline:
setup(Initialize parameters)
if(should_run_zeopp)
    run_zeopp(Performs the full Zeo++ calculation for all components.)
inspect_zeopp_calc(Asserts whether all Zeo++ calculations finished ok. If so, manage the Zeo++ results.)
run_raspa_gcmc(Submits a Raspa GCMC calculation for every condition (i.e., [temp, press] combination).)
return_output_parameters(Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.)
Inputs details
parameters
(Dict
) modifies the default parameters:
    "ff_framework": "UFF",                  # str, Forcefield of the structure (used also as a definition of ff.rad for zeopp)
    "ff_shifted": False,                    # bool, Shift or truncate at cutoff
    "ff_tail_corrections": True,            # bool, Apply tail corrections
    "ff_mixing_rule": 'Lorentz-Berthelot',  # str, Mixing rule for the forcefield
    "ff_separate_interactions": False,      # bool, if true use only ff_framework for framework-molecule interactions
    "ff_cutoff": 12.0,                      # float, CutOff truncation for the VdW interactions (Angstrom)
    "zeopp_probe_scaling": 1.0,             # float, scaling probe's diameter: use 0.0 for skipping block calc
    "zeopp_block_samples": int(1000),       # int, Number of samples for BLOCK calculation (per A^3)
    "raspa_verbosity": 10,                  # int, Print stats every: number of cycles / raspa_verbosity
    "raspa_gcmc_init_cycles": int(1e5),     # int, Number of GCMC initialization cycles
    "raspa_gcmc_prod_cycles": int(1e5),     # int, Number of GCMC production cycles
conditions
(Dict
), example:
    'molfraction': {
        'co': 0.2,
        'ethene': 0.3,
        'ethane': 0.5,
    },
    'temp_press': [
        [200, 0.1],
        [300, 0.5],
        [400, 0.7],
    ]
Outputs details
output_parameters
(Dict
), example:
    "Input_block": {
        "C2H4": [1.647, 10],
        "C2H6": [1.683, 10],
        "CO": [1.584, 10]
    },
    "Number_of_blocking_spheres": {"C2H4": 0, "C2H6": 0, "CO": 0},
    "composition": {"C2H4": 0.3, "C2H6": 0.5, "CO": 0.2},
    "enthalpy_of_adsorption_average": {
        "C2H4": -18.893613196644,
        "C2H6": -23.953846166638,
        "CO": -18.67727295403
    },
    "enthalpy_of_adsorption_dev": {
        "C2H4": 8.3425044773141,
        "C2H6": 7.6573330506431,
        "CO": 12.788154764577
    },
    "enthalpy_of_adsorption_unit": "kJ/mol",
    "loading_absolute_average": {
        "C2H4": [1.674941508006, 0.4649745087, 0.225254317548],
        "C2H6": [8.558630790138, 1.716272575446, 0.634431885204],
        "CO": [0.153958226214, 0.044430897498, 0.018598980348]
    },
    "loading_absolute_dev": {
        "C2H4": [0.37769902489165, 0.14568834858221, 0.060107934935359],
        "C2H6": [0.20733617827358, 0.1483606861503, 0.1634276608098],
        "CO": [0.098979061794361, 0.033704465508572, 0.014016046900518]
    },
    "loading_absolute_unit": "mol/kg",
    "pressures": [0.1, 0.5, 1.0],
    "pressures_unit": "bar",
    "temperatures": [200, 300, 400],
    "temperatures_unit": "K"
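A common way to post-process these results is to compute the adsorption selectivity, S_ij = (q_i/q_j)/(y_i/y_j), from the loadings and the bulk composition. The work chain does not output this quantity; the snippet below is an illustrative sketch using the numbers from the example above:

```python
# Selectivity of C2H6 over C2H4 at the first pressure point (0.1 bar),
# computed from the "loading_absolute_average" and "composition" entries
# of the example output above. Post-processing sketch only: this is NOT
# a quantity returned by the work chain.
loading = {
    "C2H4": 1.674941508006,  # mol/kg at 0.1 bar
    "C2H6": 8.558630790138,
}
composition = {"C2H4": 0.3, "C2H6": 0.5}  # bulk mol fractions

def selectivity(q, y, i, j):
    """Ideal adsorption selectivity of component i over component j."""
    return (q[i] / q[j]) / (y[i] / y[j])

s = selectivity(loading, composition, "C2H6", "C2H4")
print(f"S(C2H6/C2H4) = {s:.2f}")
```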
MulticompAdsDes work chain¶
The MulticompAdsDesWorkChain()
work chain
is similar to MulticompGcmc, but it performs one simulation at a given adsorption temperature, pressure and composition,
and a second one at a given temperature and pressure for desorption. For the desorption mixture of the gas reservoir,
the work chain uses the composition previously obtained at adsorption conditions inside the framework.
Note that this is an approximation: to arrive at the appropriate gas-reservoir mixture at desorption, one should iterate, taking as the next desorption-condition trial the difference between the mixture inside the framework at adsorption and the mixture inside the framework at desorption. The approximation may induce artifacts such as a negative working capacity for certain components, which is in any case a warning sign that the desorption (partial) pressure is not low enough to evacuate the component from the framework.
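The iteration described above can be sketched as follows. This is a hypothetical helper, not part of aiida-lsmo, and it assumes the "difference between the mixtures" is taken on the absolute loadings and then normalized:

```python
# Hypothetical sketch of the iteration suggested above: refine the gas
# reservoir composition for the next desorption run. Not part of the plugin.
def next_desorption_molfraction(q_ads, q_des):
    """q_ads / q_des: absolute loadings (mol/kg) inside the framework at
    adsorption / desorption conditions. The normalized positive part of
    the difference is the trial reservoir composition for the next
    desorption run."""
    diff = {k: max(q_ads[k] - q_des[k], 0.0) for k in q_ads}
    total = sum(diff.values())
    return {k: v / total for k, v in diff.items()}

# Loadings (mol/kg) taken from the Xe/Kr example output of this work chain
trial = next_desorption_molfraction(
    {"Kr": 0.80078943165, "Xe": 1.352559181974},
    {"Kr": 0.042364344126, "Xe": 0.684029166132},
)
```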
- workchainaiida_lsmo.workchains.MulticompAdsDesWorkChain
Compute Adsorption/Desorption in crystalline materials, for a mixture of components and at specific temperature/pressure conditions.
Inputs:
- conditions, Dict, required – Composition of the mixture, adsorption and desorption temperature and pressure.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parameters, Dict, required – Main parameters and settings for the calculations, to overwrite PARAMETERS_DEFAULT.
- raspa_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- raspa, Namespace
Namespace Ports
- block_pocket, Namespace – Zeo++ block pocket file
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input file(s)
- framework, Namespace – Input framework(s)
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parent_folder, RemoteData, optional – Remote folder used to continue the same simulation, starting from the binary restarts.
- retrieved_parent_folder, FolderData, optional – To use an old calculation as a starting point for a new one.
- settings, Dict, optional – Additional input parameters
- structure, CifData, required – Adsorbent framework CIF.
- zeopp, Namespace
Namespace Ports
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, optional, non_db
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
Outputs:
- block_files, Namespace – Generated block pocket files.
- output_parameters, Dict, required – Main results of the work chain.
Outline:
setup(Initialize parameters)
if(should_run_zeopp)
    run_zeopp(Performs the full Zeo++ calculation for all components.)
inspect_zeopp_calc(Asserts whether all Zeo++ calculations finished ok. If so, manage the Zeo++ results.)
run_raspa_gcmc_ads(Submit Raspa GCMC with adsorption T, P and composition.)
run_raspa_gcmc_des(Submit Raspa GCMC with desorption T, P and composition.)
return_output_parameters(Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.)
Inputs details
parameters
(Dict
) modifies the default parameters:
    "ff_framework": "UFF",                  # str, Forcefield of the structure (used also as a definition of ff.rad for zeopp)
    "ff_shifted": False,                    # bool, Shift or truncate at cutoff
    "ff_tail_corrections": True,            # bool, Apply tail corrections
    "ff_mixing_rule": 'Lorentz-Berthelot',  # str, Mixing rule for the forcefield
    "ff_separate_interactions": False,      # bool, if true use only ff_framework for framework-molecule interactions
    "ff_cutoff": 12.0,                      # float, CutOff truncation for the VdW interactions (Angstrom)
    "zeopp_probe_scaling": 1.0,             # float, scaling probe's diameter: use 0.0 for skipping block calc
    "zeopp_block_samples": int(1000),       # int, Number of samples for BLOCK calculation (per A^3)
    "raspa_verbosity": 10,                  # int, Print stats every: number of cycles / raspa_verbosity
    "raspa_gcmc_init_cycles": int(1e5),     # int, Number of GCMC initialization cycles
    "raspa_gcmc_prod_cycles": int(1e5),     # int, Number of GCMC production cycles
conditions
(Dict
), example:
    'molfraction': {
        'xenon': 0.2,
        'krypton': 0.8
    },
    'adsorption': {
        'temperature': 298,  # K
        'pressure': 1,       # bar
    },
    'desorption': {
        'temperature': 308,
        'pressure': 0.1,
    },
Outputs details
output_parameters
(Dict
), example:
    "Input_block": {
        "Kr": [1.647, 10],
        "Xe": [1.7865, 10]
    },
    "Number_of_blocking_spheres": {"Kr": 0, "Xe": 0},
    "composition": {
        "Kr": [0.8, 0.37188099808061],
        "Xe": [0.2, 0.62811900191939]
    },
    "loading_absolute_average": {
        "Kr": [0.80078943165, 0.042364344126],
        "Xe": [1.352559181974, 0.684029166132]
    },
    "loading_absolute_dev": {
        "Kr": [0.18747637335777, 0.02392208478975],
        "Xe": [0.20357386402562, 0.20233235593516]
    },
    "loading_absolute_unit": "mol/kg",
    "pressures": [1, 0.1],
    "pressures_unit": "bar",
    "temperatures": [298, 308],
    "temperatures_unit": "K",
    "working_capacity": {
        "Kr": 0.758425087524,
        "Xe": 0.668530015842
    },
    "working_capacity_unit": "mol/kg"
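The working_capacity entries are simply the difference between the loading at adsorption and at desorption conditions; a minimal check using the numbers from the example output:

```python
# Working capacity = loading at adsorption minus loading at desorption,
# per component, in mol/kg (numbers from the example output above).
loading_absolute_average = {
    "Kr": [0.80078943165, 0.042364344126],  # [adsorption, desorption]
    "Xe": [1.352559181974, 0.684029166132],
}
working_capacity = {
    mol: ads - des for mol, (ads, des) in loading_absolute_average.items()
}
print(working_capacity)
```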
IsothermInflection work chain¶
The IsothermInflectionWorkChain()
work chain
is designed to compute those isotherms that may have hysteresis between adsorption and desorption.
The work chain computes in parallel, via GCMC, the uptake at all pressure points, starting from both the bare and the saturated
framework. The saturated framework is obtained by running a “quasi-NVT” simulation, initialized with a number
of molecules equal to 90% * pore volume * fluid density
. “Quasi-NVT” is defined as a GCMC calculation where the
swap move is only rarely attempted.
Note that this work chain may run many calculations in parallel.
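The 90% * pore volume * fluid density rule can be illustrated as follows. This is an illustrative sketch only, assuming molsatdens in mol/l (as in the molecule input below) and the accessible pore volume per unit cell in Å^3; the work chain's internal bookkeeping may differ:

```python
# Estimate of the initial number of molecules per unit cell for the
# "quasi-NVT" saturated start: 90% * pore volume * fluid density.
# Assumed units (illustrative): molsatdens in mol/l, poav_a3 in A^3.
N_A = 6.02214076e23          # Avogadro constant, 1/mol
molsatdens = 35.4            # mol/l, saturated liquid density (Ar example)
poav_a3 = 175.659            # A^3, accessible pore volume per unit cell

molecules_per_a3 = molsatdens * N_A / 1e27   # 1 l = 1e27 A^3
n_init = int(0.9 * poav_a3 * molecules_per_a3)
print(n_init)
```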
- workchainaiida_lsmo.workchains.IsothermInflectionWorkChain
A work chain to compute single-component isotherms at adsorption and desorption: GCMC calculations are run in parallel at all pressures, starting from both the empty framework and the saturated system. This work chain is useful to spot adsorption hysteresis.
Inputs:
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- molecule, (Str, Dict), required – Adsorbate molecule: settings to be read from the yaml. Advanced: input a Dict for non-standard settings.
- parameters, Dict, required – Parameters for the Isotherm workchain (see workchain.schema for default values).
- raspa_base, Namespace
Namespace Ports
- clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
- handler_overrides, Dict, optional – Mapping where keys are process handler names and the values are a boolean, where True will enable the corresponding handler and False will disable it. This overrides the default value set by the enabled keyword of the process_handler decorator with which the method is decorated.
- max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- description, str, optional, non_db – Description to set on the process node.
- label, str, optional, non_db – Label to set on the process node.
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- raspa, Namespace
Namespace Ports
- block_pocket, Namespace – Zeo++ block pocket file
- code, Code, required – The Code to use for this job.
- file, Namespace – Additional input file(s)
- framework, Namespace – Input framework(s)
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db – Set a string for the output parser. Can be None if no output plugin is available or needed
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
- parent_folder, RemoteData, optional – Remote folder used to continue the same simulation, starting from the binary restarts.
- retrieved_parent_folder, FolderData, optional – To use an old calculation as a starting point for a new one.
- settings, Dict, optional – Additional input parameters
- structure, CifData, required – Adsorbent framework CIF.
- zeopp, Namespace
Namespace Ports
- code, Code, required – The Code to use for this job.
- metadata, Namespace
Namespace Ports
- call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
- computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
- description, str, optional, non_db – Description to set on the process node.
- dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
- label, str, optional, non_db – Label to set on the process node.
- options, Namespace
Namespace Ports
- account, str, optional, non_db – Set the account to use for the queue on the remote computer
- additional_retrieve_list, (list, tuple), optional, non_db – List of relative file paths that should be retrieved in addition to what the plugin specifies.
- append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
- custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
- environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
- import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
- input_filename, str, optional, non_db – Filename to which the input for the code that is to be run is written.
- max_memory_kb, int, optional, non_db – Set the maximum memory (in KiloBytes) to be asked to the scheduler
- max_wallclock_seconds, int, optional, non_db – Set the wallclock in seconds asked to the scheduler
- mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
- output_filename, str, optional, non_db – Filename to which the content of stdout of the code that is to be run is written.
- parser_name, str, optional, non_db
- prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
- priority, str, optional, non_db – Set the priority of the job to be queued
- qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
- queue_name, str, optional, non_db – Set the name of the queue on the remote computer
- resources, dict, optional, non_db
- scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
- scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
- stash, Namespace – Optional directives to stash files after the calculation job has completed.
Namespace Ports
- source_list, (tuple, list), optional, non_db – Sequence of relative filepaths representing files in the remote directory that should be stashed.
- stash_mode, str, optional, non_db – Mode with which to perform the stashing; should be a value of aiida.common.datastructures.StashMode.
- target_base, str, optional, non_db – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
- submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
- withmpi, bool, optional, non_db – Set the calculation to use mpi
- store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
Outputs:
- block, SinglefileData, optional – Blocked pockets output file.
- output_parameters, Dict, required – Results of the single-temperature work chain: keys can vary depending on is_porous.
Outline:
setup(Initialize the parameters)
run_zeopp(Perform Zeo++ block and VOLPO calculations.)
if(should_run_widom)
    run_raspa_widom_and_sat(Run a Widom calculation in Raspa.)
run_raspa_gcmc_from_dil_sat(For each pressure point, run GCMC calculations from both diluted and saturated initial conditions.)
return_output_parameters(Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.)
Inputs details
parameters
(Dict
) modifies the default parameters:
    "ff_framework": "UFF",                  # (str) Forcefield of the structure.
    "ff_separate_interactions": False,      # (bool) Use "separate_interactions" in the FF builder.
    "ff_mixing_rule": "Lorentz-Berthelot",  # (string) Choose 'Lorentz-Berthelot' or 'Jorgensen'.
    "ff_tail_corrections": True,            # (bool) Apply tail corrections.
    "ff_shifted": False,                    # (bool) Shift or truncate the potential at cutoff.
    "ff_cutoff": 12.0,                      # (float) CutOff truncation for the VdW interactions (Angstrom).
    "temperature": 300,                     # (float) Temperature of the simulation.
    "zeopp_probe_scaling": 1.0,             # (float) Scaling of the probe's diameter: use 0.0 to skip the block calc.
    "zeopp_volpo_samples": int(1e5),        # (int) Number of samples for VOLPO calculation (per UC volume).
    "zeopp_block_samples": int(100),        # (int) Number of samples for BLOCK calculation (per A^3).
    "raspa_verbosity": 10,                  # (int) Print stats every: number of cycles / raspa_verbosity.
    "raspa_widom_cycles": int(1e5),         # (int) Number of Widom cycles.
    "raspa_gcmc_init_cycles": int(1e3),     # (int) Number of GCMC initialization cycles.
    "raspa_gcmc_prod_cycles": int(1e4),     # (int) Number of GCMC production cycles.
    "pressure_min": 0.001,                  # (float) Min pressure in P/P0. TODO: MIN selected from the Henry coefficient!
    "pressure_max": 1.0,                    # (float) Max pressure in P/P0.
    "pressure_num": 20,                     # (int) Number of pressure points considered, equispaced in a log plot.
Note that if no pressure_list
(list) parameter is provided, the pressure points are computed from min/max/num.
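Since the parameters Dict only modifies the defaults, you only need to pass the keys you want to change. A minimal plain-Python sketch of this merge behaviour (the actual work chain validates the merged dict against a voluptuous schema, see parameters_schema in the API reference):

```python
# Plain-dict stand-in for the default parameters shown above (subset only).
DEFAULTS = {
    "ff_framework": "UFF",
    "ff_cutoff": 12.0,
    "temperature": 300,
    "pressure_min": 0.001,
    "pressure_max": 1.0,
    "pressure_num": 20,
}

def merged_parameters(user_overrides):
    """Return the defaults updated with the user-supplied overrides."""
    params = dict(DEFAULTS)
    params.update(user_overrides)
    return params

# e.g. argon at 87 K with a shorter cutoff:
params = merged_parameters({"temperature": 87, "ff_cutoff": 8.0})
```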
molecule
(Dict
), example:

'name': 'Ar',
'forcefield': 'HIRSCHFELDER',
'ff_cutoff': 8,
'molsatdens': 35.4,  # NOTE: very important to define the initial amount of molecules!
'proberad': 1.7,
'singlebead': True,
'charged': False,
'pressure_zero': 1,  # Saturation pressure @ T (bar)
Outputs details
output_parameters
(Dict
), example:

"Density": 0.380639,
"Density_unit": "g/cm^3",
"Estimated_saturation_loading": 77.944428,
"Estimated_saturation_loading_unit": "mol/kg",
"Input_block": [1.7, 100],
"Input_ha": "DEF",
"Input_structure_filename": "Graphite_20A.cif",
"Input_volpo": [1.7, 1.7, 10000],
"Number_of_blocking_spheres": 0,
"POAV_A^3": 175.659,
"POAV_A^3_unit": "A^3",
"POAV_Volume_fraction": 0.8381,
"POAV_Volume_fraction_unit": null,
"POAV_cm^3/g": 2.20182,
"POAV_cm^3/g_unit": "cm^3/g",
"PONAV_A^3": 0.0,
"PONAV_A^3_unit": "A^3",
"PONAV_Volume_fraction": 0.0,
"PONAV_Volume_fraction_unit": null,
"PONAV_cm^3/g": 0.0,
"PONAV_cm^3/g_unit": "cm^3/g",
"Unitcell_volume": 209.592,
"Unitcell_volume_unit": "A^3",
"adsorption_energy_widom_average": -10.349783334,
"adsorption_energy_widom_dev": 0.0203871821,
"adsorption_energy_widom_unit": "kJ/mol",
"henry_coefficient_average": 0.387019,
"henry_coefficient_dev": 0.0244542,
"henry_coefficient_unit": "mol/kg/Pa",
"is_porous": true,
"isotherm": {
    "conversion_factor_molec_uc_to_cm3stp_cm3": 177.5796535584,
    "conversion_factor_molec_uc_to_mg_g": 831.5477157974,
    "conversion_factor_molec_uc_to_mol_kg": 20.814711284,
    "enthalpy_of_adsorption_average_from_dil": [-10.813400929552, -7.4639574135508, -9.9392383993082, null],
    "enthalpy_of_adsorption_average_from_sat": [null, null, -14.825644658402, null],
    "enthalpy_of_adsorption_dev_from_dil": [4.8201465665611, 3.0478392822994, 5.5941478469815, null],
    "enthalpy_of_adsorption_dev_from_sat": [null, null, 7.2192916198981, null],
    "enthalpy_of_adsorption_unit": "kJ/mol",
    "loading_absolute_average_from_dil": [27.31930856025, 29.798489351576, 62.091027143015, 72.617323992055],
    "loading_absolute_average_from_sat": [29.151746536085, 30.534438070785, 72.907243184345, 77.211428125155],
    "loading_absolute_dev_from_dil": [0.94383239273726, 1.8944736272227, 18.62478172839, 2.3400142831813],
    "loading_absolute_dev_from_sat": [0.32025900270015, 0.26491967913899, 1.1170544140921, 0.40142657506021],
    "loading_absolute_unit": "mol/kg",
    "pressure": [0.001, 0.01, 0.1, 1.0],
    "pressure_unit": "bar",
    "temperature": 87,
    "temperature_unit": "K"
}
CP2K Phonopy work chain¶
The Cp2kPhonopyWorkChain
computes the displacements and the forces that
are needed to compute the phonons of a structure. The final output is the SinglefileData phonopy_params.yaml
, which contains
all this information and can be loaded using the Phonopy API.
Note that, to keep the design of the work chain simple, the final outputs are created within the work chain and
therefore have broken provenance with respect to the structure and the calculations.
Inputs details
structure
(StructureData
, NOTE this is not a CifData
) is the system to investigate.
cp2kcalc
(Str
) allows one to specify the UUID of the Cp2kCalculation
to be used as a reference for the wave function and settings. If not specified, the work chain will look for the last Cp2kCalculation
ancestor of the StructureData. The reasoning behind requiring a previous calculation is that computing phonons requires a fairly accurate optimization beforehand, and exactly the same settings must be used to compute the forces for the displacements.
mode
(Str
) specifies whether the CP2K ENERGY_FORCE
calculations are performed in serial
(default) or parallel
. Note that the number of calculations is 6 times the number of atoms, which can translate into a large number of simultaneous sub-jobs (parallel) or a very long chain of calculations to wait for (serial).
max_displacements
(Int
), for debugging purposes, limits the number of displacements, to test the work chain.
Outputs details
initial_forces
(List
) provides the force values for the initial structure, to check that the forces are low enough to perform a meaningful phonon calculation.
phonopy_params
(SinglefileData
), YAML file containing the displacements and the relative forces, to be loaded by the Phonopy API. Note: because of a possible bug you may still need to specify explicitly that you are using CP2K units, i.e., phonon = phonopy.load("phonopy_params.yaml", factor=CP2KToTHz)
.
Technicalities¶
Unit cell expansion¶
With periodic boundary conditions, the lengths of the simulation box should be bigger than twice the cutoff value. Therefore, for an orthogonal cell, one should multiply the cell until its length meets this criterion in every direction.
In the case of non-orthogonal cells, however, one should not speak in terms of “lengths” but instead in terms of “perpendicular lengths”, as shown in the figure for the two-dimensional case. While in the orthogonal case one can simplify pwa = b and pwb = a, in a tilted unit cell we have to compute pwa and pwb and then evaluate whether the cell needs to be expanded, and with what multiplication coefficients.

Perpendicular widths in orthogonal and tilted 2D cells.¶
This explains why we need so much math in the function check_resize_unit_cell()
,
to compute the Raspa input “UnitCells”.
Note that if you do not multiply correctly the unit cell, Raspa will complain in the output:
WARNING: INAPPROPRIATE NUMBER OF UNIT CELLS USED
which typically results in a lower uptake than the correct one: if the cell is smaller than twice the cutoff, fewer interactions are computed because each particle sees some artificial vacuum beyond the unit cell. This results in weaker average interactions and therefore a lower uptake at a given pressure/temperature.
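The geometric criterion can be sketched in a few lines. This is a simplified stand-in for check_resize_unit_cell(), assuming the cell is given as a 3x3 matrix with rows a, b, c:

```python
import math

def perpendicular_widths(cell):
    """Perpendicular widths of a cell given as a 3x3 matrix (rows = a, b, c)."""
    a, b, c = cell

    def cross(u, v):
        return [u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0]]

    def norm(u):
        return math.sqrt(sum(x * x for x in u))

    volume = abs(sum(a[i] * cross(b, c)[i] for i in range(3)))
    # perpendicular width along each direction = cell volume / area of the opposite face
    return [volume / norm(cross(b, c)),
            volume / norm(cross(c, a)),
            volume / norm(cross(a, b))]

def multiplication_factors(cell, threshold):
    """Smallest integer multipliers so that every perpendicular width exceeds threshold."""
    return [max(1, math.ceil(threshold / pw)) for pw in perpendicular_widths(cell)]

# 10 A cubic cell with a 12 A cutoff (threshold = 2 * cutoff = 24 A):
factors = multiplication_factors([[10, 0, 0], [0, 10, 0], [0, 0, 10]], 24.0)  # [3, 3, 3]
```

The resulting factors are what ends up in the Raspa "UnitCells" input.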
Isotherm’s pressures selection¶
In the Isotherm work chain we use the function choose_pressure_points()
,
which can automatically select the pressure points for an adequate sampling of the isotherm curve.
The method, presented in our publication and summarized in the figure,
is based on a preliminary estimation of the Henry coefficient and pore volume. From these, a Langmuir isotherm is
derived and used as a proxy to determine the pressure points.
The input values the user has to specify are pressure_min
, pressure_max
, pressure_maxstep
and pressure_precision
.
The latter is the A coefficient in the figure: 0.1 is the default value, but we recommend testing values around 0.05 for a
more accurate sampling, i.e., a higher resolution of the isotherm curve in the low pressure region.

Note
This method works only for sampling Type I isotherms: it fails to correctly sample the inflection of the curve in the case of strong cooperative adsorption, e.g., a typical water isotherm.
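The idea can be illustrated with a hypothetical re-implementation (not the actual choose_pressure_points() code): space the points so that the loading of the Langmuir proxy grows by at most A·qsat between consecutive pressures, with the step capped at pressure_maxstep:

```python
def choose_pressures(kh, qsat, p_min, p_max, max_step, precision):
    """Pressure points (bar) from a single-site Langmuir proxy (illustrative sketch).

    kh: Henry coefficient, qsat: saturation loading of the proxy isotherm;
    precision is the A coefficient from the figure.
    """
    b = kh / qsat  # Langmuir affinity chosen so the low-pressure slope equals kh
    points = [p_min]
    while points[-1] < p_max:
        p = points[-1]
        dqdp = qsat * b / (1.0 + b * p) ** 2  # slope of the proxy isotherm at p
        # step so that the proxy loading increases by ~precision * qsat
        dp = min(max_step, precision * qsat / dqdp)
        points.append(min(p + dp, p_max))
    return points

pressures = choose_pressures(kh=1.0, qsat=10.0, p_min=0.001, p_max=5.0,
                             max_step=1.0, precision=0.1)
```

With a smaller precision (e.g. 0.05) the steps shrink in the low-pressure region, where the proxy isotherm is steepest, which is exactly the higher-resolution behaviour described above.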
Development¶
Running the tests¶
aiida-lsmo
uses the aiida-testing package to run full integration tests of the work chains without needing all the computational software (Zeo++, RASPA, CP2K, …) installed.
As long as you are not changing the inputs for the simulation codes (as produced by AiiDA), you can run the entire test suite as follows:
pip install -e .[testing]
pytest
Updating the test data¶
If you are changing the inputs for one or more of the simulation codes, you will need to:

Install the corresponding code on your machine.

Add an
.aiida-testing-config.yml
file to the top-level directory of the repository with a content like

mock_code:
  # code-label: absolute path
  cp2k-7.1: /path/to/cp2k-7.1-Linux-x86_64.popt
  zeopp-0.3: /path/to/zeoplusplus/network
  raspa-e968334: /path/to/RASPA2/src/.libs/simulate
  chargemol-09_26_2017: /path/to/Chargemol_09_02_2017_linux_serial

where the code labels need to correspond to the ones used in the pytest fixtures defined in the top-level
conftest.py
.

Note
The tests currently assume a serial cp2k executable (
.sopt
or .ssmp
extension).

Rerun the corresponding test with
--mock-regenerate-test-data
, e.g.

pytest examples/test_multistage_aluminum.py --mock-regenerate-test-data
While running the tests, aiida-testing
will then automatically run the simulation code for new inputs as needed and store its outputs in tests/data.
Please remember to:
Commit the new test data (as we do not install the simulation codes on CI)
Do not commit your
.aiida-testing-config.yml
, since the paths to the simulation codes are only valid on your computer.
aiida_lsmo package¶
Subpackages¶
aiida_lsmo.calcfunctions package¶
Submodules¶
aiida_lsmo.calcfunctions.ff_builder_module module¶
ff_builder calcfunction.
- aiida_lsmo.calcfunctions.ff_builder_module.append_cif_molecule(ff_data, mol_cif)[source]¶
Append the FF parameters generated from the CifData to the force field loaded from the YAML.
- aiida_lsmo.calcfunctions.ff_builder_module.check_ff_list(inp_list)[source]¶
Check a list of atom types: 1) remove duplicates, preserving the order of the elements; 2) warn if there are atom types with the same name but different parameters; 3) if a shorter atom type comes later, swap the order. # TODO!
- aiida_lsmo.calcfunctions.ff_builder_module.ff_builder(params, cif_molecule=None)[source]¶
AiiDA calcfunction to assemble force field parameters into SinglefileData for Raspa.
- aiida_lsmo.calcfunctions.ff_builder_module.get_ase_charges(cifdata)[source]¶
Given a CifData, get an ASE object with charges.
- aiida_lsmo.calcfunctions.ff_builder_module.load_yaml()[source]¶
Load the ff_data.yaml as a dict.
Includes validation against schema.
- aiida_lsmo.calcfunctions.ff_builder_module.mix_molecule_ff(ff_list, mixing_rule)[source]¶
Mix molecule-molecule interactions in case of separate_interactions: return mixed ff_list
- aiida_lsmo.calcfunctions.ff_builder_module.render_ff_def(ff_data, params, ff_mix_found)[source]¶
Render the force_field.def file.
- aiida_lsmo.calcfunctions.ff_builder_module.render_ff_mixing_def(ff_data, params)[source]¶
Render the force_field_mixing_rules.def file.
- aiida_lsmo.calcfunctions.ff_builder_module.render_molecule_def(ff_data, params, molecule_name)[source]¶
Render the molecule.def file containing the thermophysical data, geometry and intramolecular force field.
aiida_lsmo.calcfunctions.ff_data_schema module¶
Voluptuous schema for ff_data.yml
aiida_lsmo.calcfunctions.oxidation_state module¶
CalcFunction to compute the oxidation states of metals using oximachine
aiida_lsmo.calcfunctions.selectivity module¶
Calcfunctions to compute gas-selectivity related applications.
- aiida_lsmo.calcfunctions.selectivity.calc_selectivity(isot_dict_a, isot_dict_b)[source]¶
Compute the selectivity of gas A on gas B as S = kH_a/kH_b. Note that if the material is not porous to one of the gases, the result is simply {‘is_porous’: False}. To maintain compatibility with v1, instead of checking ‘is_porous’, it checks for the henry_coefficient_average key in the Dict.
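The logic can be mirrored with plain dicts (a sketch of the behaviour described above, not the actual calcfunction, which works on AiiDA Dict nodes):

```python
def selectivity_sketch(isot_a, isot_b):
    """Plain-dict mirror of calc_selectivity: S = kH_a / kH_b."""
    key = "henry_coefficient_average"
    # A missing Henry coefficient means that material was not porous to that gas.
    if key not in isot_a or key not in isot_b:
        return {"is_porous": False}
    return {"is_porous": True, "selectivity_average": isot_a[key] / isot_b[key]}
```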
aiida_lsmo.calcfunctions.working_cap module¶
Calcfunctions to compute working capacities for different gasses.
- aiida_lsmo.calcfunctions.working_cap.calc_ch4_working_cap(isot_dict)[source]¶
Compute the CH4 working capacity from the output_parameters Dict of IsothermWorkChain. This must have run calculations at 5.8 and 65.0 bar (at 298K), which are the standard reference for the evaluation.
The results can be compared with Simon2015 (10.1039/C4EE03515A).
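The working (deliverable) capacity is simply the difference between the uptake at the charge pressure and the uptake at the discharge pressure. A minimal sketch on plain numbers (the calcfunction itself reads these loadings out of the isotherm Dict):

```python
def working_capacity(loading_discharge, loading_charge):
    """Deliverable capacity between the charge (e.g. 65 bar) and
    discharge (e.g. 5.8 bar) loadings, in the same unit as the inputs."""
    return loading_charge - loading_discharge

wc = working_capacity(loading_discharge=2.5, loading_charge=10.0)  # 7.5
```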
- aiida_lsmo.calcfunctions.working_cap.calc_h2_working_cap(isotmt_dict)[source]¶
Compute the H2 working capacity from the output_parameters Dict of MultiTempIsothermWorkChain. This must have run calculations at 1, 5 and 100 bar at 77, 198 and 298 K. The US DOE Target for the Onboard Storage of Hydrogen Vehicles sets the bar to 4.5 wt% and 30 g/L (Kapelewski2018).
Case A: near-ambient-T adsorption, 100 bar/198 K to 5 bar/298 K (cf. Kapelewski2018, 10.1021/acs.chemmater.8b03276): Ni2(m-dobdc), experimental: 23.0 g/L.
Case B: low-T adsorption, 100-5 bar at 77 K (cf. Ahmed2019, 10.1038/s41467-019-09365-w): NU-100, best experimental: 35.5 g/L.
Case C: low-T adsorption with low discharge, 100-1 bar at 77 K (cf. Thornton2017, 10.1021/acs.chemmater.6b04933): hypMOF-5059389, best simulated: 40.0 g/L.
- aiida_lsmo.calcfunctions.working_cap.calc_o2_working_cap(isot_dict)[source]¶
Compute the O2 working capacity from the output_parameters Dict of IsothermWorkChain. This must have run calculations at 5 and 140.0 bar (at 298 K), to be consistent with the screening of Moghadam2018 (10.1038/s41467-018-03892-8), in which the MOF ANUGIA (UMCM-152) was found to have a volumetric working capacity of 249 vSTP/v (simulations are nearly identical to experiments). Consider that, at the same conditions, an empty tank can only store 136 vSTP/v, and a comparable working capacity can only be obtained by compressing up to 300 bar.
aiida_lsmo.calcfunctions.wrappers module¶
Calculation functions that wrap some advanced script for process evaluation.
- aiida_lsmo.calcfunctions.wrappers.calc_co2_parasitic_energy(isot_co2, isot_n2, pe_parameters)[source]¶
Submit a calc_pe calculation using AiiDA, for the CO2 parasitic energy.
- Parameters
isot_co2 – (Dict) CO2 IsothermWorkChainNode.outputs[‘output_parameters’]
isot_n2 – (Dict) N2 IsothermWorkChainNode.outputs[‘output_parameters’]
pe_parameters – (Dict) See PE_PARAMETERS_DEFAULT
Module contents¶
AiiDA calcfunctions
aiida_lsmo.parsers package¶
Submodules¶
aiida_lsmo.parsers.parser_functions module¶
Functions used for specific parsing of output files.
Module contents¶
Parsers for the specific usage of aiida-lsmo workchains.
aiida_lsmo.utils package¶
Submodules¶
aiida_lsmo.utils.cp2k_utils module¶
Utilities related to CP2K.
- aiida_lsmo.utils.cp2k_utils.get_bsse_section(natoms_a, natoms_b, mult_a=1, mult_b=1, charge_a=0, charge_b=0)[source]¶
Get the &FORCE_EVAL/&BSSE section.
- aiida_lsmo.utils.cp2k_utils.get_kinds_info(atoms)[source]¶
Get kinds information from ASE atoms
- Parameters
atoms – ASE atoms instance
- Returns
list of kind_info dictionaries (keys: ‘kind’, ‘element’, ‘magnetization’)
- aiida_lsmo.utils.cp2k_utils.get_kinds_section(atoms, protocol, with_ghost_atoms=False)[source]¶
Write the &KIND sections given the structure and the settings_dict
- Parameters
atoms – ASE atoms instance
protocol – protocol dict
with_ghost_atoms – if true, add ghost atoms for BSSE counterpoise correction (optional)
- aiida_lsmo.utils.cp2k_utils.get_multiplicity_section(atoms, protocol)[source]¶
Compute the total multiplicity of the structure by summing the atomic magnetizations.
- multiplicity = 1 + sum_i ( natoms_i * magnetization_i ), for each atom_type i
= 1 + sum_j magnetization_j, for each atomic site j
- Parameters
atoms – ASE atoms instance
protocol – protocol dict
- Returns
dict (for cp2k input)
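The formula above in runnable form (a sketch, not the actual calcfunction, which also builds the cp2k input dict):

```python
def total_multiplicity(kinds):
    """multiplicity = 1 + sum_i natoms_i * magnetization_i over atom types.

    kinds: list of (natoms, magnetization) pairs, one per atom type.
    """
    return 1 + round(sum(natoms * mag for natoms, mag in kinds))

# e.g. two Fe sites with magnetization 4 and four O sites with magnetization 0:
mult = total_multiplicity([(2, 4.0), (4, 0.0)])  # 9
```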
- aiida_lsmo.utils.cp2k_utils.ot_has_small_bandgap(cp2k_input, cp2k_output, bandgap_thr_ev)[source]¶
Returns True if the calculation used OT and had a smaller bandgap than the guess needed for OT. (NOTE: negative bandgaps have also been observed with OT in CP2K!)
- Parameters
cp2k_input – dict
cp2k_output – dict
bandgap_thr_ev – float [eV]
aiida_lsmo.utils.isotherm_molecules_schema module¶
Voluptuous schema for isotherm_molecules.yaml
aiida_lsmo.utils.multiply_unitcell module¶
Utilities for unit cell multiplication, typically for cut-off issues.
- aiida_lsmo.utils.multiply_unitcell.check_resize_unit_cell(cif, threshold)[source]¶
Returns the multiplication factors for the cell vectors to respect, in every direction: min(perpendicular_width) > threshold.
- aiida_lsmo.utils.multiply_unitcell.check_resize_unit_cell_legacy(struct, threshold)[source]¶
Returns the multiplication factors for the cell vectors to respect, in every direction: min(perpendicular_width) > threshold. TODO: this has been used for CP2K, make it uniform to the other one used for Raspa (from CifFile).
aiida_lsmo.utils.other_utilities module¶
Other utilities
- aiida_lsmo.utils.other_utilities.aiida_cif_merge(aiida_cif_a, aiida_cif_b)[source]¶
Merge the coordinates of two CifData into a single one. Note: the two unit cells must be the same.
- aiida_lsmo.utils.other_utilities.aiida_dict_merge(to_dict, from_dict)[source]¶
Merge two aiida Dict objects.
- aiida_lsmo.utils.other_utilities.aiida_structure_merge(aiida_structure_a, aiida_structure_b)[source]¶
Merge the coordinates of two StructureData into a single one. Note: the two unit cells must be the same.
- aiida_lsmo.utils.other_utilities.ase_cells_are_similar(ase_a, ase_b, thr=2)[source]¶
Return True if the cells of two ASE objects are similar up to “thr” decimals. This avoids raising an error when two cells differ at the nth decimal, typically because of some truncation.
- aiida_lsmo.utils.other_utilities.dict_merge(dct, merge_dct)[source]¶
Taken from https://gist.github.com/angstwad/bf22d1822c38a92ec0a9 Recursive dict merge. Inspired by :meth:
dict.update()
, but instead of updating only top-level keys, dict_merge recurses down into dicts nested to an arbitrary depth, updating keys. The merge_dct
is merged into dct
.
- Parameters
dct – dict onto which the merge is executed
merge_dct – dict merged into dct
- Returns
None
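A self-contained sketch of the same recursive-merge behaviour, e.g. for layering extra CP2K input sections over a protocol dict:

```python
def dict_merge_sketch(dct, merge_dct):
    """Recursively merge merge_dct into dct in place (behaviour of dict_merge above)."""
    for key, value in merge_dct.items():
        if isinstance(dct.get(key), dict) and isinstance(value, dict):
            dict_merge_sketch(dct[key], value)  # recurse into nested dicts
        else:
            dct[key] = value  # leaf values from merge_dct win

params = {"FORCE_EVAL": {"DFT": {"CUTOFF": 400}}}
dict_merge_sketch(params, {"FORCE_EVAL": {"DFT": {"UKS": True}}})
# params now holds both CUTOFF and UKS under FORCE_EVAL/DFT
```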
- aiida_lsmo.utils.other_utilities.get_cif_from_structure(structuredata)[source]¶
Convert StructureData to CifData, maintaining the provenance.
Module contents¶
aiida-lsmo utils
aiida_lsmo.workchains package¶
Subpackages¶
Submodules¶
aiida_lsmo.workchains.binding_site module¶
BindingSite workchain.
aiida_lsmo.workchains.cp2k_binding_energy module¶
Binding energy workchain
- class aiida_lsmo.workchains.cp2k_binding_energy.Cp2kBindingEnergyWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
Submits a Cp2kBase work chain for a structure + molecule system, first optimizing the geometry of the molecule and later computing the BSSE-corrected interaction energy. This work chain is inspired by Cp2kMultistage and shares some logic and data with it.
- __abstractmethods__ = frozenset({})¶
- __module__ = 'aiida_lsmo.workchains.cp2k_binding_energy'¶
- _abc_impl = <_abc_data object>¶
- _spec = <aiida.engine.processes.workchains.workchain.WorkChainSpec object>¶
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- inspect_and_update_settings_geo_opt()[source]¶
Inspect the settings_{idx} calculation and check whether the settings need to be updated and the calculation resubmitted.
aiida_lsmo.workchains.cp2k_multistage module¶
Multistage work chain.
- class aiida_lsmo.workchains.cp2k_multistage.Cp2kMultistageWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
Submits Cp2kBase workchains for ENERGY, GEO_OPT, CELL_OPT and MD jobs iteratively. The protocol_yaml file contains a series of settings_x and stage_x: the work chain starts by running the settings_0/stage_0 calculation and, in case of failure, changes the settings until the SCF of stage_0 converges. Then it uses the same settings to run the next stages (i.e., stage_1, etc.).
- __abstractmethods__ = frozenset({})¶
- __module__ = 'aiida_lsmo.workchains.cp2k_multistage'¶
- _abc_impl = <_abc_data object>¶
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- aiida_lsmo.workchains.cp2k_multistage.apply_initial_magnetization(structure, protocol, oxidation_states=None, with_ghost_atoms=None)[source]¶
Prepare structure with correct initial magnetization.
Returns the modified StructureData (possibly with specific atomic kinds for different initial magnetizations) as well as the corresponding cp2k parameters dict.
Note: AiiDA does not allow one calcfunction to call another, which forces this split between workfunction and calcfunction.
- Parameters
structure – AiiDA StructureData
protocol – AiiDA Dict with appropriate cp2k parameters (kinds and multiplicity)
oxidation_states – Oxidation state computed with oximachine (optional)
with_ghost_atoms – if true, add ghost atoms for BSSE counterpoise correction (optional)
- Returns
{‘structure’: StructureData, ‘cp2k_param’: Dict }
- aiida_lsmo.workchains.cp2k_multistage.extract_results(resize, **kwargs)[source]¶
Extracts results from the output_parameters of the single calculations (i.e., SCF-converged stages) into a single Dict output.
- resize (Dict) contains the unit cell resizing values
- kwargs contains all the output_parameters for the stages and the extra initial change of settings, e.g.:
‘out_0’: cp2k’s output_parameters with Dict.label = ‘settings_0_stage_0_discard’
‘out_1’: cp2k’s output_parameters with Dict.label = ‘settings_1_stage_0_valid’
‘out_2’: cp2k’s output_parameters with Dict.label = ‘settings_1_stage_0_valid’
‘out_3’: cp2k’s output_parameters with Dict.label = ‘settings_1_stage_0_valid’
This will be read as: output_dict = {‘nstages_valid’: 3, ‘nsettings_discarded’: 1}.
- aiida_lsmo.workchains.cp2k_multistage.get_initial_magnetization(structure, protocol, with_ghost_atoms=None)[source]¶
Prepare structure with correct initial magnetization.
Returns the modified StructureData (possibly with specific atomic kinds for different initial magnetizations) as well as the corresponding cp2k parameters dict.
- Parameters
structure – AiiDA StructureData
protocol – AiiDA Dict with appropriate cp2k parameters (kinds and multiplicity)
with_ghost_atoms – if true, add ghost atoms for BSSE counterpoise correction (optional)
- Returns
{‘structure’: StructureData, ‘cp2k_param’: Dict }
aiida_lsmo.workchains.cp2k_multistage_ddec module¶
Cp2kMultistageDdecWorkChain workchain
- class aiida_lsmo.workchains.cp2k_multistage_ddec.Cp2kMultistageDdecWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
A workchain that combines: Cp2kMultistageWorkChain + Cp2kDdecWorkChain
- __abstractmethods__ = frozenset({})¶
- __module__ = 'aiida_lsmo.workchains.cp2k_multistage_ddec'¶
- _abc_impl = <_abc_data object>¶
aiida_lsmo.workchains.cp2k_phonopy module¶
Cp2kPhonopyWorkChain workchain
- class aiida_lsmo.workchains.cp2k_phonopy.Cp2kPhonopyWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
A workchain to compute phonon frequencies using CP2K and Phonopy
- __abstractmethods__ = frozenset({})¶
- __module__ = 'aiida_lsmo.workchains.cp2k_phonopy'¶
- _abc_impl = <_abc_data object>¶
aiida_lsmo.workchains.isotherm module¶
Isotherm workchain
- class aiida_lsmo.workchains.isotherm.IsothermWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
Workchain that computes volpo and blocking spheres: if the accessible volpo > 0, it also runs a Raspa Widom calculation for the Henry coefficient.
- __abstractmethods__ = frozenset({})¶
- __module__ = 'aiida_lsmo.workchains.isotherm'¶
- _abc_impl = <_abc_data object>¶
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- init_raspa_gcmc()[source]¶
Choose the pressures we want to sample, report some details, and update settings for GCMC
- parameters_info = {Required('ff_cutoff', description='CutOff truncation for the VdW interactions (Angstrom).'): Any(<class 'int'>, <class 'float'>, msg=None), Required('ff_framework', description='Forcefield of the structure (used also as a definition of ff.rad for zeopp)'): <class 'str'>, Required('ff_mixing_rule', description='Mixing rule'): Any('Lorentz-Berthelot', 'Jorgensen', msg=None), Required('ff_separate_interactions', description='if true use only ff_framework for framework-molecule interactions in the FFBuilder'): <class 'bool'>, Required('ff_shifted', description='Shift or truncate the potential at cutoff.'): <class 'bool'>, Required('ff_tail_corrections', description='Apply tail corrections.'): <class 'bool'>, Optional('pressure_list', description='Pressure list for the isotherm (bar): if given it will skip to guess it.'): <class 'list'>, Required('pressure_max', description='Upper pressure to sample (bar).'): Any(<class 'int'>, <class 'float'>, msg=None), Required('pressure_maxstep', description='(float) Max distance between pressure points (bar).'): Any(<class 'int'>, <class 'float'>, msg=None), Required('pressure_min', description='Lower pressure to sample (bar).'): Any(<class 'int'>, <class 'float'>, msg=None), Required('pressure_precision', description='Precision in the sampling of the isotherm: 0.1 ok, 0.05 for high resolution.'): Any(<class 'int'>, <class 'float'>, msg=None), Required('raspa_gcmc_init_cycles', description='Number of GCMC initialization cycles.'): <class 'int'>, Required('raspa_gcmc_prod_cycles', description='Number of GCMC production cycles.'): <class 'int'>, Required('raspa_minKh', description='If Henry coefficient < raspa_minKh do not run the isotherm (mol/kg/Pa).'): Any(<class 'int'>, <class 'float'>, msg=None), Required('raspa_verbosity', description='Print stats every: number of cycles / raspa_verbosity.'): <class 'int'>, Required('raspa_widom_cycles', description='Number of Widom cycles.'): <class 'int'>, 
Required('temperature', description='Temperature of the simulation.'): Any(<class 'int'>, <class 'float'>, msg=None), Optional('temperature_list', description='To be used by IsothermMultiTempWorkChain.'): <class 'list'>, Required('zeopp_block_samples', description='Number of samples for BLOCK calculation (per A^3).'): <class 'int'>, Required('zeopp_probe_scaling', description="scaling probe's diameter: molecular_rad * scaling"): Any(<class 'int'>, <class 'float'>, msg=None), Required('zeopp_volpo_samples', description='Number of samples for VOLPO calculation (per UC volume).'): <class 'int'>}¶
- parameters_schema = <Schema({Required('ff_framework', description='Forcefield of the structure (used also as a definition of ff.rad for zeopp)'): <class 'str'>, Required('ff_separate_interactions', description='if true use only ff_framework for framework-molecule interactions in the FFBuilder'): <class 'bool'>, Required('ff_mixing_rule', description='Mixing rule'): Any('Lorentz-Berthelot', 'Jorgensen', msg=None), Required('ff_tail_corrections', description='Apply tail corrections.'): <class 'bool'>, Required('ff_shifted', description='Shift or truncate the potential at cutoff.'): <class 'bool'>, Required('ff_cutoff', description='CutOff truncation for the VdW interactions (Angstrom).'): Any(<class 'int'>, <class 'float'>, msg=None), Required('zeopp_probe_scaling', description="scaling probe's diameter: molecular_rad * scaling"): Any(<class 'int'>, <class 'float'>, msg=None), Required('zeopp_volpo_samples', description='Number of samples for VOLPO calculation (per UC volume).'): <class 'int'>, Required('zeopp_block_samples', description='Number of samples for BLOCK calculation (per A^3).'): <class 'int'>, Required('raspa_verbosity', description='Print stats every: number of cycles / raspa_verbosity.'): <class 'int'>, Required('raspa_widom_cycles', description='Number of Widom cycles.'): <class 'int'>, Required('raspa_gcmc_init_cycles', description='Number of GCMC initialization cycles.'): <class 'int'>, Required('raspa_gcmc_prod_cycles', description='Number of GCMC production cycles.'): <class 'int'>, Required('raspa_minKh', description='If Henry coefficient < raspa_minKh do not run the isotherm (mol/kg/Pa).'): Any(<class 'int'>, <class 'float'>, msg=None), Required('temperature', description='Temperature of the simulation.'): Any(<class 'int'>, <class 'float'>, msg=None), Optional('temperature_list', description='To be used by IsothermMultiTempWorkChain.'): <class 'list'>, Required('pressure_min', description='Lower pressure to sample (bar).'): Any(<class 'int'>, 
<class 'float'>, msg=None), Required('pressure_max', description='Upper pressure to sample (bar).'): Any(<class 'int'>, <class 'float'>, msg=None), Required('pressure_maxstep', description='(float) Max distance between pressure points (bar).'): Any(<class 'int'>, <class 'float'>, msg=None), Required('pressure_precision', description='Precision in the sampling of the isotherm: 0.1 ok, 0.05 for high resolution.'): Any(<class 'int'>, <class 'float'>, msg=None), Optional('pressure_list', description='Pressure list for the isotherm (bar): if given it will skip to guess it.'): <class 'list'>}, extra=PREVENT_EXTRA, required=False) object>¶
- return_output_parameters()[source]¶
Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.
- should_run_another_gcmc()[source]¶
We run another raspa calculation only if the current iteration is smaller than the total number of pressures we want to compute.
- aiida_lsmo.workchains.isotherm.choose_pressure_points(inp_param, geom, raspa_widom_out)[source]¶
If ‘pressure_list’ is not provided, model the isotherm as a single-site Langmuir and return a list of the most important pressure points to evaluate for an isotherm.
- aiida_lsmo.workchains.isotherm.get_atomic_radii(isotparam)[source]¶
Get {ff_framework}.rad as SinglefileData from workchain/isotherm_data. If not existing, use DEFAULT.rad.
- aiida_lsmo.workchains.isotherm.get_ff_parameters(molecule_dict, isotparam)[source]¶
Get the parameters for ff_builder.
- aiida_lsmo.workchains.isotherm.get_geometric_dict(zeopp_out, molecule)[source]¶
Return the geometric Dict from Zeopp results, including Qsat and is_porous
- aiida_lsmo.workchains.isotherm.get_molecule_dict(molecule_name)[source]¶
Get a Dict from the isotherm_molecules.yaml
aiida_lsmo.workchains.isotherm_accurate module¶
IsothermAccurate work chain.
- class aiida_lsmo.workchains.isotherm_accurate.IsothermAccurateWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
Workchain that computes volpo and blocking spheres: if the accessible volpo > 0, it also runs a Raspa Widom calculation for the Henry coefficient.
- __abstractmethods__ = frozenset({})¶
- __module__ = 'aiida_lsmo.workchains.isotherm_accurate'¶
- _abc_impl = <_abc_data object>¶
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- parameters_info:
  - ff_cutoff (Required, int | float): CutOff truncation for the VdW interactions (Angstrom).
  - ff_framework (Required, str): Forcefield of the structure (used also as a definition of ff.rad for zeopp).
  - ff_mixing_rule (Required, 'Lorentz-Berthelot' | 'Jorgensen'): Mixing rule.
  - ff_separate_interactions (Required, bool): If true, use only ff_framework for framework-molecule interactions in the FFBuilder.
  - ff_shifted (Required, bool): Shift or truncate the potential at cutoff.
  - ff_tail_corrections (Required, bool): Apply tail corrections.
  - loading_highp_sigma (Required, int | float): Sigma fraction to consider the system saturated.
  - loading_lowp_epsilon (Required, int | float): Epsilon for convergence of low-pressure loading.
  - n_dpmax (Required, int): Number of pressure points to compute the max Delta pressure.
  - p_sat_coeff (Required, int | float): Coefficient to push P_sat a little bit more to reach saturation.
  - ph0_reiteration_coeff (Required, int | float): Coefficient for P_H0 to iterate GCMC at lower P.
  - raspa_gcmc_init_cycles (Required, int): Number of GCMC initialization cycles.
  - raspa_gcmc_prod_cycles (Required, int): Number of GCMC production cycles.
  - raspa_minKh (Required, int | float): If Henry coefficient < raspa_minKh, do not run the isotherm (mol/kg/Pa).
  - raspa_verbosity (Required, int): Print stats every: number of cycles / raspa_verbosity.
  - raspa_widom_cycles (Required, int): Number of Widom cycles.
  - temperature (Required, int | float): Temperature of the simulation.
  - temperature_list (Optional, list): To be used by IsothermMultiTempWorkChain.
  - zeopp_block_samples (Required, int): Number of samples for BLOCK calculation (per A^3).
  - zeopp_probe_scaling (Required, int | float): Scaling of the probe's diameter: molecular_rad * scaling.
  - zeopp_volpo_samples (Required, int): Number of samples for VOLPO calculation (per UC volume).
- return_output_parameters()[source]¶
Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.
- run_zeopp()[source]¶
Step 1: run zeo++ to calculate the density, void fraction and POAV and check if blocking spheres are needed or not.
- should_run_another_gcmc_highp()[source]¶
Step 4: Determine the pressure at which saturation starts, Psat. Step 5: After calculating PH and Psat, the pressure values in between are generated with the following sampling scheme: more pressure points are placed where the isotherm is steep, fewer where it approaches saturation. The maximum pressure step is chosen so that, in the smoothest case, 20 points are generated.
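The step-size logic described above can be sketched as follows. This is an illustrative re-implementation, not the plugin's actual code; the function name and arguments (next_pressure, ref_slope) are hypothetical.

```python
def next_pressure(p, slope, ref_slope, p_max, n_points=20):
    """Pick the next pressure point for isotherm sampling.

    slope: local dq/dp of the isotherm at p; ref_slope: slope below which
    the full step is taken. The maximum step p_max / n_points guarantees
    at least n_points points in the smoothest case; steeper regions get
    proportionally smaller steps, i.e. more points.
    """
    dp_max = p_max / n_points
    dp = dp_max * min(1.0, ref_slope / slope) if slope > 0 else dp_max
    return min(p + dp, p_max)
```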
- should_run_another_gcmc_lowp()[source]¶
Step 3.1: Calculate the initial guess P_H0. Step 3.2: To check whether P_H0 belongs to the Henry regime, the error between the obtained uptake and the Henry uptake must be smaller than a precision value, epsilon. If the error is larger than epsilon, P_H0 is multiplied by a factor of 0.8 and the step is repeated until the error converges below epsilon.
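The convergence loop of Steps 3.1-3.2 can be sketched like this; the Langmuir "uptake" below is a stand-in for the GCMC result, and all names are illustrative, not the plugin's actual code.

```python
def find_henry_pressure(uptake, k_h, p_h0, epsilon=0.05, factor=0.8):
    """Lower the pressure by `factor` until the simulated uptake agrees
    with the Henry-law prediction k_h * p within relative error epsilon."""
    p = p_h0
    while abs(uptake(p) - k_h * p) / (k_h * p) > epsilon:
        p *= factor
    return p

def langmuir(p):
    """Toy GCMC stand-in: Langmuir isotherm, q_sat = 10, b = 1 (so k_h = 10)."""
    return 10.0 * p / (1.0 + p)

p_henry = find_henry_pressure(langmuir, k_h=10.0, p_h0=1.0)
```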
- should_run_gcmc()[source]¶
Output the Widom results and decide whether to compute the isotherm: it is run only if kH > kHmin, as defined by the user.
- should_run_widom()[source]¶
Submit the Widom calculation only if there is some accessible volume; also check the number of blocking spheres and estimate the saturation loading. Stop if called by IsothermMultiTemp for geometric results only.
Step 2: Calculate the theoretical q_sat based on the liquid density of the molecule (in the function get_geometric_dict).
- aiida_lsmo.workchains.isotherm_accurate.get_atomic_radii(isotparam)[source]¶
Get {ff_framework}.rad as SinglefileData from workchain/isotherm_data. If it does not exist, use DEFAULT.rad.
- aiida_lsmo.workchains.isotherm_accurate.get_ff_parameters(molecule_dict, isotparam)[source]¶
Get the parameters for ff_builder.
- aiida_lsmo.workchains.isotherm_accurate.get_geometric_dict(zeopp_out, molecule)[source]¶
Return the geometric Dict from Zeopp results, including Qsat and is_porous
- aiida_lsmo.workchains.isotherm_accurate.get_molecule_dict(molecule_name)[source]¶
Get a Dict from the isotherm_molecules.yaml
aiida_lsmo.workchains.isotherm_calc_pe module¶
IsothermCalcPE work chain.
- class aiida_lsmo.workchains.isotherm_calc_pe.IsothermCalcPEWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
Compute the CO2 parasitic energy (PE) after running IsothermWorkChain for CO2 and N2 at 300 K.
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- parameters_info:
  - ff_cutoff (Required, int | float): CutOff truncation for the VdW interactions (Angstrom).
  - ff_framework (Required, str): Forcefield of the structure (used also as a definition of ff.rad for zeopp).
  - ff_mixing_rule (Required, 'Lorentz-Berthelot' | 'Jorgensen'): Mixing rule.
  - ff_separate_interactions (Required, bool): If true, use only ff_framework for framework-molecule interactions in the FFBuilder.
  - ff_shifted (Required, bool): Shift or truncate the potential at cutoff.
  - ff_tail_corrections (Required, bool): Apply tail corrections.
  - pressure_max (Required, int | float): Upper pressure to sample (bar).
  - pressure_maxstep (Required, int | float): Max distance between pressure points (bar).
  - pressure_min (Required, int | float): Lower pressure to sample (bar).
  - pressure_precision (Required, int | float): Precision in the sampling of the isotherm: 0.1 ok, 0.05 for high resolution.
  - raspa_gcmc_init_cycles (Required, int): Number of GCMC initialization cycles.
  - raspa_gcmc_prod_cycles (Required, int): Number of GCMC production cycles.
  - raspa_minKh (Required, int | float): If Henry coefficient < raspa_minKh, do not run the isotherm (mol/kg/Pa).
  - raspa_verbosity (Required, int): Print stats every: number of cycles / raspa_verbosity.
  - raspa_widom_cycles (Required, int): Number of Widom cycles.
  - temperature (Required, int | float): Temperature of the simulation.
  - zeopp_block_samples (Required, int): Number of samples for BLOCK calculation (per A^3).
  - zeopp_probe_scaling (Required, int | float): Scaling of the probe's diameter: molecular_rad * scaling.
  - zeopp_volpo_samples (int).
aiida_lsmo.workchains.isotherm_inflection module¶
IsothermInflection work chain.
- class aiida_lsmo.workchains.isotherm_inflection.IsothermInflectionWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
A work chain to compute single-component isotherms at adsorption and desorption: GCMC calculations are run in parallel at all pressures, starting from the empty framework and from the saturated system. This work chain is useful to spot adsorption hysteresis.
- _get_mid_dens_molecules(raspa_calc_dil, raspa_calc_sat)[source]¶
Given a calculation at dilute conditions and one at saturation, compute the total number of molecules at mid density.
- _get_saturation_molecules()[source]¶
Compute the estimated number of molecules at saturation as: pore_vol * liq_dens * number_uc.
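The estimate in _get_saturation_molecules can be sketched as a back-of-the-envelope calculation; the units and names here are illustrative, not the plugin's exact implementation.

```python
AVOGADRO = 6.022e23  # molecules per mole

def saturation_molecules(pore_vol_ang3, liq_dens_g_cm3, molar_mass_g_mol, number_uc):
    """Molecules at saturation: the pore volume of one unit cell filled
    with adsorbate at its liquid density, times the number of unit cells."""
    pore_vol_cm3 = pore_vol_ang3 * 1e-24          # 1 Angstrom^3 = 1e-24 cm^3
    moles = pore_vol_cm3 * liq_dens_g_cm3 / molar_mass_g_mol
    return moles * AVOGADRO * number_uc

# e.g. a 1000 A^3 pore volume filled with a water-like liquid (1 g/cm^3, 18 g/mol)
n_sat = saturation_molecules(1000.0, 1.0, 18.0, number_uc=1)
```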
- _update_param_for_gcmc(number_of_molecules=0, swap_prob=0.5)[source]¶
Update the Raspa input parameters, from Widom to GCMC.
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- parameters_info:
  - box_length (Required, int | float): Length of the simulation box for the simulation without framework.
  - ff_cutoff (Required, int | float): CutOff truncation for the VdW interactions (Angstrom).
  - ff_framework (Required, str): Forcefield of the structure (used also as a definition of ff.rad for zeopp).
  - ff_mixing_rule (Required, 'Lorentz-Berthelot' | 'Jorgensen'): Mixing rule.
  - ff_separate_interactions (Required, bool): If true, use only ff_framework for framework-molecule interactions in the FFBuilder.
  - ff_shifted (Required, bool): Shift or truncate the potential at cutoff.
  - ff_tail_corrections (Required, bool): Apply tail corrections.
  - pressure_list (Optional, list): Pressure list for the isotherm (bar): if given, the guess is skipped.
  - pressure_max (Required, int | float): Upper pressure to sample (bar).
  - pressure_min (Required, int | float): Lower pressure to sample (bar).
  - pressure_num (Required, int): Number of pressure points considered, equispaced in a log plot.
  - raspa_gcmc_init_cycles (Required, int): Number of GCMC initialization cycles.
  - raspa_gcmc_prod_cycles (Required, int): Number of GCMC production cycles.
  - raspa_verbosity (Required, int): Print stats every: number of cycles / raspa_verbosity.
  - raspa_widom_cycles (Required, int): Number of Widom cycles.
  - temperature (Required, int | float): Temperature of the simulation.
  - zeopp_block_samples (Required, int): Number of samples for BLOCK calculation (per A^3).
  - zeopp_probe_scaling (Required, int | float): Scaling of the probe's diameter: molecular_rad * scaling.
  - zeopp_volpo_samples (Required, int): Number of samples for VOLPO calculation (per UC volume).
- return_output_parameters()[source]¶
Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.
- aiida_lsmo.workchains.isotherm_inflection.get_output_parameters(inp_params, pressures, geom_out, widom_out, **gcmc_dict)[source]¶
Merge results from all the steps of the work chain: geom_out (Dict) contains the output of Zeo++, widom_out (Dict) contains the output of Raspa's Widom insertion calculation, and gcmc_dict (dict of Dicts) has keys like inp/out_RaspaGCMC/RaspaGCMCNew/RaspaGCMCSat_1..n.
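Given the pressure_min / pressure_max / pressure_num parameters above ("equispaced in a log plot"), the pressure list can be generated as in this sketch. It is an illustration of the sampling, not the plugin's exact helper.

```python
import math

def pressure_points(p_min, p_max, num):
    """`num` pressures equispaced on a logarithmic axis between p_min and p_max."""
    if num == 1:
        return [p_min]
    lo, hi = math.log10(p_min), math.log10(p_max)
    return [10 ** (lo + i * (hi - lo) / (num - 1)) for i in range(num)]

points = pressure_points(0.01, 10.0, 4)   # one point per decade
```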
aiida_lsmo.workchains.isotherm_multi_temp module¶
IsothermMultiTemp workchain.
- class aiida_lsmo.workchains.isotherm_multi_temp.IsothermMultiTempWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
Run IsothermWorkChain for multiple temperatures: first compute the geometric properties, then submit Widom+GCMC at the different temperatures in parallel.
aiida_lsmo.workchains.multicomp_ads_des module¶
MulticompAdsDes work chain.
- class aiida_lsmo.workchains.multicomp_ads_des.MulticompAdsDesWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
Compute adsorption/desorption in crystalline materials, for a mixture of components and at specific temperature/pressure conditions.
- _get_gcmc_inputs_adsorption()[source]¶
Generate Raspa input parameters from scratch, for a multicomponent GCMC calculation.
- _update_gcmc_inputs_desorption()[source]¶
Update Raspa input parameters for desorption: Temperature, Pressure, Composition and Restart.
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- inspect_zeopp_calc()[source]¶
Assert that all Zeo++ calculations have finished ok; if so, process the Zeo++ results.
- parameters_info:
  - ff_cutoff (Required, int | float): CutOff truncation for the VdW interactions (Angstrom).
  - ff_framework (Required, str): Forcefield of the structure (used also as a definition of ff.rad for zeopp).
  - ff_mixing_rule (Required, 'Lorentz-Berthelot' | 'Jorgensen'): Mixing rule.
  - ff_separate_interactions (Required, bool): If true, use only ff_framework for framework-molecule interactions in the FFBuilder.
  - ff_shifted (Required, bool): Shift or truncate the potential at cutoff.
  - ff_tail_corrections (Required, bool): Apply tail corrections.
  - raspa_gcmc_init_cycles (Required, int): Number of GCMC initialization cycles.
  - raspa_gcmc_prod_cycles (Required, int): Number of GCMC production cycles.
  - raspa_verbosity (Required, int): Print stats every: number of cycles / raspa_verbosity.
  - raspa_widom_cycles (Required, int): Number of Widom cycles.
  - zeopp_block_samples (Required, int): Number of samples for BLOCK calculation (per A^3).
  - zeopp_probe_scaling (Required, int | float): Scaling of the probe's diameter: molecular_rad * scaling.
- aiida_lsmo.workchains.multicomp_ads_des.get_atomic_radii(isotparam)[source]¶
Get {ff_framework}.rad as SinglefileData from workchain/isotherm_data. If it does not exist, use DEFAULT.rad.
- aiida_lsmo.workchains.multicomp_ads_des.get_components_dict(conditions, parameters)[source]¶
Construct the components dict, like: {'xenon': {'name': 'Xe', 'molfraction': xxx, 'proberad': xxx, 'zeopp': {...}}, ...}
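The shape of that dict can be illustrated with a small builder. The input layout (conditions['molfraction'], the probe-radius lookup) is assumed for the example and is not necessarily the plugin's exact input schema.

```python
def build_components(conditions, probe_radii):
    """Build {'molecule': {'name', 'molfraction', 'proberad'}, ...} from
    per-molecule conditions and a probe-radius lookup (illustrative)."""
    return {
        molecule: {
            'name': props['name'],
            'molfraction': props['molfraction'],
            'proberad': probe_radii[molecule],
        }
        for molecule, props in conditions['molfraction'].items()
    }

components = build_components(
    {'molfraction': {'xenon': {'name': 'Xe', 'molfraction': 0.2},
                     'krypton': {'name': 'Kr', 'molfraction': 0.8}}},
    probe_radii={'xenon': 1.985, 'krypton': 1.83},
)
```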
- aiida_lsmo.workchains.multicomp_ads_des.get_ff_parameters(components, isotparams)[source]¶
Get the parameters for ff_builder.
aiida_lsmo.workchains.multicomp_gcmc module¶
MulticompGcmc work chain.
- class aiida_lsmo.workchains.multicomp_gcmc.MulticompGcmcWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
Compute multicomponent GCMC in crystalline materials (or in an empty box), for a mixture of components and at specific temperature/pressure conditions.
- _get_gcmc_inputs()[source]¶
Generate Raspa input parameters from scratch, for a multicomponent GCMC calculation.
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- inspect_zeopp_calc()[source]¶
Assert that all Zeo++ calculations have finished ok; if so, process the Zeo++ results.
- parameters_info:
  - ff_cutoff (Required, int | float): CutOff truncation for the VdW interactions (Angstrom).
  - ff_framework (Required, str): Forcefield of the structure (used also as a definition of ff.rad for zeopp).
  - ff_mixing_rule (Required, 'Lorentz-Berthelot' | 'Jorgensen'): Mixing rule.
  - ff_separate_interactions (Required, bool): If true, use only ff_framework for framework-molecule interactions in the FFBuilder.
  - ff_shifted (Required, bool): Shift or truncate the potential at cutoff.
  - ff_tail_corrections (Required, bool): Apply tail corrections.
  - raspa_gcmc_init_cycles (Required, int): Number of GCMC initialization cycles.
  - raspa_gcmc_prod_cycles (Required, int): Number of GCMC production cycles.
  - raspa_verbosity (Required, int): Print stats every: number of cycles / raspa_verbosity.
  - zeopp_block_samples (Required, int): Number of samples for BLOCK calculation (per A^3).
  - zeopp_probe_scaling (Required, int | float): Scaling of the probe's diameter: molecular_rad * scaling.
- return_output_parameters()[source]¶
Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.
- aiida_lsmo.workchains.multicomp_gcmc.get_atomic_radii(isotparam)[source]¶
Get {ff_framework}.rad as SinglefileData from workchain/isotherm_data. If it does not exist, use DEFAULT.rad.
- aiida_lsmo.workchains.multicomp_gcmc.get_components_dict(conditions, parameters)[source]¶
Construct the components dict, like: {'xenon': {'name': 'Xe', 'molfraction': xxx, 'proberad': xxx, 'zeopp': {...}}, ...}
aiida_lsmo.workchains.nanoporous_screening_1 module¶
ZeoppMultistageDdecPeWorkChain workchain
- class aiida_lsmo.workchains.nanoporous_screening_1.NanoporousScreening1WorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
A work chain that combines ZeoppMultistageDdecWorkChain (wc1) and IsothermCalcPEWorkChain (wc2). In the future this will be extended to include more applications running in parallel.
aiida_lsmo.workchains.parameters_schemas module¶
Schemas for validating input parameters of workchains.
Defines a couple of building blocks that are reused by many workchains.
- class aiida_lsmo.workchains.parameters_schemas.Optional(schema, msg=None, default=..., description=None)[source]¶
Bases:
Marker
Mark a node in the schema as optional, and optionally provide a default
>>> schema = Schema({Optional('key'): str})
>>> schema({})
{}
>>> schema = Schema({Optional('key', default='value'): str})
>>> schema({})
{'key': 'value'}
>>> schema = Schema({Optional('key', default=list): list})
>>> schema({})
{'key': []}
If the 'required' flag is set for an entire schema, optional keys aren't required:
>>> schema = Schema({
...     Optional('key'): str,
...     'key2': str
... }, required=True)
>>> schema({'key2': 'value'})
{'key2': 'value'}
- class aiida_lsmo.workchains.parameters_schemas.Required(schema, msg=None, default=..., description=None)[source]¶
Bases:
Marker
Mark a node in the schema as being required, and optionally provide a default value.
>>> schema = Schema({Required('key'): str})
>>> with raises(er.MultipleInvalid, "required key not provided @ data['key']"):
...     schema({})
>>> schema = Schema({Required('key', default='value'): str})
>>> schema({})
{'key': 'value'}
>>> schema = Schema({Required('key', default=list): list})
>>> schema({})
{'key': []}
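A minimal, dependency-free sketch of how such Required/Optional markers validate a parameter dict and fill in defaults (voluptuous itself does much more, e.g. nested schemas and coercion; the classes below are a toy re-implementation, not the library's):

```python
class Marker:
    def __init__(self, key, default=None, description=None):
        self.key = key
        self.default = default
        self.description = description

class Required(Marker):
    pass

class Optional(Marker):
    pass

def validate(schema, data):
    """Type-check `data` against {marker: type}; fill defaults, reject extras."""
    markers = {m.key: (m, typ) for m, typ in schema.items()}
    extra = set(data) - set(markers)
    if extra:
        raise ValueError(f'extra keys not allowed: {sorted(extra)}')
    out = {}
    for key, (marker, typ) in markers.items():
        if key in data:
            if not isinstance(data[key], typ):
                raise TypeError(f'{key}: expected {typ.__name__}')
            out[key] = data[key]
        elif isinstance(marker, Required):
            raise ValueError(f'required key not provided: {key}')
        elif marker.default is not None:
            out[key] = marker.default
    return out

schema = {
    Required('temperature', description='Temperature of the simulation.'): float,
    Optional('temperature_list', default=[300.0]): list,
}
params = validate(schema, {'temperature': 300.0})
```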
aiida_lsmo.workchains.sim_annealing module¶
Simulated Annealing workchain
- class aiida_lsmo.workchains.sim_annealing.SimAnnealingWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
A work chain to compute the minimum-energy geometry of a molecule inside a framework using simulated annealing, i.e., by decreasing the temperature of a Monte Carlo simulation and finally running an energy minimization step.
- _get_raspa_nvt_param()[source]¶
Write the Raspa input parameters from scratch, for an MC NVT calculation.
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- parameters_info:
  - ff_cutoff (Required, int | float): CutOff truncation for the VdW interactions (Angstrom).
  - ff_framework (Required, str): Forcefield of the structure (used also as a definition of ff.rad for zeopp).
  - ff_mixing_rule (Required, 'Lorentz-Berthelot' | 'Jorgensen'): Mixing rule.
  - ff_separate_interactions (Required, bool): If true, use only ff_framework for framework-molecule interactions in the FFBuilder.
  - ff_shifted (Required, bool): Shift or truncate the potential at cutoff.
  - ff_tail_corrections (Required, bool): Apply tail corrections.
  - mc_steps (Required, int): Number of MC cycles.
  - number_of_molecules (Required, int): Number of molecules loaded in the framework.
  - temperature_list (Required, list): List of decreasing temperatures for the annealing.
- aiida_lsmo.workchains.sim_annealing.get_molecule_from_restart_file(structure_cif, molecule_folderdata, input_dict, molecule_dict)[source]¶
Get a CifData file having the cell of the initial (unexpanded) structure and the geometry of the loaded molecule. TODO: this is a source of error if there is more than one molecule AND the cell has been expanded, since the molecules cannot be wrapped in the small cell.
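The annealing loop described for SimAnnealingWorkChain (Metropolis MC at each temperature of a decreasing temperature_list, then an energy minimization) can be sketched on a toy 1-D energy landscape. This is an illustration of the technique, not the plugin's Raspa-based implementation.

```python
import math
import random

def anneal(energy, x0, temperature_list, mc_steps, step=0.1, seed=42):
    """Metropolis MC at each temperature of a decreasing schedule, followed
    by a crude 'minimization' stage (near-zero-temperature moves)."""
    rng = random.Random(seed)
    x = x0
    for temp in temperature_list + [1e-9]:   # final ~0 K stage mimics minimization
        for _ in range(mc_steps):
            x_new = x + rng.uniform(-step, step)
            d_e = energy(x_new) - energy(x)
            # accept downhill moves always, uphill with Boltzmann probability
            if d_e <= 0 or rng.random() < math.exp(-d_e / temp):
                x = x_new
    return x

def well(x):
    """Toy energy landscape with its minimum at x = 1."""
    return (x - 1.0) ** 2

x_min = anneal(well, x0=0.0, temperature_list=[1.0, 0.5, 0.1], mc_steps=500)
```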
aiida_lsmo.workchains.singlecomp_widom module¶
SinglecompWidom work chain.
- class aiida_lsmo.workchains.singlecomp_widom.SinglecompWidomWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
Compute Widom insertion for a framework/box at different temperatures.
- classmethod define(spec)[source]¶
Define the specification of the process, including its inputs, outputs and known exit codes.
A metadata input namespace is defined, with optional ports that are not stored in the database.
- inspect_zeopp_calc()[source]¶
Asserts that all Widom calculations finished successfully and exposes the block file.
- parameters_info¶
Schema of the accepted parameters dictionary (all keys required unless noted):
ff_framework (str): Forcefield of the structure (used also as a definition of ff.rad for zeopp)
ff_separate_interactions (bool): If true, use only ff_framework for framework-molecule interactions in the FFBuilder
ff_mixing_rule ('Lorentz-Berthelot' | 'Jorgensen'): Mixing rule
ff_tail_corrections (bool): Apply tail corrections
ff_shifted (bool): Shift or truncate the potential at the cutoff
ff_cutoff (int | float): Cutoff truncation for the VdW interactions (Angstrom)
zeopp_probe_scaling (int | float): Scaling of the probe's diameter: molecular_rad * scaling
zeopp_block_samples (int): Number of samples for the BLOCK calculation (per A^3)
raspa_verbosity (int): Print stats every: number of cycles / raspa_verbosity
raspa_widom_cycles (int): Number of Widom cycles
temperatures (list of int | float, optional): Temperatures at which to perform the Widom insertion
- parameters_schema¶
Voluptuous Schema built from parameters_info (extra keys are not allowed).
- return_output_parameters()[source]¶
Merge all the parameters into output_parameters, depending on is_porous and is_kh_enough.
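A minimal sketch of a parameters dictionary for the single-component Widom work chain; values are illustrative placeholders:

```python
# Illustrative parameters for the single-component Widom work chain.
# Keys follow the schema documented above; values are examples only.
widom_parameters = {
    'ff_framework': 'UFF',                  # force field of the structure
    'ff_separate_interactions': False,      # mix framework and molecule force fields
    'ff_mixing_rule': 'Lorentz-Berthelot',  # or 'Jorgensen'
    'ff_tail_corrections': True,            # apply tail corrections
    'ff_shifted': False,                    # truncate (False) or shift (True) at cutoff
    'ff_cutoff': 12.0,                      # VdW cutoff in Angstrom
    'zeopp_probe_scaling': 1.0,             # probe diameter = molecular_rad * scaling
    'zeopp_block_samples': 100,             # BLOCK samples per A^3
    'raspa_verbosity': 10,                  # print stats every cycles/verbosity
    'raspa_widom_cycles': 100000,           # number of Widom insertion cycles
    'temperatures': [77, 198, 298],         # optional: temperatures for the insertions
}
```

As in the examples directory, the dictionary would be passed to the work chain (loaded via WorkflowFactory) as a Dict input alongside the structure, molecule, and code inputs.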
aiida_lsmo.workchains.zeopp_multistage_ddec module¶
ZeoppMultistageDdecWorkChain work chain
- class aiida_lsmo.workchains.zeopp_multistage_ddec.ZeoppMultistageDdecWorkChain(*args: Any, **kwargs: Any)[source]¶
Bases:
WorkChain
A workchain that combines: Zeopp + Cp2kMultistageWorkChain + Cp2kDdecWorkChain + Zeopp
- parameters_info¶
Schema of the accepted Zeo++ parameters dictionary (all keys required):
ha (str): Use high accuracy (mandatory!)
res (bool): Maximum included, free, and included-in-free sphere
sa ([int | float, int | float, int]): Nitrogen probe to compute the surface area
vol ([int | float, int | float, int]): Geometric pore volume
volpo ([int | float, int | float, int]): Nitrogen probe to compute the probe-occupiable (PO) pore volume
psd ([int | float, int | float, int]): Small probe to compute the pore size distribution
- parameters_schema¶
Voluptuous Schema built from parameters_info (extra keys are not allowed).
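A minimal sketch of a Zeo++ parameters dictionary matching this schema; the three-element lists follow Zeo++'s (probe radius, probe radius, number of samples) convention, and all values here are illustrative placeholders:

```python
# Illustrative Zeo++ parameters for ZeoppMultistageDdecWorkChain.
# Keys follow the schema documented above; values are examples only.
zeopp_parameters = {
    'ha': 'DEF',                     # high-accuracy setting (mandatory)
    'res': True,                     # max included/free/included-in-free sphere
    'sa': [1.86, 1.86, 100000],      # N2 probe: surface area
    'vol': [0.0, 0.0, 100000],       # zero-radius probe: geometric pore volume
    'volpo': [1.86, 1.86, 100000],   # N2 probe: probe-occupiable pore volume
    'psd': [1.2, 1.2, 10000],        # small probe: pore size distribution
}
```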
Module contents¶
Workchains developed at LSMO laboratory.
Module contents¶
AiiDA workflows for the LSMO laboratory at EPFL
If you use this plugin for your research, please cite the following work:
Daniele Ongari, Aliaksandr V. Yakutovich, Leopold Talirz, and Berend Smit, Building a Consistent and Reproducible Database for Adsorption Evaluation in Covalent-Organic Frameworks, ACS Cent. Sci. 5, 10, 1663-1675 (2019); DOI: 10.1021/acscentsci.9b00619
If you use AiiDA for your research, please cite the following work:
AiiDA >= 1.0: Sebastiaan P. Huber, Spyros Zoupanos, Martin Uhrin, Leopold Talirz, Leonid Kahle, Rico Häuselmann, Dominik Gresch, Tiziano Müller, Aliaksandr V. Yakutovich, Casper W. Andersen, Francisco F. Ramirez, Carl S. Adorf, Fernando Gargiulo, Snehal Kumbhar, Elsa Passaro, Conrad Johnston, Andrius Merkys, Andrea Cepellotti, Nicolas Mounet, Nicola Marzari, Boris Kozinsky, and Giovanni Pizzi, AiiDA 1.0, a scalable computational infrastructure for automated reproducible workflows and data provenance, Scientific Data 7, 300 (2020); DOI: 10.1038/s41597-020-00638-4
AiiDA >= 1.0: Martin Uhrin, Sebastiaan P. Huber, Jusong Yu, Nicola Marzari, and Giovanni Pizzi, Workflows in AiiDA: Engineering a high-throughput, event-based engine for robust and modular computational workflows, Computational Materials Science 187, 110086 (2021); DOI: 10.1016/j.commatsci.2020.110086
AiiDA < 1.0: Giovanni Pizzi, Andrea Cepellotti, Riccardo Sabatini, Nicola Marzari, and Boris Kozinsky, AiiDA: automated interactive infrastructure and database for computational science, Computational Materials Science 111, 218-230 (2016); DOI: 10.1016/j.commatsci.2015.09.013
aiida-lsmo is released under the MIT license.