Hi,
The ability to run in parallel depends on what version you have. When ADL 4.1 was originally released, the command line chain runner could not process algorithms in parallel. However, a patch was released to fix that (making ADL 4.1.1). That patch is available at
https://jpss.ssec.wisc.edu/adl/download ... L_4.1_Pat/

If your $ADL_HOME/.version says ADL 4.1.1, you should be able to process granules in parallel. You can do this with the -m option. If you don't have the -m option, you can try to apply the patch.
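A quick way to check which case you're in is to grep the version file mentioned above. This is just a sketch; for a self-contained demo it fakes a .version file in a scratch directory rather than touching a real installation:

```shell
# Hedged sketch: check the installed version before trying parallel runs.
# $ADL_HOME/.version is assumed to hold the version string as described above.
# For demonstration purposes we fabricate one in a temp directory.
ADL_HOME=$(mktemp -d)
echo "ADL 4.1.1" > "$ADL_HOME/.version"

if grep -q "4\.1\.1" "$ADL_HOME/.version"; then
    status="parallel-capable"   # the -m option should be available
else
    status="needs-patch"        # pre-4.1.1: apply the patch first
fi
echo "$status"
rm -rf "$ADL_HOME"
```

Point ADL_HOME at your actual installation instead of the temp directory to run the check for real.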
However, that patch is for the version of ADL that has *not* already been converted into a virtual appliance. So, if you have installed ADL from source (i.e. you extracted source code from tar files), you should be able to use the patch. If you use the virtual appliance, I am not sure whether the patch is included in the latest appliance image, or whether it will install successfully inside the appliance. I have sent an email to the University of Wisconsin to determine the status of the 4.1.1 patch with respect to the virtual appliance.
If you want to try the patch, I would recommend making another ADL installation and attempting to apply the patch to that copy. If you have the virtual appliance or you have made code changes since ADL was installed, I cannot guarantee the patch will install correctly.
To answer your question, yes I *believe* that the landing zone and the location in the lw_properties files are the only locations to which data is written.
I have a few questions about your specific setup that may let us make some recommendations to help you:
--Are you using the virtual appliance or have you installed from source?
--Have you compiled optimized or are you compiling debug? Compiling optimized takes longer, but makes things run faster.
--How much memory and how many CPUs do you have on the machine you're using?
--What algorithms are you running? Are you running only cloud mask, or are you running algorithms earlier in the chain such as VIIRS SDR?
--How much data is in the algorithm's input directories? A quick way to find out is "find . -name \*.asc | wc -l" to count the number of input files.
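To show what that count looks like, here is a self-contained illustration run against a scratch directory with a few fake .asc files (in practice you would run the find command from inside the algorithm's input directory):

```shell
# Count .asc input files under a directory, as suggested above.
# Scratch directory and fake files are for demonstration only.
dir=$(mktemp -d)
touch "$dir/a.asc" "$dir/b.asc" "$dir/notes.txt"
mkdir "$dir/sub" && touch "$dir/sub/c.asc"

# find descends into subdirectories, so nested .asc files are counted too.
count=$(find "$dir" -name '*.asc' | wc -l)
echo "$count asc files"
rm -rf "$dir"
```

Here the count is 3: the .txt file is excluded by the -name pattern, and the nested file is included.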
--I understand the command line chain runner is easier to use in some instances, such as scripting, but have you tried the GUI chain runner? It should be able to process things in parallel.
ADL 4.1 uses OpenMP while reading input files in an attempt to make the reading faster. In $ADL_HOME/build/envSetup.ksh you can try changing OMP_NUM_THREADS to other values to see if that helps. Unless you have very fast hard disks or a solid state drive I'd recommend leaving the value at 2 because that's the value that worked the best for us. OpenMP is *not* used during algorithm processing.
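The envSetup.ksh change is a one-line edit. Shown here against a scratch copy so nothing real is modified (the exact line format in your envSetup.ksh may differ slightly):

```shell
# Sketch of the tweak described above: changing OMP_NUM_THREADS in
# $ADL_HOME/build/envSetup.ksh. A scratch file stands in for the real one.
f=$(mktemp)
echo 'export OMP_NUM_THREADS=2' > "$f"

# Try 4 threads as an experiment (2 is the value that worked best for us).
updated=$(sed 's/OMP_NUM_THREADS=[0-9]*/OMP_NUM_THREADS=4/' "$f")
echo "$updated"
rm -f "$f"
```

Remember to re-source envSetup.ksh (or start a new shell) after editing so the new value takes effect.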
Additionally, if you have large quantities of static data (such as tiles) you could try creating "jumbo" asc files (with extension .jasc). Note that these may already exist in tile input directories. A .jasc is just a single file containing the contents of a series of .asc files. Using a .jasc file can speed up initialization because only 1 file read is performed for the .jasc file, instead of a file read for each .asc file. You can create a .jasc file using $ADL_HOME/script/createJascFiles.sh. Note that there are a couple things to watch out for when using .jasc files:
--Check to make sure they're not already there.
--A .jasc file in a directory will cause all .asc files to be ignored.
--A .jasc file can only be used for data which does not change.
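To make the idea concrete: conceptually a .jasc is just the contents of the .asc files in one file. The supported way to build one is $ADL_HOME/script/createJascFiles.sh; the concatenation below is only an illustration of the concept, not a replacement for that script:

```shell
# Illustration only -- use createJascFiles.sh for real data.
# A .jasc holds the contents of a series of .asc files in a single file,
# so initialization does one read instead of one read per .asc.
dir=$(mktemp -d)
printf 'metadata for granule one\n' > "$dir/g1.asc"
printf 'metadata for granule two\n' > "$dir/g2.asc"

cat "$dir"/*.asc > "$dir/combined.jasc"
jasc_lines=$(wc -l < "$dir/combined.jasc")
echo "$jasc_lines"
rm -rf "$dir"
```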
Once you get the parallel processing ability, it is limited to 2 processes by default. You can change the maximum by changing "THREAD_COUNT" in $ADL_HOME/.lw_properties. It is set to 2 because running too many processes can easily overwhelm a machine if it doesn't have much memory or many CPUs.
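Raising that cap is another one-line edit. The sketch below runs against a scratch copy of .lw_properties; the property name THREAD_COUNT is from above, but I'm assuming a simple KEY=value line format, so check your actual file before editing:

```shell
# Sketch of raising the parallel-process cap described above.
# A scratch file stands in for the real $ADL_HOME/.lw_properties.
props=$(mktemp)
echo 'THREAD_COUNT=2' > "$props"

# Raise the limit to 4 -- only sensible if the machine has the
# memory and CPUs to support more simultaneous processes.
threads=$(sed 's/THREAD_COUNT=[0-9]*/THREAD_COUNT=4/' "$props")
echo "$threads"
rm -f "$props"
```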
I look forward to hearing from you.