# CLI Reference

The primary entry point is `gpu_main.py`.

## Usage

```
python gpu_main.py [OPTIONS]
```

## Arguments
| Argument | Type | Default | Choices | Description |
|---|---|---|---|---|
| `--config` | string | `config.json` | -- | Path to configuration file |
| `--mode` | string | `single` | `single`, `performance`, `monte_carlo` | Simulation mode |
| `--profile` | flag | false | -- | Enable TensorFlow profiling |
| `--runner` | string | `full` | `full`, `chunked`, `no_save` | Simulation runner mode |
| `--chunk-size` | int | 100 | -- | Chunk size for the chunked runner |
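The table above maps naturally onto Python's standard `argparse` module. The sketch below is a hypothetical reconstruction of the CLI surface, not the actual parser in `gpu_main.py` (names, defaults, and choices are taken from the table; everything else is an assumption):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the CLI described in the table above;
    # the real parser inside gpu_main.py may differ.
    parser = argparse.ArgumentParser(prog="gpu_main.py")
    parser.add_argument("--config", default="config.json",
                        help="Path to configuration file")
    parser.add_argument("--mode", default="single",
                        choices=["single", "performance", "monte_carlo"],
                        help="Simulation mode")
    parser.add_argument("--profile", action="store_true",
                        help="Enable TensorFlow profiling")
    parser.add_argument("--runner", default="full",
                        choices=["full", "chunked", "no_save"],
                        help="Simulation runner mode")
    parser.add_argument("--chunk-size", type=int, default=100,
                        help="Chunk size for the chunked runner")
    return parser

args = build_parser().parse_args(["--runner", "chunked", "--chunk-size", "50"])
print(args.runner, args.chunk_size)  # chunked 50
```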
## Modes

### single (default)

Runs a single atmospheric turbulence simulation:

```
python gpu_main.py --mode single
```
### performance

Runs the performance test suite, benchmarking the simulator across varying configurations (r0 values, resolutions, layer counts):

```
python gpu_main.py --mode performance
```
See Performance Testing for details.
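A sweep over varying configurations like the one described above is typically a Cartesian product of parameter values. The sketch below illustrates the idea with made-up parameter grids; the actual values and the per-configuration benchmark live in the project's performance-test code:

```python
import itertools
import time

# Hypothetical parameter grid (values are illustrative, not the suite's).
r0_values = [0.05, 0.10, 0.20]   # Fried parameter r0 in meters
resolutions = [256, 512]         # phase-screen grid sizes
layer_counts = [1, 3]            # number of atmospheric layers

results = []
for r0, n, layers in itertools.product(r0_values, resolutions, layer_counts):
    start = time.perf_counter()
    # ... run one simulation with this configuration ...
    elapsed = time.perf_counter() - start
    results.append({"r0": r0, "resolution": n, "layers": layers,
                    "seconds": elapsed})

print(len(results))  # 12 configurations benchmarked
```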
### monte_carlo

Runs a Monte Carlo analysis with repeated simulations and statistical aggregation:

```
python gpu_main.py --mode monte_carlo
```
See Monte Carlo for details.
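The statistical aggregation step amounts to collecting one metric per repeated run and summarizing it. A minimal sketch, assuming a scalar per-run metric (the metric name and sample values below are invented for illustration):

```python
import statistics

def aggregate(samples):
    # Summarize one scalar metric across repeated Monte Carlo runs.
    return {
        "n": len(samples),
        "mean": statistics.fmean(samples),
        "std": statistics.stdev(samples) if len(samples) > 1 else 0.0,
    }

# Hypothetical per-run metric values (e.g. a Strehl-like quality figure).
per_run_metric = [0.61, 0.58, 0.64, 0.60]
print(aggregate(per_run_metric))
```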
## Runner Modes

The `--runner` flag controls how frames are stored during a single simulation:
### full

All frames are kept in GPU memory and saved as a single `frames.npy` file. This is the fastest option for short simulations but uses the most memory.

```
python gpu_main.py --runner full
```
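Because the full runner produces one `frames.npy` file, the output can be read back directly with NumPy. The round-trip below is a sketch; the `(n_frames, height, width)` layout is an assumption, so check the shape of your actual output:

```python
import numpy as np

# Stand-in for the simulator's output: 10 frames of 64x64 phase screens.
frames = np.random.default_rng(0).standard_normal((10, 64, 64)).astype(np.float32)
np.save("frames.npy", frames)

# Loading the full runner's output is a single np.load call.
loaded = np.load("frames.npy")
print(loaded.shape)  # (10, 64, 64)
```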
### chunked

Processes frames in chunks and streams them to disk, using double-buffered writes to overlap computation and I/O. Produces a `frames_chunk_manifest.json` index file alongside the chunk files.

```
python gpu_main.py --runner chunked --chunk-size 50
```
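Double-buffered streaming means a background writer flushes the previous chunk while the main loop fills the next one. The sketch below illustrates that pattern with a writer thread and a one-slot queue; the chunk filenames and manifest layout here are assumptions, not the tool's actual on-disk format:

```python
import json
import queue
import threading
import numpy as np

def write_chunks(n_frames=10, chunk_size=4, shape=(8, 8)):
    # One chunk in flight at a time: the main loop fills the next chunk
    # while the writer thread saves the previous one (double buffering).
    q = queue.Queue(maxsize=1)
    manifest = []

    def writer():
        while True:
            item = q.get()
            if item is None:
                return
            idx, chunk = item
            fname = f"frames_chunk_{idx:04d}.npy"  # hypothetical naming
            np.save(fname, chunk)
            manifest.append({"file": fname, "frames": len(chunk)})

    t = threading.Thread(target=writer)
    t.start()

    chunk, idx = [], 0
    for i in range(n_frames):
        chunk.append(np.full(shape, i, dtype=np.float32))  # stand-in frame
        if len(chunk) == chunk_size:
            q.put((idx, np.stack(chunk)))
            chunk, idx = [], idx + 1
    if chunk:  # flush the final, possibly partial, chunk
        q.put((idx, np.stack(chunk)))
    q.put(None)
    t.join()

    with open("frames_chunk_manifest.json", "w") as f:
        json.dump(manifest, f)
    return manifest

print(write_chunks())
```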
### no_save

Runs the simulation without saving any frames. Useful for throughput benchmarking and testing.

```
python gpu_main.py --runner no_save
```
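Throughput benchmarking in this spirit is just timing the frame loop with all disk I/O removed. A minimal sketch, with a placeholder standing in for one simulation step:

```python
import time

def benchmark(n_frames=100):
    # Time the bare frame loop, no saving, and report frames per second.
    start = time.perf_counter()
    for _ in range(n_frames):
        sum(i * i for i in range(1000))  # stand-in for one simulation step
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

fps = benchmark()
print(f"{fps:.1f} frames/s")
```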
## Examples

```
# Basic run with defaults
python gpu_main.py

# Custom config with profiling
python gpu_main.py --config my_config.json --profile

# Chunked streaming with small chunks
python gpu_main.py --runner chunked --chunk-size 25

# Performance test suite
python gpu_main.py --mode performance

# Monte Carlo analysis
python gpu_main.py --mode monte_carlo

# Dataset generation (configured via the create_data_set flag in config.json)
python gpu_main.py
```
## TensorFlow Profiling

When `--profile` is passed, TensorFlow traces are saved to the output directory under `monitoring/tensorboard/`. View them with:

```
tensorboard --logdir outputs/YYYY-MM-DD/HH/RUN_N/monitoring/tensorboard/
```