
- image1_sensorname_camera1_metadata.json
- image2_sensorname_camera1_metadata.json
- image1_sensorname_camera2_metadata.json

In `src/ops.py`, the function `add_camera_name(dataset_dir)` can be used to rename image filenames and the corresponding ground-truth JSON files. Each JSON file should include a key named either `illuminant_color_raw` or `gt_ill` that holds the ground-truth illuminant color of the corresponding image.

The following parameters are required to set model configuration and training data information:

- `--data-num`: the number of images used for each inference (additional images + the input query image).
- `--input-size`: number of histogram bins (referred to as m in the main paper).
- `--learn-G`: to use a G multiplier as explained in the paper.
- `--training-dir-in`: training image directory.
- `--validation-dir-in`: validation image directory. When this variable is None (default), the validation set will be taken from the training data based on `--validation-ratio`.
- `--validation-ratio`: when `--validation-dir-in` is None, this argument determines the validation set ratio of the image set in the `--training-dir-in` directory.
- `--augmentation-dir`: directory(s) of augmentation data (optional).
- `--model-name`: name of the trained model.

The following parameters are useful to control training settings and hyperparameters:

- `--load-hist`: to load histograms if pre-computed (recommended).
- `--optimizer`: optimization algorithm for stochastic gradient descent; options are Adam or SGD.
- `--model-location`: when `--load` is True, this variable should point to the full path of the model to be loaded.
- `--validation-frequency`: validation frequency (in epochs).
- `--cross-validation`: to use three-fold cross-validation. When this variable is True, `--validation-dir-in` and `--validation-ratio` will be ignored, and three-fold cross-validation on the data provided in `--training-dir-in` will be applied.
- `--smoothness-factor-*`: smoothness loss factor of the following model components: F (conv filter), B (bias), G (multiplier layer). For example, `--smoothness-factor-F` can be used to set the smoothness loss for the conv filter.
- `--increasing-batch-size`: to increase the batch size during training.
- `--grad-clip-value`: gradient clipping value; if it is set to 0 (default), no clipping is applied.

To test a pre-trained C5 model, testing data should have the same formatting described above. The following parameters are required to set model configuration and testing data information:

- `--g-multiplier`: to use a G multiplier as explained in the paper.
- `--testing-dir-in`: testing image directory.
- `--multiple_test`: to apply multiple tests (ten, as mentioned in the paper) and save their results.
- `--white-balance`: to save white-balanced testing images.
- `--cross-validation`: to use three-fold cross-validation. When it is set to True, there should be three pre-trained models saved with a postfix of the fold number.

The testing image filenames should be listed in `.npy` files located in the `folds` directory, with the same name as the dataset, which should be the same as the folder name given to `--testing-dir-in`.

In the `images` directory, there are a few examples captured by a mobile Sony IMX135 sensor from the INTEL-TAU dataset. To white balance these raw images, as shown in the figure below, using a C5 model (trained on DSLR cameras from the NUS and Gehler-Shi datasets), use the following command:

`python test.py --testing-dir-in ./images --white-balance True --model-name C5_m_7_h_64`

To test with the gain multiplier, use the following command:

`python test.py --testing-dir-in ./images --white-balance True --g-multiplier True --model-name C5_m_7_h_64_w_G`

Note that in testing, C5 does not require any metadata; the testing code only uses the JSON files to load the ground-truth illumination for comparison with our estimated values.

The raw-to-raw augmentation functions are provided in `src/aug_ops.py`.
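Each metadata JSON file stores the ground-truth illuminant under either `illuminant_color_raw` or `gt_ill`, so a loader has to check both keys. Below is a minimal Python sketch of such a loader; the helper name `load_gt_illuminant` is our own illustration, not a function from the repo:

```python
import json

def load_gt_illuminant(json_path):
    """Return the ground-truth illuminant color stored in a metadata JSON file.

    The value may be stored under either 'illuminant_color_raw' or 'gt_ill',
    so both keys are tried in turn.
    """
    with open(json_path) as f:
        meta = json.load(f)
    for key in ("illuminant_color_raw", "gt_ill"):
        if key in meta:
            return meta[key]
    raise KeyError(f"no ground-truth illuminant key found in {json_path}")
```

For example, calling it on `image1_sensorname_camera1_metadata.json` would return the RGB illuminant vector stored in that file.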
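The `.npy` fold files that list the testing image filenames can be produced with NumPy. The sketch below is illustrative only — the directory, dataset name, and filenames are made up, and the exact array layout the repo's testing code expects may differ:

```python
import os
import tempfile

import numpy as np

# Stand-in for the 'folds' directory (hypothetical path for this example).
folds_dir = tempfile.mkdtemp()

# The .npy file name should match the dataset/folder name passed to
# --testing-dir-in; "INTEL-TAU" here is just an example.
dataset_name = "INTEL-TAU"

# Testing image filenames following the image*_sensorname_camera* convention.
test_files = np.array([
    "image1_sensorname_IMX135.png",
    "image2_sensorname_IMX135.png",
])

# Save the filename list, then load it back the way a test loop might.
fold_path = os.path.join(folds_dir, dataset_name + ".npy")
np.save(fold_path, test_files)
loaded = np.load(fold_path)
```

`np.load` returns the same array of strings, so the testing code can iterate over `loaded` to locate each image in the testing directory.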
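To make the `image*_sensorname_camera*` naming convention concrete, here is a rough sketch of a renaming helper in the spirit of `add_camera_name` in `src/ops.py`. The explicit `camera_name` argument and all of the logic below are our own illustration under that naming convention, not the repo's implementation:

```python
import os

def add_camera_name(dataset_dir, camera_name):
    """Illustrative sketch: insert 'sensorname_<camera_name>' into each image
    and metadata JSON filename so files follow the
    image*_sensorname_camera*.png / *_metadata.json convention."""
    for fname in os.listdir(dataset_dir):
        base, ext = os.path.splitext(fname)
        if ext.lower() not in (".png", ".json"):
            continue  # ignore unrelated files
        if "_sensorname_" in base:
            continue  # already follows the convention
        if base.endswith("_metadata"):
            # Keep the '_metadata' suffix next to the extension.
            stem = base[: -len("_metadata")]
            new_name = f"{stem}_sensorname_{camera_name}_metadata{ext}"
        else:
            new_name = f"{base}_sensorname_{camera_name}{ext}"
        os.rename(os.path.join(dataset_dir, fname),
                  os.path.join(dataset_dir, new_name))
```

Running it on a directory containing `image1.png` and `image1_metadata.json` with `camera_name="camera1"` would yield `image1_sensorname_camera1.png` and `image1_sensorname_camera1_metadata.json`.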