Each data point is an average of 5 runtimes collected from running the program. To recreate our data, compile the programs using the instructions in README.md, then run gather_data in the analysis directory from the project root. After the SLURM jobs are finished, run analysis/collate_data to collect the data into a single file, analysis/data.csv. Note that the SLURM scripts are specific to MU Cluster and will need to be tweaked to work on other systems.
For a raw csv view of the data see data.csv. We collected data for six fractals: Mandelbrot, Burning Ship, Tricorn, Multibrot, Multicorn, and Julia, with a maximum iteration count of 25. We collected 3 data points per serial experiment and 5 data points per shared and CUDA experiment. For a more detailed view of our collection methods, see the scripts in analysis/scripts. The runtimes for CUDA experiments include the time to copy the data to and from the GPU, for reasons that will become apparent later. Note that the block size is set to \(16 \times 16\) for all CUDA experiments, since we found a maximum percentage difference of \(0.2\%\) between \(16 \times 16\) and other block sizes that yield the maximum number of warps.
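The plots that follow read from a data frame named full_data. A minimal sketch of that setup is shown below; the column names (program, fractal, samples, threads, runtime) are inferred from the plotting code rather than taken from the collate script, so adjust them to match data.csv.

# Sketch: load the collated runtimes into the data frame used by the plots.
# Column names are assumptions inferred from the plotting code below.
library(tidyverse)  # readr, dplyr, and ggplot2
library(plotly)     # ggplotly for the interactive plots

full_data <- read_csv("analysis/data.csv")
glimpse(full_data)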
The following plots are interactive, allowing you to zoom in on the data. By clicking on the legend you can hide or show data for each fractal. Each plot has a line representing the median runtime for each fractal and program.
serial_plot <- full_data %>%
  filter(program == "serial") %>%
  ggplot(aes(x = samples, y = runtime, color = fractal)) +
  geom_point() +
  stat_summary(fun = median, geom = "line",
               aes(group = fractal)) +
  labs(title = "Serial Sampling's Effect on Runtimes",
       x = "Samples",
       y = "Runtime (s)",
       color = "Fractal")
ggplotly(serial_plot)
cuda_plot <- full_data %>%
  filter(program == "cuda") %>%
  ggplot(aes(x = samples, y = runtime, color = fractal)) +
  geom_point() +
  stat_summary(fun = median, geom = "line",
               aes(group = fractal)) +
  labs(title = "CUDA Sampling's Effect on Runtimes",
       x = "Samples",
       y = "Runtime (s)",
       color = "Fractal")
ggplotly(cuda_plot)
We fit linear models to the data to test our hypothesis that runtime has a linear relationship with the number of samples. Doing so lets us give, with high confidence, models for predicting the runtime of a fractal given the number of samples. In all the models below (the serial models are shown first, followed by the CUDA models), the \(R^2\) value is very close to 1, indicating that the models fit the data very well.
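The fitting code itself is not reproduced here; the sketch below shows how one runtime-versus-samples model per fractal could be fit and summarised. The helper name, the fractal label, and the use of the modelsummary package are our assumptions, not the exact code behind the tables.

# Sketch: fit one lm(runtime ~ samples) model per fractal for a program.
# Assumes full_data has program, fractal, samples, and runtime columns.
fit_models <- function(data, prog) {
  data %>%
    filter(program == prog) %>%
    split(.$fractal) %>%
    lapply(function(d) lm(runtime ~ samples, data = d))
}

serial_models <- fit_models(full_data, "serial")
cuda_models   <- fit_models(full_data, "cuda")

# One summary table per program, e.g. with the modelsummary package:
# modelsummary::modelsummary(serial_models)

# Predicted serial runtime at 5000 samples for one fractal (label assumed):
predict(serial_models[["Mandelbrot"]], newdata = data.frame(samples = 5000))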
| | Mandelbrot | Burning Ship | Tricorn | Multibrot | Multicorn | Julia |
|---|---|---|---|---|---|---|
| (Intercept) | 0.010 | 0.016 | 0.007 | 0.069 | 0.040 | 0.014 |
| | (0.003) | (0.005) | (0.002) | (0.027) | (0.040) | (0.005) |
| samples | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) |
| Num.Obs. | 96 | 96 | 96 | 96 | 96 | 96 |
| R2 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| R2 Adj. | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| AIC | −408.1 | −339.1 | −524.1 | −1.6 | 74.8 | −340.5 |
| BIC | −400.5 | −331.4 | −516.4 | 6.0 | 82.5 | −332.8 |
| Log.Lik. | 207.072 | 172.560 | 265.031 | 3.824 | −34.384 | 173.257 |
| RMSE | 0.03 | 0.04 | 0.02 | 0.23 | 0.35 | 0.04 |
| | Mandelbrot | Burning Ship | Tricorn | Multibrot | Multicorn | Julia |
|---|---|---|---|---|---|---|
| (Intercept) | 0.182 | 0.182 | 0.182 | 0.184 | 0.185 | 0.182 |
| | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) |
| samples | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) |
| Num.Obs. | 160 | 160 | 160 | 160 | 160 | 160 |
| R2 | 1.000 | 0.999 | 1.000 | 1.000 | 1.000 | 1.000 |
| R2 Adj. | 1.000 | 0.999 | 1.000 | 1.000 | 1.000 | 1.000 |
| AIC | −1765.1 | −1668.1 | −1786.7 | −1370.7 | −1348.4 | −1764.1 |
| BIC | −1755.9 | −1658.8 | −1777.4 | −1361.5 | −1339.2 | −1754.9 |
| Log.Lik. | 885.550 | 837.037 | 896.334 | 688.363 | 677.197 | 885.047 |
| RMSE | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
In the plots below we show the percentage of parallelism according to Amdahl's Law as a function of samples. We truncated the data to only show percentage parallelism greater than -75%. As we expected, the CUDA implementation is highly parallel, while the shared implementations have varying degrees of parallelism. For each thread count, the shared implementations tend to level off at a certain percentage of parallelism. An oddity in the data is that the shared multibrot and multicorn implementations have a negative percentage of parallelism. We believe this comes from their use of complex exponentiation, which is their only major difference from the other fractals. From this, we conclude that there is some resource that each thread must share in multibrot and multicorn that is not shared in the other fractals. Most likely this resource is a specialized arithmetic unit.
On the GPU we see a similar issue: the register count per thread increases from about 32 to 48 when computing the multibrot and multicorn fractals.
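For reference, the percentage parallel follows from solving Amdahl's Law, \(S = \frac{1}{(1 - p) + p/N}\), for the parallel fraction \(p\), giving \(p = \frac{1 - 1/S}{1 - 1/N}\), where \(S\) is the speedup over the serial runtime and \(N\) is the number of threads. The sketch below illustrates that calculation; the function name is ours, and the exact derivation used for the plots lives in analysis/scripts.

# Sketch: percentage parallel from Amdahl's Law, given a measured speedup
# (serial runtime / parallel runtime) on a given number of threads.
amdahl_percentage <- function(speedup, threads) {
  100 * (1 - 1 / speedup) / (1 - 1 / threads)
}

# Example: an 8-thread run that is 6x faster than serial is ~95% parallel.
amdahl_percentage(speedup = 6, threads = 8)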
(Thanks to Yousuf for improving the legibility of these graphs)
parallel_plot <- parallel %>%
  ggplot(aes(x = samples, y = percentage_parallel,
             color = interaction(program, threads))) +
  geom_point() +
  stat_summary(fun = median, geom = "line",
               aes(group = interaction(program, fractal, threads))) +
  facet_wrap(~fractal, scales = "free_y") +
  labs(title = "Program/Fractal Parallelism",
       x = "Samples",
       y = "Percentage Parallel",
       color = "Implementation/Threads")
ggplotly(parallel_plot)
From this data we can conclude that the runtime of fractal generation is linear with respect to the number of samples. We also conclude that the CUDA implementation is highly parallel, with the shared implementations having varying degrees of parallelism.