Tuesday, May 15, 2012

Can of worms

In Walter's comments there was this observation: Figure 4, which plots a fit to the fiducial model, does not agree with the table, which also has a fit to the fiducial model.  At first I thought it must have been something simple, like not using the same domain of the data in the fit, or fitting one parameter instead of two, something like that.  The answer turned out to be quite a bit more interesting.

It turns out that when we switched to the symmetric version of the code and I re-ran all of the simulations from scratch, I forgot to halve the number of particles in the resolution study. So the simulations we have, instead of being [1k, 2k, 5k, 10k, 20k], actually correspond to (in our convention) [2k, 4k, 10k, 20k, 40k].  That seemed like an easy fix.  Not so: the slope in the plot still did not agree with the 5k entry in the table.

I looked into why this was.  It turns out that I had made a resolution-scan directory and had re-run the fiducial model (the 10k case) in that directory, separate from the fiducial model in the initial-condition scan.  Further inspection revealed that the two sets of initial conditions were not actually identical: one IC0 case had particle numbers (5003, 5006, 5009, 5012, 5015, 5018) whereas the other had (5004, 5007, 5010, 5013, 5016, 5019).  Given the smallish dispersion among six sims of similar particle number, it seemed strange that the derived slopes should be so different for the two realizations of the fiducial initial conditions.  Looking into this led me to make the following plot:


It is odd (given how many dozens of plots have been made) that we haven't seen a plot like this before, but I think it is significant.  The x axis is time, and the y axis is the fitted slope, derived from the six simulations.  Red and blue are the two different sets of six.  Although the bootstrap tells us that for a given time step the fit to $\rho(x)$ is very stable (i.e. always the same regardless of which subset of the $\rho(x)$ dataset we take, and stable to moderate changes in the range of data), the fit for different time dumps jumps around considerably.  Jerry thought he saw some kind of beating or periodicity in this plot, so I restarted at time 640 and dumped 16 times as often.  Here are the resulting plots:




It's unclear that these plots teach us anything more than the one above; the points certainly do not interpolate the data above, although I have checked that at the points that coincide, the fits agree precisely, and they do.
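For anyone who wants the per-dump stability check made concrete, here is a rough sketch of the kind of bootstrap I mean.  The power-law form and the function names are just placeholders, not the actual fitting code:

    # Rough sketch of the per-dump bootstrap check (assumed power-law form
    # rho(x) = A * x**slope; names are placeholders, not the real pipeline).
    import numpy as np

    def fit_slope(x, rho):
        """Least-squares fit of log(rho) = log(A) + slope * log(x)."""
        slope, log_a = np.polyfit(np.log(x), np.log(rho), 1)
        return slope

    def bootstrap_slope(x, rho, n_boot=1000, seed=0):
        """Refit resampled subsets of one dump to gauge the fit's stability."""
        rng = np.random.default_rng(seed)
        slopes = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, len(x), size=len(x))  # resample with replacement
            slopes[i] = fit_slope(x[idx], rho[idx])
        return slopes.mean(), slopes.std()

Within one dump that dispersion is small; the scatter in the plot above is the dump-to-dump variation of the best-fit value itself.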

A long time ago I suggested using time averages, and made plots averaging several time steps.  Scott nixed it; here is the thread to remind you:



The fit versus time plot suggested to me that the fits might stabilize significantly if I were to average together several timesteps, as I do for the clumpy warm cases.  Attached are some screen shots with results from this procedure (average of 6 runs and 20 time steps).  One is the new table with fits for the different resolutions, another is the plot where we vary the inner cut, the third is the plot showing the density scaled by the best fit.  These seem cleaner to me, so let me know if you'd like me to use this method for all of the fits...

I'm actually *not* enthusiastic about using the time-averaged plots that you showed me. The reason is that in these plots the error bars (the yellow bands) appear to be much narrower than the random bumps in the density plots (e.g., the bump at 0.15 in IC 2). I confess that I find this surprising---I think the yellow bands come from averaging several different simulations; why should they all show the same bumps and wiggles? Presumably this was true in the non-averaged plots as well, but didn't show up because the error bars were larger.
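For reference, here is roughly what that averaging procedure amounts to; load_profile() and the common radial grid are hypothetical stand-ins, just to make the idea concrete:

    # Sketch of the time-averaging procedure from the email above
    # (6 runs x 20 dumps); load_profile() is a placeholder for however
    # the profiles are actually read from disk.
    import numpy as np

    def load_profile(run, dump):
        """Return (x, rho) for one simulation and one time dump (placeholder)."""
        raise NotImplementedError

    def averaged_profile(runs, dumps, x_grid):
        """Interpolate each (run, dump) profile onto x_grid and average."""
        stack = []
        for run in runs:
            for dump in dumps:
                x, rho = load_profile(run, dump)
                stack.append(np.interp(x_grid, x, rho))
        stack = np.array(stack)
        mean = stack.mean(axis=0)
        err = stack.std(axis=0) / np.sqrt(stack.shape[0])  # width of the band
        return mean, err

The mean profile would then go through the same $\rho(x)$ fit as a single dump, and the standard error is what would set the narrower yellow band Scott refers to.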


At this point I am not keen on redoing every plot again, so I am thinking of adding this plot, to give people a flavor of the kind of variability these systems exhibit.  Let me know what you think.
