Friday, June 11, 2010

Answers to some questions

The delay was caused by my getting distracted for a while on another project. I am back to this one now. I'll attempt to answer a few of Scott's questions.

- why is the minimum energy in the histograms 0.5? Is this obvious?


I have an answer for this one, but it's not a good answer. The reason is that the quantity "p", which I have been assuming is the potential energy, has a minimum value around 0.43. I poked around and found that this floor is reasonably independent of the number of particles (it is the same for 1k as for 20k), and it also does not care whether there is an odd or even number of particles. I can't think of a physical reason for a floor in the potential energy, so I am assuming that I am misinterpreting the output of the code.
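For what it's worth, the check amounts to something like the sketch below. This is a minimal sketch only: the snapshot filenames and the column holding p are assumptions, since I haven't pinned down exactly what the code is writing out.

    import numpy as np

    # Hypothetical snapshot layout: one row per particle, with an assumed
    # column holding the quantity "p" that I have been reading as the
    # potential energy.
    P_COLUMN = 2

    def min_p(filename):
        """Return the smallest value of p in one snapshot."""
        data = np.loadtxt(filename)
        return data[:, P_COLUMN].min()

    # Compare runs with different particle numbers (filenames are made up).
    for fname in ["snap_N1000.dat", "snap_N20000.dat"]:
        print(fname, min_p(fname))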


- do the animations of the energy histograms start from t=0? If so, shouldn't there be some initial stage in which the distribution is changing rapidly (violent relaxation)?

I'm not sure what you mean by t=0, but the animations begin when I release the heavy particle (which is done by replacing one of the light ones while keeping its position and velocity; a sketch of that swap is below, after the movie notes). I release the particle a LONG time after the initial conditions, so if you were looking for the initial wind-up, that happened long before. The massive particle does not seem to have a period of rapid energy exchange with its neighbors. In case you were interested in stage 1, I made you a movie of that here:

[embedded movie]

One thing to notice here is that the aforementioned floor in p seems to get established later on: there are frames in here where the minimum total energy (and thus the minimum p) is definitely smaller than the 0.43 that I mentioned. One hint about these movies: when a movie is paused you can grab the little progress bar and drag it back and forth to see particular frames.
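As promised, here is roughly what the heavy-particle swap looks like. The array names, file names, and the heavy mass value are all assumptions; the real code just changes one particle's mass in place while leaving its position and velocity alone.

    import numpy as np

    # Assumed state arrays for the N light particles.
    x = np.load("positions.npy")     # positions
    v = np.load("velocities.npy")    # velocities
    m = np.load("masses.npy")        # masses

    M_HEAVY = 100.0 * m[0]           # assumed mass of the heavy particle

    # Pick one light particle (here, the one nearest the center) and simply
    # change its mass; its position and velocity are left untouched.
    idx = np.argmin(np.abs(x))
    m[idx] = M_HEAVY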

- are you sure that the density distribution is *not* x^{-1/2}? In other
words, what are the error bars on the data points inside x=1?

I was pretty sure, because I've watched a movie of it, and because I have looked at a number of these different sims with different particle numbers and different initial conditions. To quantify this, however, I used the last 10 steps of the ultra-long run (stage 1 of the dynamical friction experiment: 20001 particles and 700 ATUs) to estimate the error bars. To avoid doing this run 10 times I am assuming something like ergodicity here, i.e. that a time average is equivalent to an ensemble average. I am plotting the mean, but the error bars are just the standard deviation, not the error on the mean (no 1/sqrt(n)).
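In code, the error-bar estimate is roughly the following. This is a sketch under assumptions: the snapshot filenames, the bin edges, and the column holding the positions are all made up for illustration.

    import numpy as np

    # Assumed: the last 10 snapshots of the long run, one file per step.
    snapshots = [f"stage1_step{i:04d}.dat" for i in range(690, 700)]
    bins = np.linspace(0.0, 5.0, 51)          # |x| bins (range is assumed)
    centers = 0.5 * (bins[:-1] + bins[1:])
    widths = np.diff(bins)

    profiles = []
    for fname in snapshots:
        x = np.loadtxt(fname)[:, 0]           # assumed: column 0 is position
        counts, _ = np.histogram(np.abs(x), bins=bins)
        profiles.append(counts / widths)      # number density per bin

    profiles = np.array(profiles)
    mean = profiles.mean(axis=0)
    sigma = profiles.std(axis=0)              # std. dev., not sigma/sqrt(10)

    for c, rho, err in zip(centers, mean, sigma):
        print(f"{c:6.3f}  {rho:10.2f} +/- {err:8.2f}")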


- the yellow and red runs are the ones that seem to show a clump in the middle; they're also the ones in which the velocity gradient (Hubble constant) is smallest in the initial state near the middle. I wonder if these could be related?

I am a little confused by this statement... My interpretation of the phase space diagram is that the blue and green are already collapsing, the yellow is still in an expansion phase and has not turned around yet, and the red is collapsing but some unnamed influence has slowed the particles in the center relative to the ones on the outside. Thus I would have said that the Hubble constant was greatest in the case of the yellow, i.e. the over-density is having a harder time recollapsing, so I am obviously missing something. However, I am sure that the slope in phase space is the key point here. It is a true statement that the yellow and red each form two individual objects that later merge, which does not happen in the case of the green and blue. My hypothesis is that if the slope is continually steepening toward zero, there will always be one object, but if there is an inflection point then there will be two objects that later merge to form the final product.
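One way to make that hypothesis testable would be a crude check like the one below: look for a sign change in the curvature of the initial v(x) relation. The arrays are assumed to come from the initial conditions of each run; the tolerance is arbitrary.

    import numpy as np

    def has_inflection(x, v, tol=1e-8):
        """Crude test: does the curvature of v(x) change sign?"""
        order = np.argsort(x)
        x, v = x[order], v[order]
        slope = np.gradient(v, x)           # dv/dx
        curvature = np.gradient(slope, x)   # d2v/dx2
        signs = np.sign(curvature[np.abs(curvature) > tol])
        return bool(np.any(signs[:-1] != signs[1:]))

    # Hypothesis: runs whose initial v(x) has an inflection (yellow, red)
    # form two objects that later merge; runs without one (blue, green)
    # collapse to a single object.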

- I agree that the blob in the red and yellow runs sure looks like it would create a constant-density core, inconsistent with the x^{-1/2} behavior.

I don't think either of them is consistent with x^{-1/2}. Binney had this to say about mergers:

It is not evident how to extend eq. (9) [which is the eqn for the maximum phase space density] to the three-dimensional case. This is unfortunate, for its implication that the highest phase-space densities are associated with the most massive objects, is unexpected. It also needs clarification: it does not apply to objects that form through the merger of virialized pieces (11).

It seems he expected merger products to have a smaller maximum phase space density.
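One rough way to test that expectation with these 1D runs would be to coarse-grain the (x, v) plane and compare the maximum binned density between a merger run (yellow or red) and a single-collapse run (blue or green). The sketch below is one such estimate; the filenames, column layout, particle mass, and bin count are assumptions.

    import numpy as np

    def max_phase_space_density(x, v, mass_per_particle=1.0, nbins=100):
        """Maximum coarse-grained phase-space density on the (x, v) plane."""
        counts, xedges, vedges = np.histogram2d(x, v, bins=nbins)
        cell_area = np.diff(xedges)[0] * np.diff(vedges)[0]
        return mass_per_particle * counts.max() / cell_area

    # Hypothetical comparison between a merger run and a single-collapse run.
    for fname in ["red_final.dat", "blue_final.dat"]:
        data = np.loadtxt(fname)            # assumed columns: x, v
        print(fname, max_phase_space_density(data[:, 0], data[:, 1]))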
