Wednesday, June 30, 2010

Warm Initial Conditions I: density plots.

I have started studying the effects of adding small random deviations to the positions and/or velocities of the particles in the 1D collapse study. What I am finding is that the $r^{-1/2}$ scaling behavior is lost if the particles have some randomness in their initial conditions. The plots look fairly similar to one another, but for completeness, I will show all three.

1) In this simulation, I have perturbed the positions of the particles by 5% of the inter-particle spacing. The perturbations are drawn from a normal distribution. The final density profile at 700 ATUs looks like this:

2) In this simulation I have left the positions evenly spaced, but have perturbed the velocities by 5% of the velocity amplitude (the value supplied by the user to the code). The density profile after 700 ATUs looks like:


3) And finally, applying both perturbations at the same time:


The most obvious thing about these (other than the departure from $r^{-1/2}$) is the density spikes that appear in the exterior. I am not sure what kind of structure these are due to, but since these curves are averages over the 6 runs, I will have to check whether the spikes appear in all six at the same location, or whether the behavior is dominated by a single run. Later on I will also post movies, and the distributions of the particle energies... It might also be interesting to check whether these get closer to the scaling seen in the cold runs if I put in (e.g.) 1000 times smaller perturbations.
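For concreteness, the perturbations in 1)-3) amount to something like the sketch below. This is just the idea, not the actual initial-condition code; it assumes numpy, that the 5% figures are standard deviations, and that dx and v_amp are placeholders for the inter-particle spacing and the user-supplied velocity amplitude.

import numpy as np

# x, v: the cold, evenly spaced positions and user-specified velocities (placeholders)
# dx: the uniform inter-particle spacing; v_amp: the user-supplied velocity amplitude
frac = 0.05                                                      # the 5% knob; shrink 1000x for the comparison test
x_warm = x + np.random.normal(0.0, frac * dx, size=x.shape)      # case 1)
v_warm = v + np.random.normal(0.0, frac * v_amp, size=v.shape)   # case 2); apply both for case 3)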

I have a separate line of investigation that I will tell you about soon, in which I have perturbed the masses of the particles but not the initial positions or velocities. I wanted to see whether the number density or the mass density would scale as $r^{-1/2}$ (initial plots suggest the number density does not), and also whether the mass distribution of the particles in each bin remains roughly constant. I thought of compensating the velocities to offset the mass perturbations (to keep the kinetic energy of each perturbed particle the same), but the potential energies would still be perturbed, so I didn't know quite what I would learn from that and didn't attempt it.
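(In equations, the compensation I had in mind is just this: the kinetic energy is $\frac{1}{2}mv^2$, so a mass perturbation $m \rightarrow m(1+\epsilon)$ is offset by rescaling $v \rightarrow v/\sqrt{1+\epsilon}$, which keeps $\frac{1}{2}mv^2$ fixed; but the potential energy terms, which also scale with $m$, would remain perturbed, which is why I wasn't sure what the comparison would teach me.)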

Cold initial conditions with errors

First I wanted to demonstrate that I have sorted out the normalization problem I had: the density no longer depends on the binning, as shown by this plot, which has the 10-particles-per-bin and 300-particles-per-bin versions plotted on top of one another:


Next, the cold initial condition runs are no longer contaminated by the warm runs, and they have also been run to 700 ATUs instead of 400. The plots are below. The color coding used here is the same as I used initially (in this post).


A few observations: while it follows the same density scaling, the red run seems to have many fewer particles in the inner core of the halo than the others (this can be seen because its innermost data point, which contains the innermost 10 particles, lies significantly farther out than in the other runs). The red and the yellow, which are the scenarios that form via a merger, seem to have a more rounded relation than the blue and green, which form monolithically. If the particles are binned more coarsely, however, this trend seems less apparent. Here are the x>0 portions of the 4 runs plotted together, with coarser binning.


You will see that run 4 has now had a chance to virialize and looks much like the other runs. As yet I have not established the extent to which these quantitatively follow an $r^{-1/2}$ scaling relation, but by eye (with coarse enough binning) they seem to.
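One quick way to quantify it, when I get to it, would be a straight-line fit in log-log space. A sketch only (assuming numpy, with r and rho as placeholder arrays for the binned radii and densities, and a made-up fitting range):

import numpy as np

# r, rho: binned radii and densities (placeholders); restrict to the range where the power law should hold
mask = (r > 1e-3) & (r < 1e-1)               # hypothetical fitting range
slope, intercept = np.polyfit(np.log10(r[mask]), np.log10(rho[mask]), 1)
print("best-fit slope:", slope)              # hoping for something near -0.5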

Monday, June 28, 2010

Pushing farther in

In response to Scott's inquiry:

For x=0.001 x_max (about the inner edge of your plots) and N=10^4 there should still be 300 particles or so. So why can't you carry the plots to smaller distances?

Indeed, you'll see that the numbers of particles per bin that I selected (in the titles of the plots) were 100, 200, and 300. There is no reason why you could not push this to smaller numbers, although the error bars get big. Here are some examples (with error bars):



So it seems the power-law behavior continues inward for at least another decade or more. This comparison also shows that I DO have an issue with the normalization. I was being clever and using the "normed=True" option when making the histogram, because the particles all have mass 1/N, but I think in the case of uneven binning it is doing something unexpected. I can fix this; I just wanted to confirm that I did in fact have a problem.
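The fix should just be to skip normed=True and do the normalization by hand. A minimal sketch (assuming numpy; x and edges are placeholders for the particle positions and the uneven bin edges):

import numpy as np

# x: particle positions; edges: the (uneven) bin edges (both placeholders)
counts, edges = np.histogram(x, bins=edges)
N = float(len(x))
# each particle has mass 1/N, so mass per unit length in a bin is (counts/N)/width,
# which is correct no matter how unevenly the bins are spaced
density = counts / (N * np.diff(edges))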

Incidentally, these error bars come from six separate runs with the same initial conditions (in principle, though I mucked it up, see below), except that they differ by a couple of particles in the total number. I did this because the sim is deterministic. I wasn't sure if the numerical noise would duplicate itself or cause some variation in the long term, but changing the number of particles by 3 (out of 10k) seemed to be a reasonable way to do an "independent" run of the same initial conditions.

Sadly, I got too ambitious and ran the warm and the cold initial conditions at the same time. It should have worked, because the scripts were out of phase, and it seemed like the likelihood that they would BOTH try to copy the mkinit file over and compile the code at the same time was extremely small. But I goofed something up, because the runs seem to have gotten all mixed up with one another. So these error bars are probably overestimated, because they come from computing the variance among 6 runs, some of which started cold and some warm. I am re-running in a less "clever", more straightforward way now. On the bright side, I can (preliminarily) confirm that the warm initial conditions do not seem to yield this r^-0.5 behavior.

Scott's next questions:

It looks like all of your runs show x^{-1/2} behavior except r4. Any idea what's different about this one?

It started out expanding, rather than collapsing. My best guess is that it's just not finished yet. I am running it for a LONGER time to confirm this hypothesis. If not, then we have an interesting issue, because it's NOT due to the merger; the red run has a merger too.

How long do these take and how much further could you push N?

The run I am plotting here took around 67 minutes. Walter says the code scales (if I recall correctly) pretty close to N log N. But it's not so easy to answer your question, since the more particles there are, the longer it takes to finish virializing. But if we wanted to run one for, say, a week, then the answer is: quite a lot more.
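As a rough number, taking the N log N scaling at face value and ignoring the fact that bigger runs need more time to virialize: scaling the 67-minute, $N=10^4$ run up to $N=10^6$ gives roughly $67\,\mathrm{min}\times\frac{10^6\log 10^6}{10^4\log 10^4}\approx 67\times 150\,\mathrm{min}\approx 10^4\,\mathrm{min}\approx 7$ days, so a week-long run would buy us about two orders of magnitude in particle number.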

Thursday, June 24, 2010

Binning with same number of particles.

Here are the plots where each bin has the same number of particles in it. Bin "centers" are plotted at the mean of the bin edge values. I ran it for a few different choices of the number of particles per bin; here are the plots. These look much the same as the logarithmic binning example. We will start to see a difference when my runs finish and I can put some error bars on the plots (this binning will have nicer error properties).





This is the big-long run with 20k particles at 700 ATUs. I'll do the plots for the initial condition test tomorrow (once my redundant runs have finished, so I can put on error bars). I can't decide if there is something wrong with the normalization; I'll look at it again tomorrow.
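For reference, the equal-particles-per-bin edges are easy to build by sorting. A sketch of the idea, not my actual plotting script (assuming numpy; x and n_per_bin are placeholders):

import numpy as np

# x: particle positions; n_per_bin: desired number of particles per bin (placeholders)
r = np.sort(np.abs(x))
edges = r[::n_per_bin]                         # every n_per_bin-th particle marks a bin edge
if edges[-1] < r[-1]:
    edges = np.append(edges, r[-1])            # close the final (possibly partial) bin
counts, _ = np.histogram(r, bins=edges)
density = counts / (len(r) * np.diff(edges))   # each particle carries mass 1/N
centers = 0.5 * (edges[:-1] + edges[1:])       # bin "centers" at the mean of the edge values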

Testing LaTeX

Testing the LaTeX gadget I just installed:

$\frac{dy}{dx}=\sqrt{1+x} $

Okay, let's see.

Wednesday, June 23, 2010

Initial Conditions 3: with better binning.

I used the log binning scheme to look at the 4 runs with different initial conditions. Here is the plot at the end of the run:


The spikes in the yellow are due to the patchy nature of the final product. If I run these runs longer, this might go away. This is still log binning; I'll work on uneven binning with the same number of particles per bin next.

To answer Walter's question in more detail ("You have still not quite answered my technical question. For example, how do you assign the position of the bin? It has a range, not just one value, so which one do you pick for the plot?"):

For the log plot I used log10((10**edge1 + 10**edge2)/2.); for the linear plot, just the bin center. I did not use the positions of the particles themselves to determine the abscissa, though I easily could.
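In array form, with log_edges holding the bin edges in log10 space (the name is a placeholder), that is just:

import numpy as np

# log_edges: the bin edges in log10(x) space (placeholder name)
lin_edges = 10.0 ** log_edges
centers = np.log10(0.5 * (lin_edges[:-1] + lin_edges[1:]))   # log10 of the midpoint in linear space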

Tuesday, June 22, 2010

r**-1/2 recovered

Walter's e-mail set me onto the right track: the r^-1/2 behavior lies farther into the interior of the halo than where I was plotting in my previous studies. I changed the binning to have logarithmic spacing, and what I find is that (indeed) over several decades there is power-law behavior, r^-1/2.

My first attempt to plot looked like this:

This indicates that there does seem to be power-law behavior, but as Walter pointed out, the center has drifted from x=0 (there are two separate density curves, for particles with positive and negative position coordinates). My crude, fast way of correcting this was to find the x position of the density maximum, shift all the coordinates by that amount, and re-make the histograms. When I did this, I got this plot:


As you can see, this is still not perfect, but it is improved. Anyhow, I think it probably now agrees with what Walter's students were finding.
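For the record, the crude recentering is only a few lines. A sketch (assuming numpy; x is a placeholder for the raw positions and the 200 bins are an arbitrary choice):

import numpy as np

# x: raw particle positions (placeholder)
counts, edges = np.histogram(x, bins=200)          # coarse histogram of positions
idx = np.argmax(counts)
peak = 0.5 * (edges[idx] + edges[idx + 1])         # center of the densest bin
x_centered = x - peak                              # put the density maximum at x = 0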

Regarding the question "Btw, I haven't yet got an answer to my question of how you obtain the density": sorry, the method is pretty simple. I make a histogram of the particle positions, normalize it (because each particle has mass 1/N), and also divide by the bin width to get mass per unit length.

Friday, June 11, 2010

Answers to some questions

The delay was caused by my getting distracted for a while on another project. I am back to this one now. I'll attempt to answer a few of Scott's questions.

- why is the minimum energy in the histograms 0.5? Is this obvious?


I have an answer for this one, but it's not a good answer. The reason is that the quantity "p", which I have been assuming is the potential energy, has a minimum value around 0.43ish. I poked around and found that this is reasonably independent of the number of particles (it's the same for 1k as for 20k), and it also does not care whether there is an odd or even number of particles. I can't think of a physical reason for this floor in the potential energy, so I am assuming that I am mis-interpreting the output of the code.
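One check I could do: in the usual 1D sheet picture the pair potential grows linearly with separation, so (up to whatever constant the code uses, which I am not sure of) the per-particle potential should look like the sum below. A sketch only, assuming numpy; x and m are placeholder arrays and the normalization is an assumption:

import numpy as np

# x: positions, m: masses (all 1/N for equal-mass runs); placeholder arrays
# assumes the 1D sheet convention where the pair potential is proportional to |x_i - x_j|;
# the overall constant depends on the code's units, so compare shapes, not absolute values
phi = np.array([np.sum(m * np.abs(x - xi)) for xi in x])
# if the code's "p" tracks phi up to a constant, then p really is the potential
# and the ~0.43 floor needs a physical explanation rather than a bookkeeping one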


- do the animations of the energy histograms start from t=0? If so, shouldn't there be some initial stage in which the distribution is changing rapidly (violent relaxation)?

I'm not sure what you mean by t=0, but the animations begin when I release the heavy particle (which is done by replacing one of the light ones, but keeping its position and velocity). I release the particle a LONG time after the initial conditions, so if you were looking for the initial wind-up, that happened long before. The massive particle does not seem to have a period of rapid energy exchange with its neighbors. In case you were interested in stage 1, I made you a movie of that here:





One thing to notice here is that the aforementioned floor in p seems to get established later on: there are frames in here where the minimum total energy (and thus the minimum p) is definitely smaller than the 0.4ish that I mentioned. One hint about these movies: when paused, you can grab the little progress bar and drag it back and forth to see particular frames.

- are you sure that the density distribution is *not* x^{-1/2}? In other
words, what are the error bars on the data points inside x=1?

I was pretty sure, because I've watched a movie of it, and because I have looked at a number of these different sims, with different particle numbers and different initial conditions. To quantify this, however, I used the last 10 steps of the ultra-long run (stage 1 of the dynamical friction experiment, 20001 particles and 700 ATUs) to estimate the error bars. To avoid doing this run 10 times, I am assuming something like ergodicity here: that a time average is equivalent to an ensemble average. I am plotting the mean, but the error bars are just the standard deviation, not the error on the mean (no 1/sqrt(n)).
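In code, the time-average trick is just a stack of per-snapshot profiles. A sketch, where profiles is a hypothetical array of shape (10, n_bins) holding the densities from the last 10 outputs, all computed with identical bins:

import numpy as np

# profiles: hypothetical (10, n_bins) array of density profiles from the last 10 snapshots
mean_density = profiles.mean(axis=0)
scatter = profiles.std(axis=0)    # plain standard deviation, NOT divided by sqrt(10)
# the mean is what gets plotted as the points, the scatter as the error bars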


- the yellow and red runs are the ones that seem to show a clump in the middle; they're also the ones in which the velocity gradient (Hubble constant) is smallest in the initial state near the middle. I wonder if these could be related?

I am a little confused by this statement... My interpretation of the phase space diagram is that the blue and green are already collapsing, the yellow is still in an expansion phase and has not turned around yet, and the red is collapsing but some unnamed influence has slowed the particles in the center relative to the ones on the outside. Thus I would have said that the Hubble constant was greatest in the case of the yellow, i.e. the over-density is having a harder time recollapsing. I am obviously missing something. However, I am sure that the slope in phase space is the key point here. It is a true statement that the yellow and red each form 2 individual objects that later merge, which does not happen in the case of the green and blue. My hypothesis is that if the slope is continually steepening toward zero, there will always be 1 object, but if there is an inflection point then there will be two objects that later merge to form the final product.

- I agree that the blob in the red and yellow runs sure looks like it would create a constant-density core, inconsistent with the x^{-1/2} behavior.

I don't think either of them is consistent with x^-1/2. Binney had this to say about mergers:

It is not evident how to extend eq. (9) [which is the eqn for the maximum phase space density] to the three-dimensional case. This is unfortunate, for its implication that the highest phase-space densities are associated with the most massive objects, is unexpected. It also needs clarification: it does not apply to objects that form through the merger of virialized pieces (11).

It seems he expected merger products to have a smaller maximum phase space density.

Thursday, June 3, 2010

Initial Conditions 2: Cold (follow-up)

Walter recommended that I check whether the r^-1/2 behavior might be recovered more robustly if I increased the particle number, so these 4 runs have 10k particles rather than 1000. Sorry about the r^-2; I was mixing that up with the scaling in another project, which was simply a mistake on my part. I left it in the plots for comparison, however, because I'm not finding clear power-law behavior in the density. Here are the initial conditions. They are the same as before, save for the number of particles.


To answer Walter's question about how I found the middle: I assumed it was at zero. I checked this by plotting the density and checking that the spike lined up with zero, at least for the binning I chose (bins, because I'm using a histogram tool to compute the density; the density here is the number of particles per unit length).

Here is the final state, after 400 ATUs.


I wanted to see if, in this run, the phase space plots all initially converged as in the previous case. It looks identical to the plot in the last post, so I'll spare you; you can take my word for it.

However, in poking around this simulation I noticed something interesting; take a look at this plot from 100 ATUs:



What struck me is that I could see windings, but they seemed fat in the orange curves. Here's a zoom into the middle part: indeed, the particles seem to be doing a similar winding behavior, but as a clump. Maybe you guys have seen this before, but I thought it was interesting and pretty different from all the behavior I'd seen previously.


I checked the other runs: the green and the blue do not seem to have this behavior, but the red had something that looked related:


The green and the blue had winding too, but in a single thin stream of particles, much as is seen in the Binney paper. Here, I'll just show you:


See: a single winding, all the way in. I'm pretty sure this has to do with the fact that the red and the yellow both form two blobs in phase space that then merge together. We are probably seeing something related to the final merger. Anyhow, I did some spot checking to make sure that (e.g.) the blue never does anything like this; the answer seems to be that it doesn't, though I don't know for sure. There are some funky things going on too... for example:


Anyhow, it might be a matter of mild concern that we don't get to the r^-1/2 behavior. I've run stage 1 of the dynamical mass experiment nearly 2x as long; I thought maybe it was just a matter of time, but it seems not: the simulations have reached their final density distributions. Here is the run with 20k particles, out to 700 ATUs.
Same kind of shape.

Wednesday, June 2, 2010

Energy distribution movies

I made some stills of the energy distribution of the other particles, and they were unilluminating because there were virtually no differences. So then I made a movie to see what is going on. Here is the first run (the gappy one that we originally saw):




And here is the movie of the second run:




You will note that it is virtually identical, though there may be just a bit more structure in the first run than in the second. The weird initial feature at 2.7ish is inherited from the first stage (which was the same in both runs); it has nothing to do with replacing the particle.

The shift in the behavior of the heavy particle is related to the disappearance of this weird feature in both of the runs. Probably, if I were to run stage 1 longer, this feature would disappear before the heavy particle gets introduced, and different heavy particles would exhibit much more similar behavior to one another. The feature is moderately long-lived (many crossing times); it might be interesting to make a movie of stage 1 to see how it arises. Since I'd like to run stage 1 for longer anyhow, I can start it from 0 again and dump output often enough to make a movie.