Thursday, December 16, 2010

Different initial density profiles

I found it very tricky to get these working.  They are not all working yet.  But here are a few that did, with 10k particles, 6 runs of each.  The initial velocities are set so that there is a coherent collapse going on, the same as for the equally spaced particles, namely:

shot.V(n) = -V*sin(shot.X(n));

These runs take a long time somehow. Anyway, what happened is sort of interesting, I think. Here is a plot of the initial conditions; these are all power-law densities $\rho = A x^a$. The slopes I picked to test out were $a = -0.5, -0.8, -0.3, -2.0, +0.5$.






I have not plotted the -2.0 because I have not yet succeeded in getting that set of initial conditions to run at all, and I have not diagnosed what the problem could be. So, the final density profiles are here:








So, my interpretation of this is as follows: -0.8 seems to stay put at its initial slope (as, obviously, does -0.5), whereas -0.3 and +0.5 seem to drift to the -1/2 attractor. However, it seems unlikely that there is anything special about the particular value -0.8. The big difference I see is that, since I fix the initial spatial interval (which thus determines the normalization A), for a fixed number of particles the -0.8 initial configuration extends much farther into the center than the -0.3 and +0.5 ones. But all the runs collapse inward, so that in the final configuration they all have lots of particles interior to $10^{-3}$. My guess is that if there are already particles in the interior following a different power-law slope, then apparently they stay there; but if there are no particles in there to begin with, then when particles do get there they exhibit the $r^{-1/2}$ behavior. I am pretty sure that adding a bunch more particles to the -0.3 run would make the initial configuration extend farther inward, and then we could test this. I might make it -0.2 too, so it's easier to distinguish from -0.5 at the end.
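As an aside, laying down equal-mass particles on a pure power law can be done by inverting the enclosed-mass function. Here is a minimal numpy sketch of that idea (the interval L and the velocity amplitude V are placeholders, not the values mkinit.cc actually uses); note the recipe requires $a > -1$, since for $a \le -1$ the enclosed mass diverges at the origin, which might conceivably be related to the trouble with the -2.0 case.

import numpy as np

def powerlaw_ics(N, a, L=1.0, V=1.0):
    # place N equal-mass particles so that rho(x) = A x^a on (0, L]
    # by inverting the enclosed mass M(<x) ~ x^(a+1); requires a > -1
    m = (np.arange(N) + 0.5) / N      # equal enclosed-mass fractions
    x = L * m**(1.0/(a + 1.0))
    v = -V*np.sin(x)                  # coherent collapse, as in shot.V(n)
    return x, v

x, v = powerlaw_ics(10000, -0.8)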

Incidentally, I also did a -0.8 run where the initial conditions were expanding rather than contracting (but the initial density configuration looks identical to what you see in the first figure).  This is what you get:


So the stability of the inner slope does not seem to care about what the initial velocities are doing, as long as there is some long-range coherence (and not thermal motion between the particles). Ignore the spike; my code was having trouble recentering one of the 6 runs. I might try running some warm initial conditions on these non-uniformly-spaced runs, to see if the profile still flattens itself out in the central region.


Monday, November 15, 2010

Zero velocity is not acting cold

I was able to get a run with 10k particles to completion. This run perturbs the initial positions of the particles by 5% of the interparticle spacing and sets all the velocities equal to zero. So far only two of the 6 runs have finished, but that is enough to make a density plot (ignore the huge error bars for now). Here it is.

What you should take away from this is that it's beginning to look like the $r^{-1/2}$ scaling is tied somehow to the fact that the particles were initially on a grid, rather than to the fact that they were initially cold. Hopefully I will soon have results from the runs with an initially non-uniform density profile, but at this point I do not expect them to differ qualitatively from the results here.

Friday, November 12, 2010

Zero velocity should definitely be "cold"

I am very frustrated with the code lately, because I don't exactly understand the mechanism that is causing it to fail. I will describe the trouble in more detail below. However, I have been able to get the code to run for a certain number of particles near 1k (which is not nearly enough to test what I wanted to test). Ultimately I would like to test how the initial density profile affects the final density profile that the system settles to; however, I would like to make sure to start with COLD initial conditions. I was feeling concerned because the runs that we had done before, where the regular spacing of the particles was jittered around a bit, displayed results that were qualitatively very similar to the warm simulations: they did not exhibit the $r^{-1/2}$ scaling. I figured that one way to ensure that the particles are cold would be to give them zero velocity. Before I tried $v=0$ on groups of particles with different initial density profiles, I thought it would be a good idea to test that the $r^{-1/2}$ scaling is reproduced on the equally spaced particles (which I fully expected, since we tried both positive and negative initial velocities for the equally spaced particles and both went to the $r^{-1/2}$ scaling). Here is the run with 1004 particles.




There are not quite enough particles to see the inner scaling, but it looks roughly correct, if a little soft in the middle. I also wanted to see whether setting the velocities to zero would cause the runs with position jitter to exhibit $r^{-1/2}$ scaling rather than looking like the warm runs. Here's the plot:


Again, there are not really enough particles to be able to tell, but it does look a little softer, still qualitatively more like the warm runs than the cold ones... If I could run more particles it would be easier to say.

Why aren't there more particles, you may be asking! Well, for days my scripts have been hanging, and I have traced the problem to the fact that, for these initial conditions, for some numbers of particles the code simply runs forever without ever reaching the first time dump. For example, I used to do runs (in order to get the error bars) with 1004, 1007, 1010, 1013, 1016, and 1019 particles. I found that for this mkinit.cc initializer, 1004 and 1007 are okay (they run in about 22 seconds), but 1010 and 1013 are not okay (they never finish running). I am still looking into what is happening, but I think I will also send Walter the mkinit.cc code and see if he reproduces the problem on his version.

Friday, October 29, 2010

The supposed thermodynamic limit

According to the Kiessling paper, there is an exact thermodynamic limit (computed by Rybicki) that these systems should relax to, whose density profile is given by the form $\rho(x) = a\,\mathrm{sech}^2(2ax)$. I wanted to see if any part of the profile matched this form, and if so, at what scale it departed from it and started scaling like $r^{-1/2}$. My reasoning was that since $\mathrm{sech}^2(x)$ rolls over, somewhere locally it looks vaguely like $r^{-1/2}$. If that happened to be in the range we were studying, and we just were not probing far enough in for it to go flat, then we would have found the explanation. Here are the fits.



For what it's worth, I have fit all the data here, not just the scaling-law portion in the interior. I also tried fitting just the same range as I'd used before to fit $r^{-\alpha}$; the fit is equally bad.
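For reference, the one-parameter fit amounts to something like this scipy sketch (x_bins, dens, and sigma are placeholder stand-ins for my binned radii, densities, and error bars):

import numpy as np
from scipy.optimize import curve_fit

def rho_rybicki(x, a):
    # thermodynamic-limit profile, rho(x) = a sech^2(2 a x)
    return a / np.cosh(2.0*a*x)**2

x_bins = np.linspace(0.01, 1.0, 50)   # placeholder binned radii
dens = rho_rybicki(x_bins, 0.8)       # placeholder density data
sigma = 0.05*dens                     # placeholder error bars
popt, pcov = curve_fit(rho_rybicki, x_bins, dens, sigma=sigma, p0=[1.0])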

Just as an experiment, I tried fitting the amplitude separately from the argument of the sech, $\rho(x) = a\,\mathrm{sech}^2(bx)$, but this fared no better, see here:




I noticed that this new functional form qualitatively resembled what was going on in the warm data much more. I decided to try fitting the warm data, because it might be interesting if the warm data were reaching this so-called thermodynamic limit while the cold data were somehow being prevented from doing so. The results were disappointing, I must say. Here are the one- and two-parameter fits for the warm data:






So, I guess it looks like none of our simulations agree with the Rybicki derivation of the eventual density profile.

Wednesday, October 20, 2010

Algebra

I have repeated the calculation a couple of times. I have not used anything in the notes except equation 2 and the postulated form of $x(\mu,t)$. I am finding a different power of $t$ than the two of you. I am bringing it up because I think it affects the conditions under which equation 3 is solvable. Here is my calculation; you can click on it to enlarge. Can either of you spot a goof?

Thursday, October 14, 2010

Looking at Poisson error bars

So far, the fits to a power law have all used error bars obtained by computing the dispersion among the 6 runs for each set of initial conditions. I have now used Poisson error bars to do the same thing, and here I present a comparison. For one of the cold runs I have studied, here are the fits to a power law using the two different sets of error bars:



These look visually equivalent, though the reduced chi-squared value is much smaller for the Poisson error bars. In order to compare the two sets, I made the following plot:

Here it is clearly apparent that the Poisson error bars are systematically larger than those obtained using the dispersion among the runs. Just to be very specific, here is a snippet of code to illustrate how these two error bars were computed:

import numpy as N

# this part is inside a loop ii over the 6 runs
d1[:,ii] = dens1                    # these are the densities
d2[:,ii] = dens2
ep1[:,ii] = dens1/N.sqrt(h1[0])     # Poisson errors for the individual run
ep2[:,ii] = dens2/N.sqrt(h2[0])     # h1[0] is the number of particles in the bin

# Some statistics and output file
avgd1 = d1.mean(axis=1)             # this is the average density of the 6 runs
avgd2 = d2.mean(axis=1)
if poisson == 0:
    stdd1 = d1.std(axis=1)          # this is the std. dev. of the 6 densities for each bin
    stdd2 = d2.std(axis=1)
else:
    stdd1 = N.sum(ep1*ep1, axis=1)  # this step combines the Poisson error
    stdd2 = N.sum(ep2*ep2, axis=1)  # bars from each run to get the error
    stdd1 = N.sqrt(stdd1)/N.sqrt(6) # on the mean
    stdd2 = N.sqrt(stdd2)/N.sqrt(6)


I have also done the K-S test for these two cases; here are the plots.



In conclusion, it looks to me like the discrepancy between the chi-squared statistic and the outcome of the K-S test is not resolved by using Poisson errors instead of the run-to-run dispersion; indeed, the Poisson errors are in fact larger than the dispersion errors, which causes the chi-squared statistic to improve. At the percent level, the choice of error bar affects the fitted value of the exponent, but this is not an especially huge effect, obviously.

Tuesday, September 14, 2010

long+short wavelength velocity perturbations

We saw before that if the initial conditions are cold, the $r^{-1/2}$ scaling behavior is observed, for a number of different initial velocity structures. We also saw that if there is jitter in the initial positions and/or velocities, this scaling behavior is lost. In this study, I have added more structure to the initial velocities of the particles, which now have both a long and short wavelength mode. The positions and long wavelength portion of the velocities are

shot.X(n) = n*d-Pih;
shot.V(n) =-V*sin(shot.X(n));

Here Pih is $\pi/2$. To this I have added a term with one tenth the amplitude but higher frequency, i.e. multiple wavelengths in the initial velocity structure of the particles. For example:

shot.V(n)+=-0.1*V*sin(1000*shot.X(n));
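Putting the pieces together, a minimal numpy version of these initial conditions looks something like this (N, d, and V are placeholders for the code's actual parameters):

import numpy as np

N, V = 10000, 1.0             # particle number and velocity amplitude (placeholders)
d = np.pi/N                   # interparticle spacing, so X spans [-pi/2, pi/2)
x = np.arange(N)*d - np.pi/2  # shot.X(n) = n*d - Pih
v = -V*np.sin(x)              # long-wavelength mode
v += -0.1*V*np.sin(1000*x)    # one-tenth amplitude, 1000-lambda high-frequency mode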

I have studied $2\lambda$, $10\lambda$, $100\lambda$ and $1000\lambda$. Below are the resulting plots.



As the wavelength of the high-frequency mode approaches the interparticle spacing, we see that the $r^{-1/2}$ scaling is lost. This is consistent with what we see in the warm simulations, where the jitter could be interpreted as lots of high-frequency components in the initial velocity structure. The plots above are the average of 6 runs, each of roughly 10k particles. Just to make sure, I also ran 1k particles with the $100\lambda$ high-frequency velocity component. I suspected that this should look qualitatively similar to the $1000\lambda$ case with 10k particles. Here it is:


It's kind of hard to tell, because fewer particles means a smaller dynamic range to look at, but qualitatively it seems to be departing from the power law more than the 10k-particle case.

Thursday, July 15, 2010

cold with position jitter

I have re-run the scenario where I define equally spaced positions, define the velocities according to the unperturbed positions, and then add a 20% perturbation to the particle positions. This is in contrast to the run I showed you before, where I perturbed the positions first and used the perturbed positions to define the velocities. The result seems to be consistent with what we were finding with the warm simulations, namely that the slope of the inner profile is flatter than for the cold simulations. If I understood Walter correctly, these conditions are effectively "cold" (though this sort of confuses me), so this is a somewhat surprising result. Here is the plot:


There is a minor caveat: something went wrong with my run, and only 5 of the 6 instances of the initial conditions ran. I don't THINK I have a mix-up in my book-keeping, but it's odd that the run did not continue, and I am re-running from scratch just to double-check.

Thursday, July 8, 2010

k-s clarifications

Here are a few clarifications and responses to Walter's email. I wrote my post latish last night, and was perhaps neglecting a few of the details:

we best take a simulation with equal-mass particles, such that the hypothesis is equivalent to: the number density is proportional to $r^{-\alpha}$ for $r$ less than $R$.

Yes, this is what I did.

Then the K-S distance is just

$D = \max_{r < R} \left| N(<r)/N(<R) - (r/R)^{1-\alpha} \right|$

Indeed, this is exactly what I calculated. I used a canned K-S routine, but I also computed the D statistic by hand and checked that it agreed with what was coming out of the kstest function.
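For reference, computing D by hand amounts to something like this sketch (the radii, truncation R, and trial exponent are placeholders):

import numpy as np

r = np.sort(np.random.uniform(0.0, 0.3, 1000))  # placeholder radii inside R
R, alpha = 0.3, 0.5
ecdf = np.arange(1, r.size + 1) / r.size        # empirical N(<r)/N(<R)
model = (r/R)**(1.0 - alpha)                    # model CDF
D = np.abs(ecdf - model).max()                  # K-S distance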

From your plots it looks like alpha is less than 1/2

What you call alpha is what I called expt in the previous post. The answer to this is that alpha rolls from less than 1/2 over some of the range of the data to greater than 1/2 over other parts. A better way to put it might be that the data are simply not particularly well described by a power law (a result that is seemingly in conflict with the chi-squared result). More on this below.


Also it seems to me that you performed the K-S test for data at r>R, but as you did not explain what exactly you did (no mentioning of R or a similar parameter), I can only guess.

I am not sure if this is what you were concerned about, but I truncated the data I used in the K-S test to include only the range where the power-law scaling behavior is seen. I did the analysis for two different choices of truncation, 0.1 and 0.3, and included only data between 0 and the truncation radius in the analysis. The results were qualitatively similar for the two choices; the plots in the previous post were for 0.3. What you call R was included in my previous post as $x_{\max}$, in the first version of the cdf I used. The second version was done as a sanity check: because the results came out so poorly, I wanted to double-check with a number density function that I was sure was a good fit to the density data (with error bars). It surprised me how much the two disagreed.

I had a chat with Scott about this; he thought that the run-to-run scatter may potentially be substantially greater than just the shot noise. So I will repeat the chi-squared fit using only the shot noise, rather than the variance between runs, and see if the value of the reduced chi-square increases dramatically (suggesting that a power law is not an especially good fit). That would make the results more consistent.

I'll also have a look at the K-S test between two separate runs, to see if the profile is at least something like "universal," even if it is not particularly well described by $r^{-1/2}$.

Wednesday, July 7, 2010

K-S testing.

I tried the K-S test on the data to see if it agrees with an $r^{-1/2}$ density profile. I had a fair bit of trouble with this K-S test stuff. At first I blamed the python implementation, but eventually I tested it by generating normally distributed random numbers and passing it the cumulative distribution function $\mathrm{cdf}(x)=\frac{1}{2}\left[1+\mathrm{erf}(x/\sqrt{2})\right]$, and got reasonable agreement ($(D,p)=(0.0583, 0.885)$ for a sample of 100 numbers). So I am reasonably sure now that it works.
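That sanity check amounts to something like the following (assuming the canned routine was scipy's kstest, which is my guess at the implementation):

import numpy as np
from scipy import stats

sample = np.random.normal(size=100)
D, p = stats.kstest(sample, 'norm')  # normal cdf = (1/2)[1 + erf(x/sqrt(2))]
print(D, p)                          # compare with the (D, p) = (0.0583, 0.885) quoted above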

The bottom line, I guess, is that the K-S test is concluding that the data are probably not drawn from an $x^{-1/2}$ distribution. I searched around the vicinity of -0.5 and found the local maximum of the p value coming out of the K-S test. Here is a plot of the cumulative distribution functions, and the D/p values that come out (for the best scaling). I plotted a scaling of -0.5 for comparison. N.B. the labeling for these models is $\rho(x)\propto x^{-\mathrm{expt}}$.



This apparently means that even for the local maximum value of p, there is only a 2ish percent chance that these data came from an $x^{-\mathrm{expt}}$ distribution.

I should mention at this point that I had trouble deciding what to use for the cdf; there were two logical options, and I tried them both. The plot above was made using $\mathrm{cdf}(x)=(x/x_{\max})^{1-\mathrm{expt}}$, which has the advantage that the curves meet again at 1. The other strategy I tried was to turn the density $\rho(x)$ into a number density function and then divide by the number of points I was using in the K-S test; in other words, $\mathrm{cdf}(x)=\frac{A}{m_p N_{\rm ks}}\frac{x^{1-\mathrm{expt}}}{1-\mathrm{expt}}$. This has the advantage that it makes more sense. Here is the plot I got:

I was surprised this was so much worse, given that at the maximum point the value of the theory cdf was 0.994, but I will explain the difference. In order to get the number density, I needed the amplitude A of the mass density. I did this by performing a least-squares fit to the density data points (with error bars) that I showed you in previous posts. Every time I adjust the value of the exponent, the least-squares fit allows the amplitude A to compensate, to best match the density data (which has been binned, and using the data twice in this way might be a cheat, statistically speaking...). That is why you see the curves jumping around a bit more in these plots. It confuses me, because I thought these theory CDFs differed from one another only in normalization, but I guess there is more to it.
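For concreteness, feeding the first (truncated) cdf to the K-S test looks roughly like this sketch (the positions array, truncation radius, and exponent are placeholders):

import numpy as np
from scipy import stats

def cdf_trunc(x, expt, xmax):
    # cdf(x) = (x/xmax)^(1-expt), normalized so the curves meet at 1
    return (x/xmax)**(1.0 - expt)

positions = np.random.uniform(-0.5, 0.5, 10000)  # placeholder coordinates
r = np.abs(positions)
r = r[r < 0.3]                                   # keep only the power-law region
D, p = stats.kstest(r, cdf_trunc, args=(0.5, 0.3))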

In any event, in the process of doing the least squares fit, I also got a reduced chi square value. Here is a plot of the measured density and the best fit model fixing the exponent to be $-1/2$.

I took this one step further and fit both the amplitude and the exponent together. The plot is visually quite similar, and the parameters were A=0.2422, expt=0.4988, with a reduced chi-square of 0.6912. Taking the circularity of my analysis to the point of the absurd, I then tried a K-S test using both the amplitude and the exponent I got out of the data (although I am vaguely recalling that NRC++ warned that there was a rule against such behavior)... still, it very much distresses me. Take a look:

That p is ridiculous! I guess the only conclusion I can draw here is that my error bars are just way too big, so that the chi-square statistic is basically just telling me about the error bars (?). Otherwise, how can these two statistical tests differ so extremely? I am beginning to lose my faith in statistics... not that I had all that much to begin with.

Wednesday, June 30, 2010

Warm Initial Conditions I: density plots.

I have started studying the effects of adding small random deviations to the positions and/or velocities of the particles in the 1D collapse study. What I am finding is that the $r^{-1/2}$ scaling behavior is lost if the particles have some randomness in their initial conditions. The plots look fairly similar to one another, but for completeness, I will show all three.

1) In this simulation, I have perturbed the positions of the particles by 5% of the inter-particle spacing. The perturbations are drawn from a normal distribution. The final density profile at 700 ATUs looks like this:

2) In this simulation I have left the positions evenly spaced, but have perturbed the velocities by 5% of the amplitude of the velocities (supplied by the user to the code). The density profile after 700 ATUs looks like:


3) And finally, applying both perturbations at the same time:


The most obvious thing about these (other than the departure from $r^{-1/2}$) is the density spikes that appear in the exterior. I am not sure what kind of structure this is due to, but since these curves are averages of the 6 runs, I will have to look into whether the spikes appear in all six runs in the same location, or whether the behavior is being dominated by a single individual run. Later on I will also post movies, and the distributions of the particle energies... It might also be interesting to check whether these get closer to the scaling of the cold runs if I put (e.g.) 1000 times smaller perturbations into the initial conditions.
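For reference, the perturbations in 1) through 3) amount to something like this sketch (N, d, and V again stand in for the code's actual parameters):

import numpy as np

N, V = 10000, 1.0                            # placeholders
d = np.pi/N                                  # interparticle spacing
x = np.arange(N)*d - np.pi/2                 # evenly spaced positions
v = -V*np.sin(x)                             # cold velocities
rng = np.random.default_rng()
x_pert = x + 0.05*d*rng.standard_normal(N)   # 1) 5% position jitter
v_pert = v + 0.05*V*rng.standard_normal(N)   # 2) 5% velocity jitter
# 3) applies both perturbations at once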

I have a separate line of investigation that I will tell you about soon, where I have perturbed the masses of the particles, but not the initial positions or velocities. I wanted to see whether the number density or the mass density would scale as $r^{-1/2}$ (initial plots suggest the number density does not), and also whether the mass distribution of particles in each bin remains roughly constant. I thought of compensating the velocities to offset the mass perturbations (to keep the kinetic energy of each perturbed particle the same), but the potential energies are still perturbed, so I didn't know quite what I would learn from that and didn't attempt it.

Cold initial conditions with errors

First, I wanted to demonstrate that I have sorted out the normalization problem I had. The density no longer depends on the binning, as shown by this plot, which has 10 particles per bin and 300 particles per bin plotted on top of one another:


Next, the cold initial condition runs are no longer contaminated by the warm runs, and they have also been run to 700 ATUs instead of 400. The plots are below. The color coding used here is the same as I used initially (in this post).


A few observations: while it follows the same scaling relation in the density, the red run seems to have many fewer particles in the interior core of the halo than the others (this can be seen because the innermost data point, containing the innermost 10 particles, is significantly farther out than in the other runs). The red and the yellow, which are the scenarios that form via merger, seem to have a more rounded relation than the blue and green, which form monolithically. If the particles are binned more coarsely, however, this trend seems less apparent. Here are the x>0 portions of the 4 runs plotted together, with coarser binning.


You will see that run 4 has now had a chance to virialize and looks much like the other runs. As yet I have not established the extent to which these quantitatively follow an $r^{-1/2}$ scaling relation, but by eye (with coarse enough binning) they seem to.

Monday, June 28, 2010

Pushing farther in

In response to Scott's inquiry:

For x=0.001 x_max (about the inner edge of your plots) and N=10^4 there should still be 300 particles or so. So why can't you carry the plots to smaller distances?

Indeed, you'll see that the numbers of particles/bin that I selected (in the titles of the plots) were 100, 200, 300. There is no reason why you could not push this to smaller numbers, although the error bars get big. Here are some examples (with error bars):



So, the power-law behavior continues inward for at least another decade or more, it seems. This comparison also shows that I DO have an issue with the normalization. I was being clever using the "normed=True" option in making the histogram, because the particles all have mass 1/N, but I think in the case of uneven binning it is doing something unexpected. I can fix this; I just wanted to confirm that I did in fact have a problem.

Incidentally, these error bars come from six separate runs with the same initial conditions (in principle, though I mucked it up, see below), except that they differ by a couple of particles in the total number. I did this because the sim is deterministic. I wasn't sure whether the numerical noise would duplicate itself or cause there to be some variation in the long term, but changing the number of particles by 3 (out of 10k) seemed to be a reasonable way to do an "independent" run of the same initial conditions.

Sadly, I got too ambitious and ran the warm and the cold initial conditions at the same time. It should have worked, because the scripts were out of phase, and it seemed like the likelihood that they would BOTH try to copy the mkinit file over and compile the code at the same time was extremely small. But I goofed something up, because the runs seem to have gotten all mixed up with one another. So these error bars are probably overestimated, since they come from computing the variance among 6 runs, some of which started cold and some warm. I am re-running in a less "clever," more straightforward way now. On the bright side, I can (preliminarily) confirm that the warm initial conditions do not seem to yield this $r^{-0.5}$ behavior.

Scott's next questions:

It looks like all of your runs show x^{-1/2} behavior except r4. Any idea what's different about this one?

It started out expanding rather than collapsing. My best guess is that it's just not finished yet. I am running it for a LONGER time to confirm this hypothesis. If not, then we have an interesting issue, because it's NOT due to the merger; the red run has a merger too.

How long do these take and how much further could you push N?

The one I am plotting took around 67 minutes. Walter says the code scales (if I recall correctly) pretty close to N log N. But it's not so easy to answer your question, since the more particles there are, the longer the system takes to finish virializing. But if we wanted to run one for, say, a week, then the answer is: quite a lot more.

Thursday, June 24, 2010

Binning with same number of particles.

Here are the plots where each bin has the same number of particles in it. Bin "centers" are plotted at the mean of the bin edge values. I ran it for a few different choices of the number of particles per bin; here are the plots. These look much the same as the logarithmic binning example. We will start to see a difference when my runs finish and I can put some error bars on the plots (these bins will have nicer error properties).
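For reference, this equal-count binning amounts to something like the following sketch (x is a placeholder for the particle positions):

import numpy as np

def equal_count_bins(x, n_per_bin):
    # bin |x| so each bin holds n_per_bin particles (the last bin may be ragged)
    xs = np.sort(np.abs(x))
    edges = np.append(xs[::n_per_bin], xs[-1])
    centers = 0.5*(edges[:-1] + edges[1:])        # mean of the bin edge values
    dens = n_per_bin/(xs.size*np.diff(edges))     # particle mass = 1/N
    return centers, dens

x = np.random.normal(scale=0.3, size=10000)       # placeholder positions
centers, dens = equal_count_bins(x, 100)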





This is the big long run with 20k particles at 700 ATUs. I'll do the plots for the initial-condition test tomorrow (once my redundant runs have finished, so I can put on error bars). I can't decide if there is something wrong with the normalization; I'll look at it again tomorrow.

Testing Latex

Testing the LaTeX gadget I just installed:

$\frac{dy}{dx}=\sqrt{1+x} $

Okay, let's see.

Wednesday, June 23, 2010

Initial Conditions 3: with better binning.

I used the log binning scheme to look at the 4 runs with different initial conditions. Here is the plot at the end of the run:


The spikes in the yellow are due to the patchy nature of the final product. If I run these runs longer, this might go away. This is still log binning; I'll work on uneven binning with the same number of particles per bin.

To answer Walter's question in more detail ("You still not quite answered my technical question. For example, how do you assign the position of the bin? It has a range, not just one, so which one do you pick for the plot?"):

For the log plot, I used log10((10**edge1 + 10**edge2)/2.); for the linear plot, just the bin center. I did not use the positions of the particles to determine the abscissa, though I easily could.

Tuesday, June 22, 2010

r**-1/2 recovered

Walter's e-mail set me onto the right track: the $r^{-1/2}$ behavior is farther into the interior of the halo than where I was plotting in my previous studies. I changed the binning to have logarithmic spacing, and what I find is that (indeed) over several decades there is power-law behavior, $r^{-1/2}$.

My first attempt to plot looked like this:

This indicates that there does seem to be power-law behavior, but as Walter pointed out, the center has drifted from x=0 (there are two separate density curves, for particles with positive and negative position coordinates). My crude, fast way of correcting this was to find the x position of the density maximum, shift all the coordinates by this amount, and re-make the histograms. When I did this, I got this plot:


As you can see, this is still not perfect, but improved. Anyhow, I think this probably now agrees with what Walter's students were finding.
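The recentering step amounts to something like this (x is a placeholder for the particle coordinates):

import numpy as np

x = np.random.normal(loc=0.02, size=10000)  # placeholder coordinates
counts, edges = np.histogram(x, bins=200)   # coarse density histogram
peak = np.argmax(counts)
x0 = 0.5*(edges[peak] + edges[peak+1])      # x position of the density maximum
x = x - x0                                  # shift so the peak sits at x = 0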

Regarding the question "Btw, I haven't yet got an answer to my question of how you obtain the density": sorry, the method is pretty simple. I make a histogram of the particle positions, normalize it (because each particle has mass 1/N), and also divide by the bin width to get mass per unit length.
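In code, that density estimate looks roughly like this (x again stands in for the positions):

import numpy as np

x = np.random.normal(scale=0.3, size=10000)        # placeholder positions, mass 1/N each
bins = np.logspace(-4, 0, 60)                      # logarithmic binning
counts, edges = np.histogram(np.abs(x), bins=bins)
dens = counts/(x.size*np.diff(edges))              # mass per unit length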

Friday, June 11, 2010

Answers to some questions

The delay was caused by my getting distracted for a while on another project. I am back to this one now. I'll attempt to answer a few of Scott's questions.

- why is the minimum energy in the histograms 0.5? Is this obvious?


I have an answer for this one, but it's not a good answer. The reason is that the quantity "p", which I have been assuming is the potential energy, has a minimum value around 0.43ish. I poked around and found that this is reasonably independent of the number of particles (same for 1k as for 20k), and it also does not care whether there is an odd or even number of particles. I can't think of a physical reason for this floor in the potential energy, so I am assuming that I am misinterpreting the output of the code.


- do the animations of the energy histograms start from t=0? If so, shouldn't there be some initial stage in which the distribution is changing rapidly (violent relaxation)?

I'm not sure what you mean by t=0, but the animations begin when I release the heavy particle (which is done by replacing one of the light ones, but keeping its position and velocity). I release the particle a LONG time after the initial conditions, so if you were looking for the initial wind-up, that was long before. The massive particle does not seem to have a period of rapid energy exchange with its neighbors. In case you were interested in stage 1, I made you a movie of that here:





One thing to notice here is that the aforementioned floor in p seems to get established later on; there are frames in here where the minimum total energy (and thus the minimum p) is definitely smaller than the 0.4ish that I mentioned. One hint about these movies: when paused, you can grab the little progress bar and drag it back and forth to see particular frames.

- are you sure that the density distribution is *not* $x^{-1/2}$? In other words, what are the error bars on the data points inside x=1?

I was pretty sure, because I've watched a movie of it, and because I have looked at a number of these different sims, with different particle numbers and different initial conditions. To quantify this, however, I used the last 10 steps of the ultra-long run (stage 1 of the dynamical friction experiment, 20001 particles and 700 ATUs) to estimate the error bars. To avoid doing this run 10 times, I am assuming something like ergodicity here, namely that a time average is equivalent to an ensemble average. I am plotting the mean, but the error bars are just the standard deviation, not the error on the mean (no $1/\sqrt{n}$).
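In other words, the error bars were computed along these lines (dens here is a placeholder array holding the 10 late-time binned profiles):

import numpy as np

dens = np.random.rand(10, 50)  # placeholder: 10 late-time density profiles, 50 bins each
avg = dens.mean(axis=0)        # plotted points: the time-averaged density
err = dens.std(axis=0)         # error bars: std. dev., with no 1/sqrt(n)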


- the yellow and red runs are the ones that seem to show a clump in the middle; they're also the ones in which the velocity gradient (Hubble constant) is smallest in the initial state near the middle. I wonder if these could be related?

I am a little confused by this statement... My interpretation of the phase-space diagram is that the blue and green are already collapsing, the yellow is still in an expansion phase and has not turned around yet, and the red is collapsing, but some unnamed influence has slowed the particles in the center relative to the ones on the outside. Thus I would have said that the Hubble constant was greatest in the case of the yellow, i.e. the over-density is having a harder time recollapsing. I am missing something, obviously. However, I am sure that the slope in phase space is the key point here. It is a true statement that the yellow and red each form 2 individual objects that later merge, which does not happen in the case of the green and blue. My hypothesis is that if the slope is continually steepening toward zero, there will always be 1 object, but if there is an inflection point, then there will be two objects that later merge to form the final product.

- I agree that the blob in the red and yellow runs sure looks like it would create a constant-density core, inconsistent with the x^{-1/2} behavior.

I don't think either of them is consistent with $x^{-1/2}$. Binney had this to say about mergers:

It is not evident how to extend eq. (9) [which is the eqn for the maximum phase space density] to the three-dimensional case. This is unfortunate, for its implication that the highest phase-space densities are associated with the most massive objects, is unexpected. It also needs clarification: it does not apply to objects that form through the merger of virialized pieces (11).

It seems he expected merger products to have a smaller maximum phase space density.

Thursday, June 3, 2010

Initial Condition 2: Cold (follow up)

Walter recommended that I check whether the $r^{-1/2}$ behavior might be recovered more robustly if I increased the particle number, so these 4 runs have 10k particles rather than 1000. Sorry about the $r^{-2}$; I was mixing up scaling with that in another project, and it was simply a mistake on my part. I left it in the plots for comparison, however, because I'm not finding a clear power-law behavior in the density. Here are the initial conditions. They are the same save for the number of particles.


To answer Walter's question about how I found the middle: I assumed it was at zero. I checked this by plotting the density and checking that the spike lined up with zero, at least for the binning I chose (bins because I'm using a histogram tool to compute the density, density being the number of particles per unit length).

Here is the final state, after 400 ATUs.


I wanted to see if, in this run, the phase space plots all initially converged as in the previous case. It looks identical to the plot in the last post, so I'll spare you; you can take my word for it.

However, in poking around this simulation I noticed something interesting; take a look at this plot from 100 ATUs:



What struck me is that I could see windings, but they seemed fat in the orange curves. Here's a zoom into the middle part; indeed, the particles seem to be doing similar winding behavior, but as a clump. Maybe you guys have seen this before, but I thought it was interesting and pretty different from all the behavior I'd seen before.


I checked the other runs; the green and the blue do not seem to have this behavior, but the red had something that looked related:


The green and the blue had winding too, but in a single thin stream of particles, much as is seen in the Binney paper. Here, I'll just show you:


See, a single winding, all the way in. I'm pretty sure this has to do with the fact that the red and the yellow both form two blobs in phase space that then merge together; we are probably seeing something related to the final merger. Anyhow, I did some spot checking to make sure that (e.g.) the blue never does anything like this, and it seems like the answer is no, it doesn't, though I don't know for sure. There are some funky things going on too... for example:


Anyhow, it might be a matter of mild concern that we don't get to the $r^{-1/2}$ behavior. I've run stage 1 of the dynamical mass experiment nearly 2x as long; I thought maybe it was just a matter of time, but it seems not: the simulations have reached their final density distributions. Here is the run with 20k particles, out to 700 ATUs.
Same kind of shape.