I tried a heavy particle that was 10x more massive than the little guys, and found that it essentially never falls in, at least not in 400 ATU (arbitrary time units), which is a few tens of crossing times. I think in order to try 100x, I am going to have to run more background particles. One problem with the first experiment (I realized after I ran it) was that the heavy particle was EQUAL in mass to all the little guys combined, which I don't think is all that physically interesting. Anyhow, here is the movie. I warn you it's boring.
I think in order to study this, I'll start making a plot of x vs. t for the heavy particle; that ought to give a more quantitative notion of whether, and how fast, the thing is sinking to the center.
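A minimal sketch of that diagnostic: record the heavy particle's position at each dump and plot x against t. The dynamics below are a toy stand-in (a damped oscillation, which is roughly the signature one would expect if the particle is sinking), not output from the actual shell code; the function name and parameters are made up for illustration.

```python
import math

def toy_heavy_particle_x(t, x0=1.0, tau=200.0, period=25.0):
    """Toy stand-in: a damped oscillation mimicking a slowly sinking particle."""
    return x0 * math.exp(-t / tau) * math.cos(2 * math.pi * t / period)

# Sample at each dump, say every 5 ATU out to 400 ATU.
ts = [float(t) for t in range(0, 400, 5)]
xs = [toy_heavy_particle_x(t) for t in ts]

# With matplotlib installed, one would then do:
# import matplotlib.pyplot as plt
# plt.plot(ts, xs)
# plt.xlabel("t (ATU)")
# plt.ylabel("x (heavy particle)")
# plt.show()
```

If the heavy particle really is sinking, the envelope of the oscillation should shrink with time; if it never falls in (as in the run above), the envelope should stay roughly flat.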
Anyhow, on to a few of Scott's questions. I'll put them in Blue.
It appears that you can handle a lot more particles with Walter's code. Is this mainly because it's in C++ rather than Python?
I really don't think so. If it had been C versus Python, maybe this would be a factor, but my impression is that C++ is comparable in speed to C if one is taking full advantage of the object-oriented programming (which W certainly is). If you are interested in Python versus C, you should talk to Doug, who has looked into this in some detail. He was telling me that sometimes interpreted languages have better optimizations than compiled languages, so the comparison can be complex. In this case, I think the main difference is algorithmic: my original code was computing ALL the crossing times, sorting the whole list, and pushing all the particles at every crossing. In Walter's code, in contrast, each particle has its own time variable, and only the particles near the shell crossing are pushed at all, so they all exist at different times until a dump, at which point they are synchronized. Also, he obviously only computes new crossing times for the relevant shells, and his sorting strategy is quite sophisticated (read: very fast) compared to mine.
Incidentally, I did rewrite my code to maintain a sorted list of crossing times, and only recompute the ones near the crossing (and put them back into the sorted list). It had a few problems I was sorting out, but playing with W's code turned out to be much more fun, so I suspended that effort for the time being. I do think I will go back and sort it out eventually, because my algorithm is independent enough that (even if it is superslow) it will be useful for cross-checking any results that we find surprising.
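For concreteness, here is a hedged sketch of that kind of event-driven scheme, using a heap to keep the crossing events sorted. This is NOT Walter's actual code (or mine): the crossing-time function is a random placeholder, and for simplicity only the crossed shell's own event is recomputed, whereas the real code would also update events for the neighboring shells.

```python
import heapq
import itertools
import random

def next_crossing_time(now, rng):
    """Placeholder for the real physics: when does this shell next cross?"""
    return now + rng.uniform(0.1, 1.0)

def run(n_shells=100, t_end=10.0, seed=0):
    rng = random.Random(seed)
    t_shell = [0.0] * n_shells        # each shell carries its own clock
    counter = itertools.count()       # tie-breaker for events at equal times
    events = []                       # min-heap of (crossing time, tie, shell)
    for i in range(n_shells):
        heapq.heappush(events, (next_crossing_time(0.0, rng), next(counter), i))

    n_pushes = 0
    while events:
        t_cross, _, i = heapq.heappop(events)
        if t_cross > t_end:
            break                     # heap is sorted, so nothing else is due
        # Push only this shell up to its crossing; everyone else stays put.
        t_shell[i] = t_cross
        n_pushes += 1
        # Recompute just this shell's next event and reinsert it.
        heapq.heappush(events, (next_crossing_time(t_cross, rng), next(counter), i))

    # At a dump, one would synchronize: advance every shell to t_end.
    t_shell = [t_end] * n_shells
    return n_pushes
```

The point of the structure is that each crossing costs O(log N) heap work instead of re-sorting the whole list, which is where most of the speedup over my original all-crossings-every-step approach comes from.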
In practice how many particles can you handle?
Quite a large number, it would seem. When W was demonstrating the code, he was running 10k particles for 100 ATU, and that was taking maybe 2 minutes, or maybe 5; I can't really remember. Here is a plot:
More question answering soon. And a third dynamical friction test.