
I was reading about scaling laws in biology, in reference to a clip from Mysterious Island I am planning to use in my class, when I came across this quote from JBS Haldane:

And just as there is a best size for every animal, so the same is true for every human institution (emphasis mine). In the Greek type of democracy all the citizens could listen to a series of orators and vote directly on questions of legislation. Hence their philosophers held that a small city was the largest possible democratic state. The English invention of representative government made a democratic nation possible, and the possibility was first realized in the United States, and later elsewhere. With the development of broadcasting it has once more become possible for every citizen to listen to the political views of representative orators, and the future may perhaps see the return of the national state to the Greek form of democracy. Even the referendum has been made possible only by the institution of daily newspapers.

The full quote is available in an online version of the essay. However, I started thinking about this, and realized it might be an interesting way to look at some of the following questions:

  • How large (in population or territory) could the Roman Republic have been and still remained a functional entity? Did it get too big for its form of government, leading to civil war and the Empire?
  • Did size have anything to do with the advent of tyrants (in the old sense) in the Greek city-states?
  • Could Alexander the Great have stopped conquering at some point and left behind a stable empire after his death? Related to this, is there any rhyme or reason behind the sizes of the successor states of his generals?

Unfortunately, it doesn’t seem likely that there is a simple scaling law for questions like these. For example, part of the reason Alexander’s empire broke into the pieces it did — Macedonia, Egypt, the Seleucid Empire and Pergamum — is that at least three of these regions had a long prior history of political cohesion. Off the cuff, my guess would be that the following factors play a role:

  • Population size
  • Territorial extent
  • Military power of the regime
  • Economic power and system
  • Historical basis for unification

Thus, my guess would be that looking at scaling laws of population against territorial size, for example, may miss important parts of the equation. On the other hand, I can believe that some measure of complexity in government (whatever that means) may accurately predict at least some transitions in political systems.

Continuing the tradition of never writing about the same topic in two consecutive posts, I’ve been playing around with the idea of representing elementary particles as braids. This idea has been the focus of some attention since the paper by Sundance Bilson-Thompson entitled “A topological model of composite preons”, which revived the old idea of quarks and leptons as composite particles called preons (specifically, it used the rishon model of Harari). This has since been developed further by Bilson-Thompson and collaborators, including my PhD advisor Lee Smolin, in a paper or two.

One problem I have with these works is that there is nothing to cut off the spectrum of particles — the number of generations is infinite! In fact, this is listed as a prediction of the latest paper, arXiv:0804.0037. Since I personally prefer to limit the abundance of particles as much as possible, I got interested in the idea of using elements of a quotient of a braid group. In other words, start with a braid group in its standard presentation, with generators \sigma_1, \sigma_2, \cdots given by crossings of adjacent strands, and add the condition \sigma_1 ^k = I for some power k. Note that the braid relation \sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1} then ensures that every generator satisfies the same kind of condition. Luckily, only a finite number of such quotients are themselves finite groups, as detailed in the braid theory book “A Study of Braids” by Murasugi and Kurpita.
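Written out explicitly (this is just the standard braid group presentation, with the extra relation being the only new ingredient), the quotient group is

B_n (k) = \langle \sigma_1, \ldots, \sigma_{n-1} \mid \sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}, \; \sigma_i \sigma_j = \sigma_j \sigma_i \ \text{for} \ |i-j| \ge 2, \; \sigma_1 ^k = I \rangle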

To show how this works, let’s go through the simplest case (ignoring those which are isomorphic to the permutation groups on n elements). This is B_3 (3), the group of braids with three strands and the condition \sigma_1 ^3 = I. This group has 24 elements, which would correspond to 23 particles since we designate the identity element I as the vacuum state. Thus, we’re almost but not quite at the number of quarks in the standard model when we include all flavors, chiral states and antimatter. Unfortunately, the braid-particle correspondence does not give this exactly, although it is interesting to see what we do get, so let’s start working! By the way, there is a nice group homomorphism from B_3 (3) to the 3×3 matrices with entries in {\mathbb Z}_3 which is easy to work with.

For simplicity, I started labeling the group elements according to their order |g|, i.e. the smallest power k for which g^k = I. This gives

  • Order 1: \alpha_0
  • Order 2: \alpha_1
  • Order 3: \alpha_2, \alpha_3, \cdots, \alpha_9
  • Order 4: \alpha_{10}, \alpha_{11}, \cdots, \alpha_{15}
  • Order 6: \alpha_{16}, \alpha_{17}, \cdots, \alpha_{23}
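As a sanity check on these counts, here is a minimal Python sketch. It assumes a small 2×2 matrix realization over {\mathbb Z}_3 (a different, smaller representation than the 3×3 one mentioned above, and an assumption on my part that it is faithful on B_3 (3)), enumerates the group generated by the two matrices below, and tallies the element orders:

```python
from collections import Counter

MOD = 3  # entries live in Z_3

def mat_mul(a, b):
    """Multiply two 2x2 matrices with entries mod MOD."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) % MOD for j in range(2))
        for i in range(2)
    )

I = ((1, 0), (0, 1))
# Assumed images of the braid generators sigma_1 and sigma_2.
s1 = ((1, 1), (0, 1))
s2 = ((1, 0), (2, 1))

# Sanity checks: the braid relation and sigma_1^3 = I both hold.
assert mat_mul(mat_mul(s1, s2), s1) == mat_mul(mat_mul(s2, s1), s2)
assert mat_mul(mat_mul(s1, s1), s1) == I

# Enumerate the generated group by closing under right multiplication.
group, frontier = {I}, [I]
while frontier:
    new = []
    for g in frontier:
        for gen in (s1, s2):
            h = mat_mul(g, gen)
            if h not in group:
                group.add(h)
                new.append(h)
    frontier = new

def element_order(g):
    """Smallest k > 0 with g^k = I."""
    p, k = g, 1
    while p != I:
        p, k = mat_mul(p, g), k + 1
    return k

print("group size:", len(group))  # 24, as claimed above
print(Counter(element_order(g) for g in group))
# orders 1 and 2 appear once each, order 4 six times, orders 3 and 6 eight times
```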

To tease out the “particle properties” of these braids, we use two operations, charge C and parity P. The charge operator acting on a braid takes it to its inverse; this is equivalent to reflecting through either the top or the bottom of the braid (where the strands are attached). Parity is equivalent to reflecting in a mirror placed to the side of the braid. Whenever I figure out how to use the LaTeX package XY-pic, I’ll draw some pictures! These two operations are sufficient to figure out what kinds of particles we have, although extra structure, such as SU(3), is not so obvious…

Anyway, besides the vacuum state I, we get five “neutrino-type” braids, where acting with C is the same as P — one state is the right-handed particle, the other is the left-handed anti-particle. These braid states I have labeled \alpha_2, \alpha_3, \alpha_{10}, \alpha_{16} and \alpha_{20}, and their anti-matter partners \alpha_4, \alpha_9, \alpha_{11}, \alpha_{21} and \alpha_{23} (obviously, which are the matter and which the anti-matter is arbitrary at this point). There are also three “quark/lepton-type” particles, where C and P do not give the same particle — this means a given particle has separate chiral matter and anti-matter states. The final state is a “scalar-type” particle \alpha_1, in the sense that it has no handedness (P acting on the braid gives the braid back) and is its own anti-particle.

So, instead of the three neutrinos and nine quark/lepton particles, we get something rather different. However, this is only the simplest of the quotients of braid groups; perhaps there is space in the others for the standard model. I hope to explore this further, as well as the question of how to pick out the group representations of SU(3) \times SU(2) which give the particle quantum numbers. At this point, there is no sense of what quantities are conserved, so there is no way to know which particles are stable or not. Perhaps this is a question which will eventually be answered when the braid dynamics are developed.

Greetings internet!

I’ve been negligent of this blog for too long, so I thought I would write some posts reflecting what I have been mulling over in the back of my mind. One project I’ve been working on from time to time is how to set up an efficient transportation system among the Galilean satellites of Jupiter, using the idea of cyclers (such as those proposed by Buzz Aldrin and many others).

Let’s first start by talking about the four Galilean satellites themselves; it turns out that all of them except Callisto are in an orbital resonance, meaning their orbital periods are integer multiples of the smallest period (Io’s). Thus, Io goes around Jupiter twice for every single orbit of Europa, and Io orbits four times for one orbit of Ganymede (all to within about 1%, ignoring the slow precession of the orbits themselves).
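A quick check with approximate published sidereal periods (a small Python snippet; the period values, in days, are rough) shows how close the 1:2:4 pattern is, and why Callisto is the odd one out:

```python
# Approximate sidereal orbital periods of the Galilean moons, in days.
periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155, "Callisto": 16.689}

for moon, nominal in [("Europa", 2), ("Ganymede", 4), ("Callisto", 8)]:
    ratio = periods[moon] / periods["Io"]
    off = 100.0 * (ratio / nominal - 1.0)
    print(f"{moon}: period ratio to Io = {ratio:.3f}, vs {nominal} ({off:+.1f}%)")
```

Europa and Ganymede come out within roughly half a percent and one percent of the 2:1 and 4:1 ratios, while Callisto misses any small-integer ratio by a wide margin.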

With this in mind, one can develop trajectories for cyclers going between a given pair of Galilean satellites; there is a paper by Russell and Strange that develops this in great detail, but in the context of scientific missions to Jupiter a la the Galileo probe. The question I am interested in is whether one can come up with a system that is “efficient” — presumably meaning something like reducing the waiting period necessary to go between any two satellites.

However, before I get into this phase, I’ve been trying to understand the mechanics of cyclers by working my way through another paper by McConaghy, Longuski and Byrnes. In this work, they develop a number of trajectories between the Earth and Mars, extending the original idea of Aldrin. After learning about Lambert’s theorem for finding the orbit that connects two points in a specified transfer time, I coded the appropriate algorithms into Maple, to see if I could match their results (i.e. their Table 4). At this point, I have managed to reproduce the aphelion radii and the hyperbolic excess velocities v_\infty at Earth and Mars, but not the shortest transfer times listed in the table.
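To give a flavor of the bookkeeping that comes after the Lambert step, here is a minimal Python sketch of one piece of it: given a heliocentric transfer orbit (the perihelion and aphelion below are made-up, cycler-ish numbers, not the actual Table 4 values), the vis-viva equation and the flight path angle give v_\infty relative to a planet assumed to be on a circular, coplanar orbit.

```python
import math

MU_SUN = 1.32712440018e20   # gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

def v_inf_at_radius(r_peri_au, r_apo_au, r_planet_au):
    """Hyperbolic excess speed (km/s) where the transfer orbit crosses the
    orbit of a planet assumed to be on a circular, coplanar orbit."""
    rp, ra, r = r_peri_au * AU, r_apo_au * AU, r_planet_au * AU
    a = 0.5 * (rp + ra)                            # semi-major axis
    e = (ra - rp) / (ra + rp)                      # eccentricity
    v = math.sqrt(MU_SUN * (2.0 / r - 1.0 / a))    # vis-viva speed
    h = math.sqrt(MU_SUN * a * (1.0 - e * e))      # specific angular momentum
    cos_gamma = min(1.0, h / (r * v))              # cosine of flight path angle
    v_planet = math.sqrt(MU_SUN / r)               # circular speed of the planet
    return math.sqrt(v * v + v_planet * v_planet
                     - 2.0 * v * v_planet * cos_gamma) / 1000.0

# Hypothetical cycler-like orbit: perihelion at Earth (1 AU), aphelion at 2.3 AU.
print("v_inf at Earth:", v_inf_at_radius(1.0, 2.3, 1.0))
print("v_inf at Mars: ", v_inf_at_radius(1.0, 2.3, 1.524))
```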

Using the same Maple worksheet for the Jovian system, I have made a list of some cycler trajectories between Io and Europa, some of which cross the orbit of Ganymede and thus may be useful for a “triple transfer” cycler. Since I haven’t sorted out the transfer times, though, I’m stuck at the moment. Hopefully, time will solve this problem…

Unfortunately, it’s been a while since I’ve posted, but here is some work I did a couple of weeks back. Taking the standard map in Diplomacy, I looked at the adjacency matrix of the various territories, to see how many turns it would take to reach the rest of the board from a particular country. I did not take into account the difference between ocean, coast and inland spaces, both to (1) avoid complexity, and (2) so that the idea of convoys could be taken into account at least tangentially.
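The computation itself is just a breadth-first search over the adjacency list. Here is a minimal Python sketch, with only a toy fragment of the board filled in (the spaces and adjacencies below are a tiny sample, and I am taking a country’s starting location to be its home supply centers):

```python
from collections import deque

# A toy fragment of the adjacency list; the full board has many more spaces.
# Ocean, coast and inland spaces are deliberately not distinguished.
ADJACENT = {
    "Vienna":   ["Bohemia", "Galicia", "Budapest", "Trieste", "Tyrolia"],
    "Budapest": ["Vienna", "Galicia", "Rumania", "Serbia", "Trieste"],
    "Trieste":  ["Tyrolia", "Vienna", "Budapest", "Serbia", "Albania",
                 "Adriatic Sea", "Venice"],
}

def distances_from(sources):
    """Minimum number of moves from any of the source spaces (breadth-first)."""
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        here = queue.popleft()
        for neighbor in ADJACENT.get(here, []):
            if neighbor not in dist:
                dist[neighbor] = dist[here] + 1
                queue.append(neighbor)
    return dist

# Distances from Austria's home supply centers.
print(distances_from(["Vienna", "Budapest", "Trieste"]))
```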

Here are a series of maps, which color in the distance from each of the seven player countries:

[Seven maps: distance from Austria, England, France, Germany, Italy, Russia and Turkey]

One interesting thing is how quickly one can get from one side of the board to the other, which was a bit unexpected. Another point is how this correlates to actual data from various internet Diplomacy games. In particular, notice how this matches (or not!) the maps given by Josh Burton in this article. In my next post, I will go into some detail about this, and any lessons learned. It will be neat to see how this carries over into the variant boards of Diplomacy.

I was thinking recently about how tired I am (as a true blue Gen-Xer) of the culture wars that the United States and its political arena have inherited from the Baby Boomer generation. To some extent, this is reflected in the current presidential race, with the age of John McCain and the relative youth of Barack Obama both issues in the campaign.

With this in mind, I was curious to see how the US presidents of the past stacked up against some generational scheme. To start with, I pulled off Wikipedia the generations list developed by William Strauss and Neil Howe in their book Generations (which is on my list of things to read), and matched it with the birth years of the presidents. Below is the table of my results, with columns representing the name of the generation and the years (given by Strauss and Howe), and the presidents listed in birth order.

[Table: US presidents grouped by Strauss-Howe generation, listed in birth order]

The boldface presidents are those born in the first or last year of their generation; the interesting thing is how some (but not all!) of the transitional presidents are also part of a generational change — see Jackson, Teddy Roosevelt, FDR and Truman. Others you might expect, such as Lincoln or the shift from Carter to Reagan, are not so obvious (unless you count Carter as part of the Silent Generation).

I’m sure this is not a new thought, and it’s part of someone’s doctoral thesis somewhere, but still, it’s fun to play with. I hope to come back to this soon, looking at the generations of the current presidential candidates, as well as “generations of experience” — for example, of the veterans on the list, which ones served in the same war?

This past week in my physics classes, I have talked about momentum and collisions. One of the lab activities I tried for the first time this year was using video analysis to find the coefficient of restitution (COR) for various kinds of balls bouncing on either the lab tables or the floor. This got me started thinking about what COR means in the first place.

Checking through all the textbooks at work, I see a couple of places where it is defined, but usually either as a throwaway point or a problem in the momentum chapter. There is not really an explanation of why it might be a useful quantity, or what it describes exactly. Searching on the web finds pages of two types — either academic sites (usually a physics lab or demo page!) or sports pages (for tennis, golf, racquetball, etc.). In the latter, you will see how the COR is a measure of the “bounce” of the ball off a racket or golf club. Why is this?

Using the definition in the Wikipedia article on COR, we have that

e = (v_{2, f} - v_{1, f})/(v_{1, i} - v_{2, i})

for two objects 1 and 2, and their initial and final velocities (really speeds here). This leads one to think COR has something to do with the relative motion of the two objects before and after the collision. Indeed, it does! Let’s consider the collision in the center of mass (CM) frame, i.e. the reference frame where the velocity of the CM is zero. For two objects, this is defined as

{\vec v}_{CM} = (m_1 {\vec v}_1 + m_2 {\vec v}_2)/(m_1 + m_2)

Because the numerator is simply the total momentum vector, which is conserved in collisions, it doesn’t really matter if we use the initial or final velocities (as long as we use the same type!).

In the CM frame, the velocities of the two objects are given by

{\vec u}_1 = {\vec v}_1 - {\vec v}_{CM}

and similarly for object 2. In terms of the masses and the velocities, this means

{\vec u}_1 = m_2 ({\vec v}_1 - {\vec v}_2)/(m_1 + m_2)

{\vec u}_2 = m_1 ({\vec v}_2 - {\vec v}_1)/(m_1 + m_2)

Thus, the total momentum vector in the CM frame is zero:

m_1 {\vec u}_1 + m_2 {\vec u}_2 = 0

Using these facts, we can show that the definition of the COR given above is equivalent to the following equations:

{\vec u}_{1, f} = -e {\vec  u}_{1, i}

{\vec u}_{2, f} = -e {\vec  u}_{2, i}
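To fill in the step between the definition and these equations: the relative velocity is the same in both frames, {\vec v}_1 - {\vec v}_2 = {\vec u}_1 - {\vec u}_2, and the zero total momentum in the CM frame gives {\vec u}_2 = -(m_1/m_2) {\vec u}_1, so that

{\vec v}_1 - {\vec v}_2 = (m_1 + m_2) {\vec u}_1 / m_2

Plugging this into the definition of e, the common factor of (m_1 + m_2)/m_2 cancels between numerator and denominator:

e = (v_{2, f} - v_{1, f})/(v_{1, i} - v_{2, i}) = -(u_{1, f} - u_{2, f})/(u_{1, i} - u_{2, i}) = -u_{1, f}/u_{1, i}

which is the first equation above; the same argument with the roles of objects 1 and 2 swapped gives the second.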

This perhaps makes things clearer. First, there is only one COR for the collision, since it depends on which two kinds of objects are colliding. This COR affects each object in the same manner; for both objects, the COR is the ratio of the final to initial speeds (modulo a sign for direction) in the CM frame. Each object seen in this reference frame will bounce off in the opposite direction it came into the collision, with a speed scaled by the COR. So the larger the COR, the larger the “bounce”, in the sense of remaining speed. A COR of zero will mean the objects stick together; they will move along at the CM velocity (as seen by someone in the lab frame).

Second, we can see how this collision changes the kinetic energy (KE) of the two objects. It is easy to see that

KE_f = e^2 KE_i

for the combined system. Thus, in the CM frame, a zero COR means all the KE goes away, while a COR of unity means KE is conserved (and thus the collision is elastic). You can even arrange it so that the COR is greater than one, by “stealing” from the velocity component parallel to the surface the object is bouncing off of; here, COR is defined in terms of the velocities perpendicular (or normal) to the surface.
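Spelling out the first claim: in the CM frame the total kinetic energy is

KE = (1/2) m_1 u_1^2 + (1/2) m_2 u_2^2

and since each of the speeds u_1 and u_2 is scaled by a factor of e in the collision (the sign flip does not matter once squared), the total KE picks up a factor of e^2.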

Remember that all of this is how the collision is seen in the CM frame! This last caveat is important, since the KE is not invariant under a Galilean boost, i.e. adding a constant velocity to every object. I will address this point in a later post.
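As a quick numerical sanity check on all of this, here is a minimal Python sketch (the masses, velocities and COR below are made-up numbers): it applies the CM-frame rule to a one-dimensional collision and verifies that momentum is conserved and that the textbook definition of e is recovered.

```python
def collide_1d(m1, m2, v1i, v2i, e):
    """1-D collision with coefficient of restitution e, computed by applying
    u_f = -e * u_i to each object in the center-of-mass frame."""
    v_cm = (m1 * v1i + m2 * v2i) / (m1 + m2)   # CM velocity (conserved)
    u1f = -e * (v1i - v_cm)                    # CM-frame final velocities
    u2f = -e * (v2i - v_cm)
    return u1f + v_cm, u2f + v_cm              # back to the lab frame

# Example: a 0.5 kg ball at +4 m/s hits a 2 kg block at rest, with e = 0.8.
m1, m2, v1i, v2i, e = 0.5, 2.0, 4.0, 0.0, 0.8
v1f, v2f = collide_1d(m1, m2, v1i, v2i, e)

# Momentum is conserved, and e matches (v2f - v1f)/(v1i - v2i).
assert abs((m1 * v1f + m2 * v2f) - (m1 * v1i + m2 * v2i)) < 1e-12
assert abs((v2f - v1f) / (v1i - v2i) - e) < 1e-12
print(v1f, v2f)   # final lab-frame velocities
```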

As a start to what I would like to write about, here are some questions or other things I hope to post about soon:

  • Is Hollywood getting more or less derivative? It seems like all blockbuster movies recently are either sequels, prequels, adaptations or otherwise taken from another source. Is this increasing with time, or is it just that I don’t know how this was done decades ago? I intend to analyze where the idea for movies originated, over the entire history of film.
  • The physics of movies — In the classes that I teach, I have started to use clips from various Hollywood movies, which include “good” and “bad” physics (inspired by movie reviews on sites such as Intuitor Insultingly Stupid Movie Physics and the Bad Astronomy movies webpage). Hopefully, I can post the clips themselves, but we’ll see how that goes…
  • The mathematics of Diplomacy — My favorite board game is Diplomacy, and since it does not involve any chance, it is an interesting situation to consider mathematically. Some questions I want to consider are: which countries are natural enemies? which countries are more likely to win, based solely on the adjacency properties of the board? how does this carry over to the variants?

Greetings everyone! I have started this blog as a means of keeping track of random thoughts and interests I have. Any (constructive) comments you have on whatever’s on my mind at the moment are welcome.

A word on the title: I have always thought the name sounded like a cool one for some high-tech corporation I would start one day. Probably I need to point out, though, that “Lucifer” is Latin for “light-bearer”, and as such is used in the context of the planet Venus as the morning star (i.e. preceding the sunrise) or for Prometheus, who brought fire to humanity in Greek myth. The name of my blog does not mean I am a Satanist…