A particle automaton (PA) is similar to a cellular automaton (CA), but its rules address moving particles rather than fixed cells. Like a CA, a PA is defined over discrete space using discrete time. A given PA rule defines a set of particle types, K, and a collision function, C( ). When the lattice is populated with particles, each of a given particle type and possibly carrying extra state information, the particle movement rules and collision rules determine the next generation.
PA's typically have properties similar to CA's.
Squier and Steiglitz [94] refer to this as a “particle machine.”
A PA is a set of simple rules for defining particle movement and interactions. Since many physical processes are, at their core, just groups of particles moving around and interacting, a PA should make a good model for them. And since both time and space are discrete, it produces exact results, unlike models of continuous behavior, which can suffer from roundoff problems when run on a computer.
Additionally, PA's may prove to be an interesting place to explore emergent complexity. Wolfram [84] defines a class IV cellular automaton as one where complex, localized, propagating structures are formed. This is where he finds complexity. Conway's Life rule, which has been shown to be powerful enough to contain a universal computer [82], is a class IV CA. In Life, the "localized, propagating structures" are gliders. In these cases, it's particle-like behavior that seems to be an important part of the complexity of the behavior.
Hordijk, Crutchfield, and Mitchell [95?] analyze various cellular automata rules that solve global problems through local rules. The CA rules were arrived at through a genetic algorithm which selected for rules that executed the task best. They identify "particles," configurations of cells which travel through the lattice carrying information. They then build a simple model of the particle interactions to demonstrate that the particles are responsible for doing the processing of the task.
All these results suggest that a PA should be capable of the same kind of complexity as a CA.
The fundamental difference between CA and PA is whether computation is a property of space itself or of the particles in that space. Computation involves the storage, transmission and processing of information. In a CA, space itself (the collection of cells) is responsible for storing information and the rules on the cells carry out the transmission and processing of the information. In a PA, information is stored in the particles (their type, position and state), transmission occurs by the particles traveling and collisions are responsible for the processing.
An interesting point here is that transmission of information is an emergent property in a CA. Too little or too much transmission makes effective computation difficult or impossible to achieve. Universal computation seems to be most likely in the narrow band of class IV CA where the proper balance of transmission occurs. In contrast, information transmission in a PA is an inherent property of the particles. Might this make universal computation more likely?
A particle automaton can be thought of as a cellular automaton that follows certain guidelines for its rules (and likewise a CA can be modeled as a special kind of PA).
A particle type may have a fixed velocity or a set of allowed velocities. Greek letters will be used to indicate a particle type or an instance of the particle type. Particle types differ based on the results of the collision function: if particles behave differently in a collision, they're considered different particle types. The number of particle types in a rule is represented as k. The set of all particle types for a given PA is K. The velocity describes how many cells the particle moves when transitioning to the next generation, assuming a collision doesn't affect the outcome. In the simplest case, consider a one-dimensional PA on a simple lattice of adjacent cells with a particle type α which has a velocity, v, of 1. With each new generation, a particle of type α moves one cell to the right. We write α | v = 1 to define this particle type. If particle type β has v = −1, or β | v = −1, a particle of its type travels one cell to the left. The set of all allowable velocities for a given rule is V.
A particle type may also define a finite set of states its particles can be in. These states affect collisions. If no particle types have states, the PA is referred to as stateless. If, for example, α can be in one of the states 0, 1 or 2, we write α | s ∈ {0, 1, 2}. The states can be thought of as a shorthand for multiple particle types that share a similar definition and interpretation.
If a particle's velocity is not an integer, it also includes a phase to aid in determining the proper movement. The phase is an integer which starts at 0 when a particle moves to a new cell and increments by 1 for each generation it spends in the cell. So, for example, if a particle γ has v = 1/3, it starts out at phase 0, steps to phase 1, then phase 2, then moves one cell to the right and starts over. The phase can be written as a subscript on the particle type, e.g. γ₂.
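The movement rule above can be sketched in a few lines. The function name and the restriction to fractional velocities of the form ±1/q (the only fractional case described in the text) are assumptions of this sketch:

```python
from fractions import Fraction

def step_position(pos, phase, v):
    """Advance one particle by one generation (ignoring collisions).

    Integer v: the particle moves v cells every generation.
    Fractional v = +/-1/q: the particle waits in its cell while the phase
    counts 0, 1, ..., q-1, then hops one cell in the direction of v's sign
    and resets the phase to 0.
    """
    v = Fraction(v)
    if v.denominator == 1:
        return pos + int(v), 0
    phase += 1
    if phase == v.denominator:          # time to move to the next cell
        return pos + (1 if v > 0 else -1), 0
    return pos, phase
```

For γ with v = 1/3 starting at cell 8, three generations yield phases 1, 2, then a hop to cell 9 with phase 0, matching the figure below.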
In the following figure, instances of the previously discussed particle types α, β and γ are shown traveling in a 1D PA. Time increases going down.

t=0:  .  .  .  .  .  β  .  γ₀ .  α  .  .  .  .  .
t=1:  .  .  .  .  β  .  .  γ₁ .  .  α  .  .  .  .
t=2:  .  .  .  β  .  .  .  γ₂ .  .  .  α  .  .  .
t=3:  .  .  β  .  .  .  .  .  γ₀ .  .  .  α  .  .
t=4:  .  β  .  .  .  .  .  .  γ₁ .  .  .  .  α  .
t=5:  β  .  .  .  .  .  .  .  γ₂ .  .  .  .  .  α
The maximum speed for all particle types in a given PA is referred to as "the speed of light," v_max. Typically, information can't travel through the PA any faster than the speed of light, but this also depends on the collision rules. In most PA's discussed here, we will use v_max ≤ 1. An additional restriction that may be placed on v is to allow only integer or rational velocities. Note that if v_max = 0, the PA is equivalent to a cellular automaton.
The collision function, C( ), defines the results of all the possible collisions for the set of particle types. After all particles' positions have been updated according to their velocities, C( ) is applied to determine the results of any collisions. The input and output are sets of particles, possibly including phase information and the locations of the particles. The output may include multiple particles in the same cell. The number of cells away from a given cell that are examined for collisions is referred to as r. If r = 0, only particles that are all in the same cell can collide. If r = 1, the cell and the immediately adjacent cells are examined for possible collisions, and so forth.
As a simple example, consider a PA with α | v = 1 and β | v = −1. If r = 0, the only possible collision is when α and β occupy the same cell; neither particle type can collide with itself. Define C(α, β) = α, i.e. when α and β collide, the result is a single α. The following shows a collision between these two particles.
t=0:  α  .  .  .  β
t=1:  .  α  .  β  .
t=2:  .  .  α  .  .
t=3:  .  .  .  α  .
Note that at t=2, the movement phase of the PA rule first places α and β in the same cell. The collision phase then turns αβ into α.
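The move-then-collide cycle for this example can be sketched as follows; the names and the list-of-(type, cell) layout are illustrative, not from the text:

```python
from collections import defaultdict

# alpha has v = +1, beta has v = -1, r = 0, and the single collision
# C(alpha, beta) = alpha, as in the example above.
VEL = {"alpha": 1, "beta": -1}

def step(particles):
    """particles: list of (type, cell). Movement phase, then collision phase."""
    cells = defaultdict(list)
    for t, x in particles:
        cells[x + VEL[t]].append(t)          # movement phase
    out = []
    for x, occupants in sorted(cells.items()):
        if sorted(occupants) == ["alpha", "beta"]:
            occupants = ["alpha"]            # collision phase: C(alpha, beta) = alpha
        out.extend((t, x) for t in occupants)
    return out

state = [("alpha", 0), ("beta", 4)]
for _ in range(2):
    state = step(state)
# after two generations the particles meet in cell 2 and only alpha remains
```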
Now consider the following situation with the same two particles.
t=0:  α  .  .  β
t=1:  .  α  β  .
t=2:  .  β  α  .
t=3:  β  .  .  α
In this case, it appears that α and β have passed through each other. This is a consequence of r = 0 rules. If v_max ≤ 1, you need to create r = 1 rules if you wish to ensure that particles can never pass through one another.
When defining a collision between an adjacent a and b, where should the results be placed? If you want the collision results to obey the speed of light, that is, no information can be transferred faster than the speed of light, the results should be put in the same cells in the next generation.
c is the number of collisions represented by C( ). The density, ρ, of C( ) is the average number of particles per collision. The density ratio is the ratio of the average number of output particles to the average number of input particles.
Now let's consider a more complex case with the three particle types α | v = 1, β | v = −1, and γ | v = 1/3. We'll consider a collision function with r = 0. Several things should be pointed out about defining C( ) for these particle types.
• Any particle can collide with any of the other particles, since they all travel at different velocities. Thus we need to consider the collisions αβ, αγ and βγ (order doesn't matter in this case).
• All three particles can collide in the same cell. Thus we need to consider the collision αβγ.
• It's possible for γ to collide with itself in different phases. Thus we need to consider γγ and may want to consider γ₀γ₁, γ₀γ₂ and γ₁γ₂.
• It's possible for α and β to collide with γ in different phases also, so we may want to consider αγ₀, αγ₁, αγ₂, βγ₀, βγ₁, βγ₂, αβγ₀, αβγ₁, αβγ₂, αγ₀γ₁, αγ₀γ₂, αγ₁γ₂, βγ₀γ₁, βγ₀γ₂, βγ₁γ₂, αβγ₀γ₁, αβγ₀γ₂ and αβγ₁γ₂.
If we consider all phases of γ equal when defining C( ), the rule is referred to as phase invariant. In this case, we need to define the results of the collisions αβ, αγ, βγ, γγ, αβγ, αγγ, βγγ and αβγγ. If we instead want the collision results to vary by phase, we define collisions for αβ, αγ₀, αγ₁, αγ₂, βγ₀, βγ₁, βγ₂, γ₀γ₁, γ₀γ₂, γ₁γ₂, αβγ₀, αβγ₁, αβγ₂, αγ₀γ₁, αγ₀γ₂, αγ₁γ₂, βγ₀γ₁, βγ₀γ₂, βγ₁γ₂, αβγ₀γ₁, αβγ₀γ₂ and αβγ₁γ₂. Note that some of these collisions can only happen from certain initial conditions or if C( ) is constructed such that they're produced. As can be seen, the complexity of the collision function increases rapidly as particle types are added, especially types with nonintegral velocities.
Let's pick a phase invariant C( ) and watch a given initial configuration of particles evolve through a few generations. Ø indicates that the particles are destroyed without any new ones being created. Note that even though we're defining a phase invariant rule, we need to specify which phase γ is in when it's the result of a collision.
colliding particles → resulting particles
  αβ   → Ø
  αγ   → α
  βγ   → βγ₀
  γγ   → γ₂
  αβγ  → αβ
  αγγ  → αγ₀
  βγγ  → βγ₀
  αβγγ → Ø
t=0:  α  .  γ₀ .  α  .  .  .  γ₁ .  .  β  β  γ₀ .  β
t=1:  .  α  γ₁ .  .  α  .  .  γ₂ .  β  β  .  γ₁ β  .
t=2:  .  .  α  .  .  .  α  .  .  βγ₀ β  .  .  βγ₀ .  .
t=3:  .  .  .  α  .  .  .  α  β  βγ₀ .  .  β  γ₁ .  .
t=4:  .  .  .  .  α  .  .  β  α  γ₁ .  β  .  γ₂ .  .
t=5:  .  .  .  .  .  α  β  .  .  α  β  .  .  .  γ₀ .
t=6:  .  .  .  .  .  β  α  .  .  β  α  .  .  .  γ₁ .
Boundary conditions for a PA lattice can be defined in several ways. They can be periodic (circular), i.e. the left and right sides are considered to be joined. Or particles can disappear when they pass off the edge. Or the edges can contain a fixed particle which interacts with particles colliding with it, typically reflecting some particles back or annihilating some.
Rules can be equivalent through rotation or reflection.
Some rules are equivalent relativistically, that is, by shifting the frame of reference. Consider particle automaton A with α | v = 1, β | v = 0 and automaton B with α | v = 0, β | v = −1, which have the same C( ). Watching A evolve through time would look exactly the same as watching B with each generation shifted one cell to the right. In general, subtracting a fixed integer from each of the particle velocities for a given rule makes it relativistically equal to another rule.
A rule where each particle type can take one of a set of velocities is obviously equivalent to a rule where each particle type is restricted to a single fixed velocity. In the latter case, there is a particle type for each possible velocity of the former, and C( ) contains the corresponding collisions.
Shepherd particles are particles which keep other particles from going beyond them. For example, you may have two shepherd particles, α and β, which travel slower than particles γ and δ. But each time γ or δ collides with α, it produces a β and a particle which moves away from β, and each time they collide with β, they produce an α and another particle which moves away from α. α and β are effectively shepherding γ and δ, keeping their influence from traveling quickly.
Rules in which none of the particles have states are stateless.
A phaseless rule is one in which the velocities are restricted to integers, thus none of the particles go through phases as they travel.
If C( ) treats all phases of particles with nonintegral speeds the same, the rule is referred to as phase invariant.
A rule is law-abiding if no collisions produce effects that travel faster than the speed of light.
If a cell can contain more than one particle during any generation, it is said to support superposition of particles. If r =0, a rule which doesn’t support superposition must have all collisions produce a single particle or else violate the speed of light so that new particles can be placed beyond the site of the collision.
A rule is order-dependent if the spatial orientation of the collision affects the outcome. If, for example, you have α | v ∈ {0, 1} and β | v ∈ {0, −1}, an α may hit a β from either the left or the right, and you may wish these collisions to have different outcomes.
A 1D rule is symmetric if it meets the following criteria:
Some rules conserve various quantities.
Another variation on this basic set of rules for PA’s is to have the velocity of a particle vary. Or particles could be emitted by a single particle with a particular state. Rules can use probabilities to calculate particle velocities or to determine collision results. Alternately, a particle could occupy multiple cells.
Another approach is to have particles decay. When a particle is created, it starts a counter. When the counter reaches a certain value (perhaps with a certain amount of randomness thrown in), the particle decays. It may decay into a group of other particles, change its own velocity or simply vanish.
Observations
In some circumstances it’s fairly easy for multiple particles to become attached until another particle collides with them. Consider the following particles:
α | v = ½
β | v = 1
If C({α, β}) = {α₁, β}, then a collision between α and β looks like the following:
t=0:  β  .  α₀ .  .  .
t=1:  .  β  α₁ .  .  .
t=2:  .  .  β  α₀ .  .
t=3:  .  .  .  βα₁ .  .
t=4:  .  .  .  .  βα₁ .
t=5:  .  .  .  .  .  βα₁
Even though α and β travel at different speeds, once they collide they stick together until they collide with other particles, becoming a composite particle. Note that neither C({α, β}) = {α₀, β} nor C({α_i, β}) = {α_i, β} would cause composite particles. If you also had γ | v = 0 and C({α, γ}) = {α₀, γ}, α and γ would stick together in a similar manner. It wouldn't be difficult to create collisions with other particles that would reinforce such "stickiness." {α₁, β} and {α₀, γ} then become a sort of composite particle. If nature is a PA, similar rules could account for the tendency of quarks to stick together so strongly.
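A minimal sketch of this sticking behavior, with the phase bookkeeping and starting positions taken from the figure above (variable names are illustrative):

```python
# alpha has v = 1/2 (phase 0/1), beta has v = 1, r = 0, and
# C({alpha, beta}) = {alpha_1, beta}: the collision keeps resetting alpha
# to phase 1, so alpha moves every generation and the pair travels together.
def step(pa, phase, pb):
    phase += 1
    if phase == 2:                 # alpha hops right every second generation
        pa, phase = pa + 1, 0
    pb += 1                        # beta always moves one cell right
    if pa == pb:                   # collision resets alpha to phase 1
        phase = 1
    return pa, phase, pb

pa, phase, pb = 3, 0, 1            # beta starts two cells behind alpha
positions = []
for _ in range(6):
    pa, phase, pb = step(pa, phase, pb)
    positions.append((pb, pa))
# once beta catches alpha, the two occupy the same cell from then on
```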
For r = 0, particles with velocities in the interval (−1, 0) may not collide with those with velocities in (0, 1), but particles with velocities in (0, 1) always collide with each other.
Now let’s look at the likelihood that two arbitrary particles will collide. We’ll start by looking at the 1D case with r =0.
Consider two particles, α and β, where v_α < v_β. β starts out to the left of α, with no other particles on the lattice. β will obviously "catch up" to α at some point in the future. The question is whether they will end up in the same cell and collide, or miss each other and continue on.
If both particles have a velocity in the interval [0,1], then at each step to the next generation each particle will either stay in the same cell or move one cell to the right. Clearly each particle must eventually occupy every cell, in order, from its starting position on to the right. When β is in the cell immediately to the left of α, this gives us the following four possibilities for the next generation:
t:    β  α  .
t+1 (four possibilities):
      β  α  .     neither moves
      β  .  α     only α moves
      .  βα .     only β moves — they collide
      .  β  α     both move
It's obvious that there's no way for the faster β to pass α without colliding with it. In general, relativistic equivalence shows us that any two particles with velocities in the same interval [i, i+1] for some integer i must eventually collide.
With v_α < v_β, let's now choose v_α in [0,1] and v_β in [1,2]. At each step, α will either stay in the same cell or move one cell to the right, while β steps either one or two cells to the right. When β starts out one cell to the left of α, this means there's a chance β will take two steps right at the same time α sits still, so the two particles may never collide.
In general, if i and j are integers with i < j, and v_α is in [i, i+1] and v_β in [j, j+1] with v_α < v_β, there's a chance α and β will pass each other without colliding. The further separated v_α and v_β are, the more likely this is.
With v_α in [0,1] and v_β in [1,2], you can clearly see that increasing the neighborhood considered for collisions to r = 1 instead of r = 0 requires α and β to collide at some point. More generally, with v_α in [i, i+1] and v_β in [j, j+1], r ≥ j − i is necessary to ensure that the two particles always collide rather than pass each other.
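A small simulation illustrates the chance of passing; the specific choice v_α = 1/2, v_β = 2 and the early-exit test are assumptions of this sketch:

```python
# alpha: v = 1/2 (hops right every 2nd generation); beta: v = 2, starting
# `sep` cells to alpha's left. With r = 0 they collide only if they ever
# land in the same cell on the same generation.
def meets(sep, steps=50):
    pa, phase, pb = 0, 0, -sep
    for _ in range(steps):
        phase += 1
        if phase == 2:
            pa, phase = pa + 1, 0
        pb += 2
        if pa == pb:
            return True            # same cell: they collide
        if pb > pa:
            return False           # beta has jumped past alpha for good
    return False
```

Whether the pair meets depends on the initial separation: some separations line up for a collision, others let β hop over α.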
Some common general behaviors starting from a random rule with a random initial state:
Universal computation
At this point it's useful to ask whether PA's are capable of universal computation. If so, any task you could hand to a computer could also be handled by a PA. It's very easy to demonstrate that this is possible, since a Turing machine, as commonly defined, is simply a special case of a PA.
We can create a 1D PA with k = 2, r = 0, v_max = 1 with the following particle definitions:
α | v = 0, s ∈ S₀
β | v ∈ {−1, 1}, s ∈ S₁
where S₀ and S₁ are finite subsets of the integers. The α's are the equivalent of the tape, while β can be viewed as the read/write head. The collision table is equivalent to the machine's program: each combination of α and β in various states produces a new α and a new β in different states, along with a new velocity for β.
However, this requires a new set of PA rules for each different computational task. A more interesting question is whether there exists a particular PA rule which is capable of universal computation. You could then simply change the initial conditions fed to this rule in order to carry out any arbitrary computation.
A simple example of this is to build a PA to emulate the behavior of Fredkin and Toffoli’s Billiard Ball Model (BBM) [82]. They constructed simple rules for defining how idealized billiard balls can bounce off each other and “mirrors” to produce a universal computer. Since their model was based on particle interactions, it’s natural to model this as a PA. If we can show we can emulate the behavior of their BBM, we have shown that a universal computer can be built in a PA.
We will build a 2D PA with k = 3, r = 0, v_max = 1. The rule conserves particles, particle types and mass, but not momentum. As with the BBM, it's reversible.
α has a speed of 1 and can go in any diagonal direction; it's equivalent to a billiard ball in the BBM. We'll create two particle types, β and γ, which don't move and are equivalent to mirrors. β reflects α counterclockwise and γ reflects it clockwise. So we have...
α | v ∈ { (1,1), (−1,1), (−1,−1), (1,−1) }
with α₀ | v = (1,1), α₁ | v = (−1,1), etc.
β | v = 0
γ | v = 0

C( ):
  α_iα_j → α_iα_j
  α_iβ → α_{(i+1) mod 4}β
  α_iγ → α_{(i−1) mod 4}γ
It's now simple to create some of the essential pieces of the BBM in this PA. In the following figures, the α's will generally come in from the left and leave to the right. The positions of an α over time are shown in the same figure for simplicity. When combined with a mirror particle, the results after the collision are shown.

β or γ can be used to deflect the path of α. The following figure shows α₀ collide with γ, reflect clockwise and travel off as α₃.

[Figure: α₀ approaches from the lower left, meets γ (shown as α₃γ), and departs to the lower right as α₃.]
The following shows the path α takes when reflecting off a γ and a β to implement a sideways shift.

[Figure: α₀ reflects off a γ as α₃, then off a β back to α₀, continuing on a path shifted sideways from its original one.]
This figure shows a time delay circuit. The α₀ in the lower left is α at t=0, and the one in the upper left is α at t=5. Without the "mirrors", α would have reached that point at t=3.

[Figure: α₀ is routed around a loop of mirrors (collisions α₀β, α₁β, α₃γ, α₀γ) before continuing on its original path two generations late.]
The following figure is a crossover circuit, with the particles coming in from the left and leaving to the right. Note that the definition of the α collisions automatically allows for crossover, so this circuit isn't strictly necessary, but it makes for simpler equivalence with the BBM.
[Figure: α₃ and α₀ enter from the left, pass through each other (cells shown as α₀α₃), reflect off a γ and a β, and leave to the right on exchanged paths.]
One minor note is that, with r = 0, collisions don't occur until the particles occupy the same cell, so this model differs slightly from the BBM. It shouldn't prove too difficult to adjust the positions of the constructions (similar to Margolus' approach in reconciling his BBMCA with the BBM). Another possible approach would be to use r = 1 to precisely match the actual behavior of the idealized ball collision, so no adjustments need to be made.
It should be clear that the rest of the BBM's constructions are easy to reproduce using these simple rules. Thus there exists a simple PA capable of universal computation.
Modeling PA as CA
We will transform a 1D particle automaton with r = 0 into an equivalent cellular automaton rule.
The state of a cell in the CA represents the set of particles residing in the cell in the PA. If there are k different particle types, a cell’s state in the CA will have k digits. If there can be multiple particles in a cell, but only n instances of a given particle type in the cell, then each digit can take on any of n+1 different values. So, for example, if only a single instance of each particle type can appear in a cell at a time, each digit can take on two different values, hence the state can be represented as a binary number with k digits. Thus there are 2^{k} possible states. In general, there are (n+1)^{k} possible states per cell.
The radius of the neighborhood for the CA transition rule is the smallest integer r′ such that r′ ≥ v_max. We can then fairly easily define the transition rule. First create an intermediate state for the cell by adding in all the particles in its neighborhood whose velocities land them in the cell. C( ) then defines a straightforward mapping to the cell's new state: if the intermediate state consists of at most a single particle, that becomes the cell's new state; otherwise use the collision results.
Consider the following PA:
α | v = 1
β | v = 0
γ | v = −1

C( ):
  αβ → αγ
  αγ → αβ
  βγ → αβγ
  αβγ → Ø
There are three particle types, with only one instance of a particle type allowed in a cell. Therefore there are 2³ = 8 possible states, which can be represented by three binary digits. We will have the digits represent the particle types in decreasing order of velocity, with 0 indicating no particle and 1 indicating a particle. So, for example, 110 would represent both α and β being present in the cell.
The transition rule has radius r′ = 1. If the states of the left neighbor, the cell and its right neighbor are s_{i−1}, s_i and s_{i+1} respectively, the intermediate state, s′, can be defined as
s′ = (s_{i−1} & 100) | (s_i & 010) | (s_{i+1} & 001)
where & represents a bitwise AND, | a bitwise OR, and the numbers are in binary. The following table can then be used to turn this intermediate state into the cell's new state.
intermediate state → new state
  000 → 000
  001 → 001
  010 → 010
  011 → 111
  100 → 100
  101 → 110
  110 → 101
  111 → 000
Note that any intermediate state with no 1's or a single 1 in it is preserved. This is the same for all PA's, since it simply represents a single particle traveling through the lattice. All other intermediate states have a direct correspondence to C( ). So we can see that k + 1 of the transitions are predefined by the nature of a PA. The remaining 2^k − (k+1) entries can each be any of 2^k different states, so there are (2^k)^(2^k − (k+1)) different rules possible. In the previous example this means there are 8⁴ = 4096 possible rules when there are three particle types involved. Note that this scales very quickly: the possibilities are already on the order of 10¹³ for four particle types.
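The whole transformation for this example fits in a few lines; the bit masks and the collision lookup mirror the tables above, while the function name and periodic boundary are assumptions of this sketch:

```python
# Bits, in decreasing order of velocity: 100 = alpha (v = 1),
# 010 = beta (v = 0), 001 = gamma (v = -1).
A, B, G = 0b100, 0b010, 0b001

# Multi-particle intermediate states map through C(); from the table:
# alpha+beta -> alpha+gamma, alpha+gamma -> alpha+beta,
# beta+gamma -> alpha+beta+gamma, alpha+beta+gamma -> empty.
COLLIDE = {0b110: 0b101, 0b101: 0b110, 0b011: 0b111, 0b111: 0b000}

def ca_step(cells):
    """One CA generation over a periodic lattice of 3-bit cell states (r' = 1)."""
    n = len(cells)
    new = []
    for i in range(n):
        # intermediate state: alpha arrives from the left, gamma from the
        # right, beta stays put
        s = (cells[(i - 1) % n] & A) | (cells[i] & B) | (cells[(i + 1) % n] & G)
        new.append(COLLIDE.get(s, s))   # 0- or 1-particle states pass through
    return new
```

A lone α travels right unchanged, and an α meeting a γ turns into αβ in the meeting cell, exactly as C( ) dictates.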
Evolving Rules
Evolving the rules for a system such as a particle automaton or cellular automaton isn't an obviously useful thing to do. The overall behavior of the system is often highly dependent on the exact details of the rules, with the emergent behavior being unpredictable. However, PA rules mutate well, because a single bit flip in the collision result table simply means that one more or one fewer particle is produced from a particular collision. Even with small numbers of particle types, this can be a small change in the overall behavior. (A quick comparison of mutating a single bit versus replacing an entire collision result yielded very similar behavior.)
A typical genetic algorithm combines two rules that were successful in a previous generation, but they may have absolutely nothing in common. Perhaps one area of a rule was responsible for expanding the number of particles while another area reduced the overall number of particles – or more likely different areas had much more subtle tasks that melded into the whole. Combining part of that rule with another rule that got its success from some completely different way of handling tasks will most likely result in a completely useless rule.
You're likely to end up with a particular rule variant dominating the pool once it reaches a certain success level. It will be difficult or impossible for a different type of rule, on a more remote maximal peak, to ever get enough of a foothold to make a difference, especially if mating it with a rule from the main pool results in a very unsuccessful rule. It seems that what you need is some form of sexual selection and speciation. If rules can recognize similar rules, together they can evolve to the local maximum of their class of rules, letting other classes of rules recombine among themselves. A large commingling population tends to be fairly stable; a small population thrown into a new environment is more likely to become a new species. Perhaps occasionally a small subset of rules needs to be broken off from the main group and given a slightly different problem to optimize, just to kick it over toward a different local maximum. You could shift it back toward the original problem and expand the population after a while to get it back on track. This may tend to produce rules that are more adaptable rather than more optimal, however.
· some form of sexual selection and speciation
· small populations broken off from the main group that are pushed somehow: given a slightly different goal, an increased mutation rate, ...
Universal Logic Gates
Next let's spend a little time playing around with logic gates for 1D PA's with r = 0.
A set of logic gates is universal if an arbitrary finite Boolean function can be represented as a composition of the given logic gates. Various sets that are universal are {AND, OR, NOT}, {AND, NOT}, {NAND}, {NOR}.
a    b    a NOR b
0    0    1
0    1    0
1    0    0
1    1    0
NOR (and similarly NAND) is an interesting one to work with because it’s universal by itself. It also forces us to think about how to represent 0’s and 1’s. If we represent 1 with a particular particle type and 0 by the lack of a particle, we then need to work out how to generate a particle given no input. One possibility is given in the following example with k=5, c=6.
α | v = 1
α′ | v = −1
β | v = 0
γ | v = 1
γ′ | v = −1

C( ):
  αβ → βγ′
  αβα′ → βγ′
  αβγ′ → αβ
  α′βγ → βγ′
  βγ → αβγ′
  βγ′ → βγ
[Figure: γ and γ′ bounce between two stationary β "walls", forming a clock. Each time γ reflects off the right wall (βγ → αβγ′), an α is emitted to the right. At the top of the figure an α and an α′ enter from the left and right, and the collisions αβγ′ and αβα′ absorb them so that nothing is emitted; near the bottom, with no inputs, the clock emits an α.]
In this example, γ and γ′ bouncing between parallel β's serve as a clock, sending out a stream of α's if not given any input, which is equivalent to a stream of 0's. When an α and an α′ are sent in, at the top of the figure, you can see that nothing is produced, hence you get 1 NOR 1 = 0. The α that is sent out near the bottom of the figure shows the result when you send in two 0's, which gives you 0 NOR 0 = 1. You can easily see that a single α or α′ sent in would produce no output.
An alternate approach is to have one particle type represent 1 and another represent 0. Then you no longer need a 'clock' to recognize when you have a pair of 0 inputs. The following example has k=5, c=4, with the inputs coming from opposite sides as in the last example. Particles α and α′ represent 0 while β and β′ represent 1; γ represents the immobile logic gate. The figure shows 0 NOR 1 = 0 and 0 NOR 0 = 1.
α, β | v = 1
α′, β′ | v = −1
γ | v = 0

C( ):
  αα′γ → αγ
  αβ′γ → γ
  α′βγ → γ
  ββ′γ → γ
[Figure: first α and β′ approach γ from opposite sides and are absorbed, leaving only γ (0 NOR 1 = 0, no output); then α and α′ approach, and the collision αα′γ leaves αγ, emitting an α to the right (0 NOR 0 = 1).]
Using γ as a logic gate is useful only if you need to add other collision types. You could get rid of γ and simply have the αβ collisions directly encode NOR, so we end up with k=4, c=4.
α, β | v = 1
α′, β′ | v = −1

C( ):
  αα′ → α
  αβ′ → Ø
  α′β → Ø
  ββ′ → Ø
That's about as straightforward as a PA implementation of NOR could be. But how low can we make k and c and still support a NOR gate?
The following example takes a different approach to lower k. As in the first example, it uses the presence or absence of a particle to indicate 1 or 0, but it uses a 'control' particle to tell the logic gate when an answer needs to be produced. Particle type α is used both for input (v=1 and v=2) and for the control (v=0), with particle β representing the logic gate. This results in a rule with k=2 and c=3.
α | v ∈ { 0, 1, 2 }
β | v = 1

C( ):
  αβ → α_{v=1}β
  ααβ → β
  αααβ → β
[Figure: inputs α_{v=2} and α_{v=1} and the control α_{v=0} approach the gate β from the left; the resulting collision determines whether an α_{v=1} is emitted as the answer.]
The next issue is how to string together sets of logic gates in a 1D PA to allow a universal set of logic gates. First off, we'll assume that all the input particles come in from the left, with the result eventually emerging to the right and all the particles performing the logic operations in the middle. This avoids the issue of how some particles might magically appear in the middle of other particles, where tracing the generations backwards would produce invalid collision results.
The obvious problem is how to get the results of one gate to pass unchanged through gates that are in the way in order to allow for arbitrarily nested logic gates. There are several ways to accomplish this.
· You could use a clock similar to the first example, with particles not destined for that logic gate timed so they don’t hit the clock particle, passing through the ‘walls’ of the gate without any effect.
· Use a higher velocity particle which is timed to miss logic gates it should ignore.
· Add extra particle types and collision results that effectively bypass certain gates.
· Have the logic gates 'self-destruct' as they're used. This has the advantage of simplicity, but of course you can only use the system once.
· Have the logic gates go into a ‘fired’ state once they’re used so subsequent particles can ignore them. The final input to the system can be a ‘reset’ particle which restores all the gates to their preferred state.
The self-destruct technique can easily be turned into the fired-reset method with the addition of extra particle types and collision results, so we'll work with a self-destructing rule for simplicity.
For the following rule, k=5 and c=6. These gates can easily be chained together, with all the inputs coming in from the left and all the gates lined up to the right. The final answer comes out on the right side. α represents 0 and β represents 1. γ, δ and ε represent phases of the logic gate, which store information about the initial particle collision.
α, β | v = 1
γ, δ, ε | v = 0

C( ):
  αγ → δ
  βγ → ε
  αδ → β
  αε → α
  βδ → α
  βε → α
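One gate of this rule can be sketched as a pair of table lookups; the Latin letters a–e stand in for α, β, γ, δ, ε, and the sequential feed abstracts away the spatial timing:

```python
# 'a' = 0 and 'b' = 1 arrive in sequence; 'c' is a fresh gate, while 'd'
# and 'e' remember whether the gate's first input was a 0 or a 1.
COLLIDE = {("a", "c"): "d", ("b", "c"): "e",
           ("a", "d"): "b", ("a", "e"): "a",
           ("b", "d"): "a", ("b", "e"): "a"}

def nor_gate(x, y):
    """Feed two input particles into a gate starting in state 'c'."""
    state = COLLIDE[(x, "c")]       # first input converts the gate
    return COLLIDE[(y, state)]      # second collision emits the answer

# Only 0 NOR 0 emits a 'b' (a 1); the gate itself is consumed.
```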
A different solution is to simply have a particle that reflects anything that hits it and let the basic collisions carry out the logic. As before, α represents 0 and β represents 1. This rule has k=3, c=5.
α, β | v ∈ { −1, 1 }
γ | v = 0

C( ):
  αγ → α_{v=1}
  βγ → β_{v=1}
  αα → α_{v=1}
  αβ → Ø
  ββ → Ø
An even simpler solution has k=2, c=3.
α, β | v ∈ { 0, 1 }

C( ):
  αα → β_{v=1}
  αβ → α_{v=1}
  ββ → α_{v=1}
The following shows ((((0 NOR 1) NOR 0) NOR 0) NOR 1) = 0.
t=0:  α  .  β  .  α  .  α  .  β  .
t=1:  .  α  β  .  α  .  α  .  β  .
t=2:  .  .  α  .  α  .  α  .  β  .
t=3:  .  .  .  α  α  .  α  .  β  .
t=4:  .  .  .  .  β  .  α  .  β  .
t=5:  .  .  .  .  .  β  α  .  β  .
t=6:  .  .  .  .  .  .  α  .  β  .
t=7:  .  .  .  .  .  .  .  α  β  .
t=8:  .  .  .  .  .  .  .  .  α  .
t=9:  .  .  .  .  .  .  .  .  .  α
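The same computation can be checked by collapsing the spatial run into successive collisions, an interpretive simplification that relies on the moving particle always carrying the running result:

```python
# The k=2 rule above with 'a' = 0 and 'b' = 1: the leftmost particle moves
# right (v = 1) through the stationary ones, and each collision result
# continues moving right.
RESULT = {("a", "a"): "b", ("a", "b"): "a",
          ("b", "a"): "a", ("b", "b"): "a"}

def eval_chain(bits):
    """Evaluate a left-nested NOR chain, leftmost input first."""
    sym = ["a", "b"]
    moving, *rest = [sym[x] for x in bits]
    for stationary in rest:
        moving = RESULT[(moving, stationary)]
    return sym.index(moving)
```

Feeding in [0, 1, 0, 0, 1] reproduces ((((0 NOR 1) NOR 0) NOR 0) NOR 1) = 0 from the figure above.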