02.26.2019  Status Report: Pac-man Specification and Synthesis (aka Flip)

The primary objective of the last five weeks of research and experimentation was to create a version of the low-level Flip machine synthesis tool that produced linear code, that is, programs that compute functions on binary vector space which are linear and invertible in the linear-algebraic sense.  This objective was not achieved, and may not even be possible given the constraints I opted for.  Nonetheless, I made progress in other ways.

Binary Matrix Equations and SAT Solvers.  The original impetus for seeking linear programs was the observation from December that SAT solvers (tools that search for satisfying assignments of Boolean formulae, or prove that none exist) could be put to use in seeking linear transformations of binary vector space that meet certain input and output conditions.  More specifically, given a set of input and output bit patterns, one can construct a so-called Binary Matrix Equation (BME), i.e. an equation of the form A*B=C where A, B, and C are matrices with entries equal to 0 or 1, normal matrix multiplication applies, and all arithmetic is done modulo 2.  Technically, these matrices are linear operators over binary vector space, which is the n-dimensional vector space built over the finite field with two elements as its scalars, known as F_2 or GF(2) (for Galois Field).  I spent most of my time searching for invertible linear operators, meaning the matrices have inverses, which is equivalent to several other useful conditions (by fundamental theorems in linear algebra).  The condition I used was that an invertible matrix has a non-zero determinant (over GF(2), this simply means determinant 1).
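
For concreteness, here is a minimal sketch in Python (illustrative only, not the synthesis tool itself) of GF(2) matrix arithmetic, storing each row as an integer bitmask: multiplication is an XOR-accumulation of rows, and invertibility is a full-rank check via Gaussian elimination.

    # Minimal GF(2) matrix sketch (illustrative, not the actual tool).
    # A matrix is a list of rows; each row is an integer bitmask over n columns.

    def gf2_matmul(A, B, n):
        """C = A*B over GF(2): addition mod 2 is XOR."""
        C = []
        for a in A:
            row = 0
            for k in range(n):
                if (a >> k) & 1:   # entry A[i][k] = 1 contributes row k of B
                    row ^= B[k]
            C.append(row)
        return C

    def gf2_invertible(rows):
        """Invertible iff full rank (equivalently, determinant 1 over GF(2))."""
        n, rank, rows = len(rows), 0, list(rows)
        while rows:
            pivot = rows.pop()
            if pivot:
                rank += 1
                low = pivot & -pivot   # lowest set bit acts as the pivot column
                rows = [r ^ pivot if r & low else r for r in rows]
        return rank == n

    A = [0b11, 0b10]                  # the 2x2 matrix [[1,1],[0,1]]
    print(gf2_invertible(A))          # True
    print(gf2_matmul(A, A, 2))        # [1, 2], i.e. A*A = I: A is its own inverse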

Binary matrix equations can be translated directly to Boolean formulae, and these formulae can be checked with a SAT solver.  I did just this by integrating the Z3 SMT solver from Microsoft Research (SMT, for Satisfiability Modulo Theories, is a successor to SAT that adds more sophisticated automated reasoning methods and includes SAT as a sub-solver).  I created a tool that allows a person to express a binary matrix equation with unknowns, and then lets the SAT solver search for a model (an assignment of the unknowns) under which the equation is true.  While I ultimately failed to find a path to general purpose computation with linear functions, I found the tool very handy for running various experiments.  In some cases, up to approximately dimension 9, Z3 can find solutions quickly, or prove that no solutions are possible.  Unfortunately, generally around dimension 12, Z3 fails reliably with an internal overflow exception.  It is possible that someone with greater mastery of Z3 could tune it and surpass this obstacle.
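
The flavor of the encoding can be shown with a small sketch using Z3's Python bindings (the z3-solver package; the variable names and example patterns below are illustrative, not taken from the actual tool).  Each entry of an unknown GF(2) matrix A becomes a Boolean variable, a row-times-vector product becomes an XOR-sum, and each input/output pattern pair contributes one constraint per row:

    from z3 import Solver, Bool, Xor, BoolVal, is_true, sat

    n = 3
    # Unknown n x n GF(2) matrix A, one Boolean per entry.
    A = [[Bool(f"a_{i}_{j}") for j in range(n)] for i in range(n)]

    def row_times_vec(row, x):
        """XOR-sum of the row entries selected by the concrete bit vector x."""
        acc = BoolVal(False)
        for a_ij, x_j in zip(row, x):
            if x_j:
                acc = Xor(acc, a_ij)
        return acc

    # Desired input -> output bit patterns (illustrative values).
    pairs = [((1, 0, 0), (1, 1, 0)),
             ((0, 1, 0), (0, 1, 1)),
             ((0, 0, 1), (0, 0, 1))]

    s = Solver()
    for x, y in pairs:
        for i in range(n):
            s.add(row_times_vec(A[i], x) == BoolVal(bool(y[i])))

    if s.check() == sat:
        m = s.model()
        for i in range(n):
            print([1 if is_true(m.evaluate(A[i][j], model_completion=True)) else 0
                   for j in range(n)])
    else:
        print("no linear operator satisfies the patterns")

Invertibility can be imposed in the same style by introducing a second unknown matrix B and asserting A*B = I, letting the solver search for the inverse alongside A.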

Resetting Bits, Glue Code and Clean Bit Supply.  The general approach I took to seeking linear operators that produce desired output bit patterns given input bit patterns included the possibility of adding extra bits (increasing dimension).  The vague notion is that these extra dimensions give room to compute a function that may be intrinsically non-linear on its minimal set of bits via a function that is linear (and invertible) over more bits, and that computes the desired function when restricted to the original bits.  I still believe this is generally possible; however, in the context of the reversible Flip machine, one must address the question of what state these additional extra bits are in when the function is called.  The most difficult situation (though the easiest to use) arises when the state of these extra bits is unknown, i.e. when the target operator is called, the extra input bits could hold any value.  In this case, I believe it can be proved that it is not possible to compute any desired function in an invertible linear way.  However, if one chooses an underlying machine that provides the ability to supply these extra bits always in a clean state (zero) by whatever means, I believe that a general function can be computed as a restriction of a linear and invertible function.  This concession of assuming a supply of clean bits, though, undercuts at least part of the reason for desiring linear invertible operators, namely that compositions of such operators stay within the general linear group over the vector space.  If the computation interleaves these invertible linear operators with non-linear resets of dirty bits into clean bits, the overall program (the computed function) is no longer guaranteed to be a member of the group.  This is disappointing, but we must also keep in mind that staying within the group is not a strict requirement, just a self-chosen goal of interest.
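
As a concrete illustration of why a single reset breaks group membership: resetting a bit whose contents are unknown is the projection P = [[0,0],[0,1]] (zeroing bit 0 of a two-bit register), and det(P) = 0.  Since determinants multiply, any composition A*P*B has determinant 0 regardless of how invertible A and B are, so the composite program falls outside the general linear group.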

The need for setting bits to various desired values comes up immediately in the Flip synthesis tools, for example when synthesizing a range-filled loop, where each value in the range of a sub-program input must be set onto the input bits before the sub-program is executed (e.g. the color-ids 0-15 of the last color demo program).  I think of this vaguely as "glue code".
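
What that glue code has to compute is simple: since a flip is an XOR, the bits to flip between iterations are exactly those where the register's current value and the next range value differ.  A hypothetical sketch (run_subprogram and the register handling are placeholders, not the actual tool's API):

    # Hypothetical range-filling glue code; the names are illustrative.
    def range_filled_loop(run_subprogram, width=4):
        current = 0                        # assume the input register starts clean
        for desired in range(2 ** width):  # e.g. color-ids 0-15
            flips = current ^ desired      # XOR mask: exactly the bits that differ
            current ^= flips               # each set bit would become one FLIP
            run_subprogram(current)

    range_filled_loop(lambda v: print(f"color-id {v:2d} = {v:04b}"))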

Affine Transformations.  One way to cast light on this situation is the distinction between linear transformations and affine transformations.  In linear algebra, an affine transformation is one of the form Ax + B, where A is a matrix, B is a translation vector, and x is the input vector.  Bit resets in binary vector space correspond to translations, since if the input vector v is in state [v_0, v_1, ..., v_{n-1}], then to reset it to zero, one adds v to it.  That is, translating a vector by itself takes it back to the origin, since we are working modulo 2 (i.e. the field has characteristic two).  Translations are not linear functions (since they move the origin), but they can be expressed as linear functions by using so-called homogeneous coordinates, which is how OpenGL handles them, for example.
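
Concretely, the homogeneous-coordinate trick appends one extra coordinate pinned to 1: the affine map x -> A*x + B on n bits becomes the linear map on n+1 bits given by the block matrix [[A, B], [0, 1]], since [[A, B], [0, 1]] * [x; 1] = [A*x + B; 1].  A reset of a register known to hold v is then the special case A = I, B = v.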

Binary Function Space Revisited.  I gained some clarity on how flip functions, linear functions, etc., all fit together as subsets of the set of all functions (operators) on binary vector space F_2^n, and in some cases these sets form groups (generally when they are closed under inverses). [Diagram corrected 2019-03-07].

To summarize: the outermost set is all functions on F_2^n; inside it sit the invertible functions; inside those sit the invertible affine transformations, which contain both the general linear group GL(n, F_2) of invertible linear operators and, as the smallest set in the diagram, the group of flips (the translations).

Binary Function Space and Flip Programming.  The relation between these function spaces and the current Flip programming model, that is, the level-0 FLIP/JUMPTO/JUMPFROM instructions, is not immediately obvious.  Which of these function sets are computable by Flip?  The answer is that the Flip machine can compute any function on binary vector space (i.e. the outermost set above), due to the simple fact that any set of input bit patterns can be mapped to any set of output bit patterns by, at worst, simply encoding a lookup table.  In fact, this is precisely what the current Flip synthesis tool does 😳😬😎.  
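
The lookup-table argument can be made concrete with a tiny sketch (plain Python; the real synthesis tool emits FLIP/JUMPTO/JUMPFROM instructions rather than a dictionary).  For every input pattern x, the output f(x) is reached by flipping exactly the bits of the XOR mask x ^ f(x):

    # Table-based "synthesis" sketch: any n-bit function is computable by
    # dispatching on the input pattern and flipping the differing bits.
    def synthesize_table(f, n):
        """Map each input pattern to the XOR mask of bits to flip."""
        return {x: x ^ f(x) for x in range(2 ** n)}

    table = synthesize_table(lambda x: (x + 1) % 8, 3)  # a sample 3-bit function
    x = 0b101
    print(bin(x ^ table[x]))    # 0b110, i.e. f(0b101)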

Some of the confusion arises from the fact that if we consider only the bit flip operations themselves, then we stay inside the smallest set in the diagram, the group of flips.  But when we add the conditional branching instructions, the entire function space becomes computable.  One notion I continue to be interested in is whether we can choose a middle ground, where we achieve general purpose computation but remain in one of the intermediary groups.  An attractive property is that if the operators belong to a group, and the execution of an operator corresponds to the group action, then any sequence of operations condenses to a single operator, namely their product within the group.  This is weird and may be impossible (but brings to mind vague notions of using paths on a Cayley diagram as conditional branches).  One approach to the question is distinguishing between external conditional branching and internal conditional branching: external meaning the compute substrate provides vector space operators (such as the FLIP instruction) plus explicit instructions for conditional branching (such as the JUMPTO instruction).  Alternatively, a system with internal conditional branching would not have branching instructions/statements; rather, a program would simply be a sequence of operators executing on the vector space, where conditionality is somehow built into the operators.  I do not know whether internal conditional branching is possible in general.
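
The condensation point is just associativity of the group action: executing A, then B, then C on state v computes C*(B*(A*v)) = (C*B*A)*v, so the entire run is equivalent to applying the single group element C*B*A.  (A run interleaved with resets still condenses to a single operator, but, as noted above, not necessarily to a member of the group.)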

Reversibility of Compute Substrate vs. Invertibility of Computed Functions.  I settled this mental riddle: given a reversible compute substrate, such as Flip, and a computed function, how does one interpret the situation where the function may not be invertible but the substrate is executed in reverse?  Say the computed function is f, and we describe the forward execution as v' = f(v), where v is the input bit vector and v' is the resulting output bit vector.  The answer is simply that if f is invertible, then we can describe the reverse execution as f^{-1}(v') = v.  If f is not invertible, we cannot describe the reverse execution with this equation or anything else using f-inverse, since it does not exist.

A related notion is the consideration of using only invertible operators, even when the substrate is not reversible.  One advantage is that such functions can always be undone, by executing their inverses on the forward-only substrate.

One additional point is how to classify the current implementation of reversible branching in the Flip machine, with its one-bit-per-program-instruction execution trace.  Is this essentially transforming any operator into an invertible operator?
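
My hedged intuition, sketched below: pairing an operator's output with enough history to recover the input makes any function injective; this is the standard trick in reversible computing (keeping the whole input is the extreme case), and a branch-bit-per-instruction trace plays the same role whenever trace plus output determine the input.  The pairing here is illustrative, not the actual Flip trace format:

    # Sketch: a non-injective f becomes injective once history is carried along.
    # Here we keep the whole input; Flip's trace keeps one bit per executed
    # instruction instead, which is usually far more compact.
    def with_trace(f):
        return lambda v: (f(v), v)    # output plus history: injective for any f

    f = lambda v: v & 0b110           # not invertible: f(0b000) == f(0b001)
    g = with_trace(f)
    print(g(0b000), g(0b001))         # (0, 0) vs (0, 1): now distinguishable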

Future Directions