Encoding Bitwise Functions as Polynomials
Introduction
Symbolic solvers and theorem provers are powerful tools that can determine whether a collection of statements is satisfiable and can even provide concrete values that satisfy them. Common applications include solving a function for its inputs given its outputs and proving that a function has specific properties.
Interestingly, while building this solver, I realized that I had technically built an automated theorem prover (ATP). As I was exploring the Wikipedia rabbit hole for ATPs, I came across Zhegalkin polynomials[1], and realized that I had rediscovered them.
Polynomials
Definition
Polynomials are expressions consisting of variables and coefficients that involve only the operations of addition, subtraction, multiplication, and non-negative integer exponentiation of variables[2]. You may remember them from math class. A few examples are \(x^2 - 4\) or \(3y + 1\). Note that \(x^y\) is not a polynomial because the exponent \(y\) is a variable rather than a fixed non-negative integer. Each part of the polynomial separated by \(+\) or \(-\) is called a term. A term can be further broken down into the coefficient, variable (also known as the indeterminate, which I will use interchangeably), and degree. For example, in \(3x^2\), \(3\) is the coefficient, \(x\) is the variable, and \(2\) is the degree.
Addition is performed by lining up polynomials by the degree of each term and then adding their coefficients. Subtraction works in the same way. Multiplication works by multiplying every term of the first polynomial with every term of the second.
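To make this concrete, here's a minimal sketch of both operations on coefficient lists (lowest degree first; as we'll see later, this is also how a CAS typically stores polynomials):

def poly_add(p, q):
    # pad to the same length, then add coefficients of matching degree
    n = max(len(p), len(q))
    p = p + [0]*(n - len(p))
    q = q + [0]*(n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # every term of p multiplies every term of q; degrees add
    out = [0]*(len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a*b
    return out

# (2x + 3) + (x^2 - x + 1) = x^2 + x + 4
assert poly_add([3, 2], [1, -1, 1]) == [4, 1, 1]
# (x + 1)(x + 2) = x^2 + 3x + 2
assert poly_mul([1, 1], [2, 1]) == [2, 3, 1]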
Interpretation as structures
A polynomial can be interpreted as the set of all numbers that satisfy its structure. For example, \(2x\) can be interpreted as all even numbers if we restrict the inputs to integers. This can be seen if we evaluate \(x\) at different integers: \(\{0, 1, 2, 3\} \rightarrow \{0, 2, 4, 6\}\). \(2x + 1\) can then be interpreted as all odd numbers.
Imagine we have two polynomials, \(2x + 3\) and \(x^2 - x + 1\). Adding them together results in \(x^2 + x + 4\). If we evaluate all three polynomials at let's say \(3\), we get \(2(3) + 3 = 9\), \((3)^2 - (3) + 1 = 7\), and \((3)^2 + (3) + 4 = 16\). You may notice that adding together the results of the first two polynomials, \(9\) and \(7\), is equivalent to the result of the third polynomial, \(16\). This is no coincidence. When we added the polynomials together, we added their structures together to form the third polynomial.
Composition
At this point, you may be thinking, "Hey, can we evaluate the polynomial's indeterminate at another polynomial?" And the answer is yes! Let's take another look at \(2x + 3\). If we evaluate it at \(y^2 + 1\), we get \(2(y^2 + 1) + 3 = 2y^2 + 5\). This is known as composition and is quite powerful. In our framework, it's akin to evaluating a function with the symbolic result of another function.
We can also use this to create relations between variables in multivariate polynomials. A polynomial in one variable is called a univariate polynomial. A polynomial in more than one variable is called a multivariate polynomial. For example, \(x + y\) is a multivariate polynomial. Imagine we have the multivariate polynomial \(3x^2 + xy^2 + y + 1\). If we want to set the relation \(y = x + 1\), we can evaluate \(y\) for \(x + 1\) to see the result. This would evaluate to \(3x^2 + x(x + 1)^2 + (x + 1) + 1 = 3x^2 + x(x^2 + 2x + 1) + x + 2 = x^3 + 5x^2 + 2x + 2\).
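If you want to check a composition like this yourself, a general-purpose CAS can do the substitution for you. Here's the relation above in SymPy (my choice for illustration here, not the CAS used later in this article):

import sympy as sp

x, y = sp.symbols('x y')
p = 3*x**2 + x*y**2 + y + 1
# impose the relation y = x + 1 by substituting it for y
print(sp.expand(p.subs(y, x + 1)))  # x**3 + 5*x**2 + 2*x + 2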
Solving with polynomials
Imagine we have a complicated function \(f(a)\) defined as follows:
def f(a):
    a -= 53
    for i in range(1, 17):
        if not i % 7:
            a *= i*a
        elif i == 5:
            a += 20
        else:
            a += i+1
        a -= 5
    return a - 107
At which integer values of \(a\) does \(f(a) = 1597172565\)? Finding them by hand would probably be quite difficult. However, it's fairly simple if we use polynomials. Using a computer algebra system (CAS), we can plug a variable \(x\) into \(f\) and get the polynomial result.
└─$ poly = f(x)
<Polynomial: (686)*x^4 + (-115248)*x^3 + (7267288)*x^2 + (-203857248)*x + 2146394847, coeff_ring=ZZ>
Now we just have to get the CAS to solve \(686x^4 - 115248x^3 + 7267288x^2 - 203857248x + 2146394847 = 1597172565\) for \(x\), giving us the solutions \(3\) and \(81\). Internally, this works by using complicated algebraic algorithms for the common problem of root finding. The roots of a polynomial are the values for which it evaluates to zero. However, you may notice that we're not solving for zero, but for \(1597172565\). This is easily remedied by subtracting \(1597172565\) from both sides, giving us \(686x^4 - 115248x^3 + 7267288x^2 - 203857248x + 549222282 = 0\).
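As a sanity check with an off-the-shelf CAS, the same idea works in SymPy (an assumption for illustration; the article's CAS is different): since \(f\) only branches on the concrete loop counter \(i\), we can pass a symbol straight through it and solve the result.

import sympy as sp

x = sp.symbols('x')
poly = sp.expand(f(x))  # f as defined above; its branches never inspect a
solutions = sp.solve(sp.Eq(poly, 1597172565), x)
print([s for s in solutions if s.is_integer])  # [3, 81]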
Using this method, we can solve certain types of functions for their inputs given the outputs. Specifically, the functions must only perform operations that can be represented by a polynomial. This hints to us that polynomials may be a viable option for representing and solving functions in general.
Mathematically representing bitwise functions
Binary operations over bits
First, we must choose a suitable mathematical representation for bits and their operations. Recall that bits are either \(0\) or \(1\) and have various operations such as \(XOR\), \(OR\), and \(AND\). To do this mathematically, I used abstract algebra to construct a field that could accurately re-create these operations. It's not necessary to understand what that means, but I wanted to include that explanation for the algebraically inclined.
I represented a bit as an element of \(Z/Z_2\) or "Z mod 2". Basically, this means we have two elements \(0\) and \(1\), and we can do normal arithmetic with them, but we must take the result modulo \(2\) after every operation. For our purposes, modulo is a fancy word for "divide and return only the remainder." For example, \(1 + 1 = 2 \equiv 0 \pmod{2}\). Using this simple rule, we can re-create every unary and binary bit operation. In fact, we only need \(NOT\) and \(AND\) because every other bit operation can be constructed using those two.
Here are a few examples:

| Name | Definition |
|---|---|
| \(XOR(a,b)\) | \(a + b\) |
| \(AND(a,b)\) | \(a * b\) |
| \(NOT(a)\) | \(a + 1\) |
| \(OR(a,b)\) | \(XOR(XOR(a,b), AND(a,b))\) |
| \(IMPLIES(a,b)\) | \(OR(NOT(a), b)\) |
| \(EQ(a,b)\) | \(XOR(a, NOT(b))\) |
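These definitions are easy to spot-check by brute force, with the Python integers \(0\) and \(1\) and mod-2 arithmetic standing in for the field:

XOR = lambda a, b: (a + b) % 2
AND = lambda a, b: (a * b) % 2
NOT = lambda a: (a + 1) % 2
OR = lambda a, b: XOR(XOR(a, b), AND(a, b))

for a in (0, 1):
    for b in (0, 1):
        assert XOR(a, b) == a ^ b
        assert AND(a, b) == a & b
        assert OR(a, b) == a | b
    assert NOT(a) == 1 - a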
Symbolic representation using polynomials
So now we can represent binary operations on concrete values of bits, but how can we represent them symbolically? Well with polynomials, of course! Something not generally discussed in high school math is that polynomials must be defined over algebraic structures such as our field, \(Z/Z_2\). This is required because we need to know how to perform arithmetic with the coefficients and what values can be substituted for the variable.
From here on, all polynomial arithmetic will take place in \(Z/Z_2\) unless stated otherwise. For example, if \(f(a) = a^2 + a\) and \(g(a) = a + 1\), then \(f + g = a^2 + 2a + 1 = a^2 + 1\). If we add \(a\) to \(a\), we get the polynomial \(0\) because the coefficient \(2 \bmod 2 = 0\). This should make sense as addition in \(Z/Z_2\) is \(XOR\), and anything \(XOR\) itself is \(0\). In addition, when we evaluate a polynomial, it will result in an element of \(Z/Z_2\).
There's another subtle behavior in \(Z/Z_2\) that might not be immediately apparent. Let's look at a degree two polynomial. We know that \(a^2 = a*a = a \& a\). But that's just redundant because \(a \& a = a\). Thus, any degree greater than one can be reduced to one.
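Both reductions are easy to verify with \(0\) and \(1\) standing in for the field elements:

for a in (0, 1):
    assert (a * a) % 2 == a  # a^2 = a, so any higher degree collapses to one
    assert (a + a) % 2 == 0  # 2a = 0: anything XOR itself is 0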
So far, we've only been working with one variable, \(a\). However, \(a\) only represents one bit. Let's call the second bit \(b\). In order to work with both variables, we can create a multivariate polynomial in indeterminate \(b\) whose coefficients are polynomials in \(a\).
From here on, we'll write bit operations in code using the following Python operator symbols:

| Name | Symbol |
|---|---|
| \(XOR\) | \(\wedge\) |
| \(AND\) | \(\&\) |
| \(NOT\) | \(\sim\) |
| \(OR\) | \(\vert\) |
| \(IMPLIES\) | \(@\) |
| \(EQ\) | \(==\) |
Imagine we're working with this function:
def f(a, b):
    c = a ^ b
    return (c | a) & (b @ ~a) & b
The symbolic compiler reads the function's signature, creates two symbits \(a\) and \(b\), and then runs them through the function.
The result is:
<SymBit: value=<Polynomial: (a + 1)*b, coeff_ring=ZZ/(ZZ(2))[a]>>
You can see that the resulting symbit is a simplified version of the function. What it's saying is that \(f\) is equivalent to \(\sim a \& b\). This simplification is purely a result of our degree reduction and the fact that zero coefficients are deleted.
Since the underlying value is a polynomial, we can evaluate the symbit by specifying the values to use for \(a\) and \(b\). We denote evaluating the symbit \(s\) for \(a=1\) and \(b=0\) as \(s(a=1,b=0)\). We can also perform a partial evaluation of the symbit by not evaluating every variable. For example, we can partially evaluate \(s(a,b) = (a + 1)*b\) like \(s(a,b=0) = (a + 1)*0 = 0\). Partial evaluation is possible because \(s(a,b) = s(a)(b)\).
Bit vectors
Let's define another convenience wrapper called a bit vector. The bit vector is just a fixed size vector of symbits. Normal bit operations are possible by lining up the bit vectors and performing the operations on each pair of elements. Here's an example of \(XOR\) with 4-bit bit vectors:
\(XOR([0, 1, 1, 0], [1, 0, 1, 1]) = [1, 1, 0, 1] \)
We can define left shifts and right shifts by shifting the vector whichever direction the designated number of times and padding the opposite side with zeroes.
\(SHFTL([0,1,1,0], 1) = [1, 1, 0, 0] \)
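Here's a minimal sketch of these two operations over plain 0/1 lists, most significant bit first to match the examples above (the names are illustrative):

def bv_xor(u, v):
    # element-wise XOR of two equal-length bit vectors
    return [a ^ b for a, b in zip(u, v)]

def bv_shftl(u, n):
    # left shift: drop n bits from the front, pad the right with zeroes
    return u[n:] + [0]*n

assert bv_xor([0, 1, 1, 0], [1, 0, 1, 1]) == [1, 1, 0, 1]
assert bv_shftl([0, 1, 1, 0], 1) == [1, 1, 0, 0]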
When we define a function that uses bit vectors, we'll denote their type and size in the function signature like so:
def f(a: BitVector[4], b: BitVector[4]):
    return a & b | (a >> 2)
The symbolic compiler reads the function signature to determine the name of the bit vectors and their size. For the bit vector \(a\) of size \(4\), the compiler will define the bit vector's symbits to be \([a_3, a_2, a_1, a_0]\). Likewise, \(b\) will be \([b_3, b_2, b_1, b_0]\).
The result of compiling this function would be another bit vector: \([a_3*b_3, a_2*b_2, (a_1*a_3 + a_1)*b_1 + a_3, (a_0*a_2 + a_0)*b_0 + a_2]\)
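We can double-check the compiled bit vector by comparing it against the concrete 4-bit function for every possible input (a throwaway brute-force sketch):

from itertools import product

def f_concrete(a, b):
    return (a & b | (a >> 2)) & 0b1111

def f_compiled(a, b):
    a3, a2, a1, a0 = ((a >> i) & 1 for i in (3, 2, 1, 0))
    b3, b2, b1, b0 = ((b >> i) & 1 for i in (3, 2, 1, 0))
    bits = [(a3*b3) % 2,
            (a2*b2) % 2,
            ((a1*a3 + a1)*b1 + a3) % 2,
            ((a0*a2 + a0)*b0 + a2) % 2]
    return int(''.join(map(str, bits)), 2)

assert all(f_concrete(a, b) == f_compiled(a, b)
           for a, b in product(range(16), repeat=2))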
The bit vector implementation allows us to either evaluate it bit-by-bit (e.g., \(bv(a0=1, b1=0)\)) or set an entire variable to an integer that it will automatically decompose (e.g., \(bv(a=7) \rightarrow bv(a3=0, a2=1, a1=1, a0=1)\)). Like symbits, bit vectors support partial evaluation and composition.
Solving symbolic expressions
Defining the problem
Solving expressions is a key feature of symbolic solvers... obviously. Earlier, we looked at how univariate polynomials can be solved for desired outputs using root finding. However, our system is much more complicated. Not only are we using multivariate polynomials with many variables, but we also have several polynomials to solve! Conceptually, this sounds complicated, but perhaps we can write a simple algorithm that exploits the structure of the polynomials.
We should first go over how the computer algebra system represents polynomials. Let's say we have a polynomial over the integers \(3x^2 + 2x + 1\). Internally, that would be represented as a list, \([1, 2, 3]\), where index zero is the degree zero term, index one is the degree one term, etc. In our representation, anything above degree one is redundant, so we'd only have a two-element list.
How does a multivariate polynomial such as \(a*b + 1\) get stored? \(a\) is the variable of a polynomial with coefficients in \(Z/Z_2\), but \(b\) is the variable of a polynomial with coefficients in \((Z/Z_2)[a]\); its coefficients are polynomials in \(a\). That means that \(a*b + 1\) looks like \([[1, 0], [0, 1]]\).
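Here's a toy evaluator for this nested-list form, assuming (per the degree reduction) that every list has exactly the two entries \([c_0, c_1]\):

def eval_nested(p, values):
    # p is a bit or a coefficient list [c0, c1]; values lists the
    # innermost variable first, e.g. (a, b) for [[1, 0], [0, 1]]
    if not isinstance(p, list):
        return p % 2
    c0 = eval_nested(p[0], values[:-1])
    c1 = eval_nested(p[1], values[:-1])
    return (c0 + c1*values[-1]) % 2

ab_plus_1 = [[1, 0], [0, 1]]  # a*b + 1
for a in (0, 1):
    for b in (0, 1):
        assert eval_nested(ab_plus_1, (a, b)) == (a*b + 1) % 2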
We know that our multivariate polynomials have a recursive structure (polynomials with polynomial coefficients), and that every polynomial in that structure is maximally degree one. Let's generically represent these polynomials as \(c_1*x + c_0\), where \(c_1\) is the degree one coefficient, \(x\) is the indeterminate, and \(c_0\) is the degree zero coefficient. We'll define one more variable, \(out\), to represent the desired output. Altogether, we'll represent any polynomial in this system as \(c_1*x + c_0 = out\).
Creating the algorithm
Now that we have our representation, we should observe the effect of the \(out\) variable. Recall that addition in \(Z/Z_2\) is \(XOR\). If \(out\) is \(0\), then \(c_1*x + c_0 = 0\). Therefore, \(c_1*x\) must equal \(c_0\). When \(out = 1\), \(c_1*x\) must not equal \(c_0\). We can use this information to reduce the number of states we explore.
What kinds of rules can we come up with when we set \(out\) to \(0\) or \(1\)? Let's start with \(out = 0\). This means \(XOR(c_1*x, c_0) = 0\), so \(c_1*x = c_0\). Let \(c_0 = 1\); then \(c_1*x = 1\). The only way that \(c_1*x = 1\) is if both \(c_1=1\) and \(x=1\). If \(c_1\) is also a polynomial, then we recursively apply the same idea to \(c_1\) and solve it for \(out = 1\) because we know \(c_1\) must evaluate to \(1\).
We continue recursing until we get to a polynomial with constant coefficients (that means they're not variables). For example, imagine we're solving \((a+1)*b = 1\). Here, \((a+1)\) is the coefficient of \(b\). Using our above rule of \(c_1*x = 1 \rightarrow (c_1=1, x=1)\), we find that \(b = 1\) and solve its polynomial coefficient \(a+1\) for \(1\). We now have the equation \((a+1)=1\). Adding \(1\) to both sides, we get \(a=1+1=0\), so we return the solution \((a=0, b=1)\).
Now let \(c_0 = 0\) and \(out = 0\), then \(c_1*x = 0\). This will be true if either or both \(c_1 = 0\) and \(x = 0\). Thus, there are three possible configurations: \((c_1 = 0, x = 0)\), \((c_1 = 1, x = 0)\), and \((c_1 = 0, x = 1)\). If \(c_1\) is a polynomial, then we also need to solve it for both \(0\) and \(1\). This is a more complicated situation because the solution can be any of these three configurations, and each of the coefficients may have sub-solutions as well. We need to create some sort of structured solution system to handle the layered complexity.
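For systems this small, we can confirm the rules by brute force (the real solver recurses on the \(c_1*x + c_0\) structure precisely to avoid this kind of exhaustive enumeration):

from itertools import product

def solve_brute(poly, n_vars, out):
    # exhaustively test every assignment of the variables
    return [bits for bits in product((0, 1), repeat=n_vars)
            if poly(*bits) % 2 == out]

# (a + 1)*b = 1 forces the unique solution (a=0, b=1)
assert solve_brute(lambda a, b: (a + 1)*b, 2, 1) == [(0, 1)]
# c1*x = 0 has exactly the three configurations listed above
assert solve_brute(lambda c1, x: c1*x, 2, 0) == [(0, 0), (0, 1), (1, 0)]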
The constraint system
To help us with this task, we'll need a recursive solution system with structural and behavioral rules. Let's define a few objects that will help us keep track of the solution space.
The constraint system, \(CS\), holds one or more constraints. All of these constraints must be satisfied for a solution to be valid. Constraint systems can be seen as performing an \(AND\) between all of their constraints. In order to ensure a consistent and simplified model, a constraint system cannot be a direct child of another constraint system, and the topmost object must be a constraint system. Note that removing these rules would allow infinitely many different representations of the same constraint system, for example, \(CS([EQ(x, 1), EQ(y, 1)]) == CS([CS(EQ(x, 1)), CS(EQ(y, 1))])\).
The equals constraint, \(EQ\), specifies that a variable must equal a concrete value, either \(0\) or \(1\). We'll denote it as \(EQ(x, val)\).
The one-of constraint, \(OO\), specifies that every solution satisfies exactly one of its child constraint systems. We denote it as \(OO([CS_0, CS_1, …])\).
The any constraint, \(ANY\), specifies that a variable \(x\) can be either \(0\) or \(1\). It's a free variable. We denote it as \(ANY(x)\).
Next, we'll define the interactions between the constraints. Constraint systems act as an \(AND\), so adding two of them together results in a constraint system with the union of their constraints.
Adding two \(EQ\) constraints together that act on different variables returns a constraint system with both constraints. If they act on the same variable, then they must agree; otherwise, we throw a NoSolutionException to represent unsatisfiability (UNSAT). Adding an \(EQ\) to an \(OO\) prunes any child constraint systems that the \(EQ\) makes UNSAT, and then returns a constraint system with the \(EQ\) and the pruned \(OO\). An \(EQ\) and an \(ANY\) returns a constraint system with both if they act on different variables, and just the \(EQ\) if they act on the same variable.
Adding two \(OO\) constraints creates a resultant \(OO\) by taking the product of their child constraint systems and pruning the UNSAT ones. Adding an \(OO\) to an \(ANY\) creates a constraint system with both constraints if they act on different variables, and returns the unchanged \(OO\) if they act on the same variable.
Adding two \(ANY\) constraints creates a constraint system with both if they act on different variables, and just one of them if they act on the same variable. An important property of \(ANY(x)\) is that it is equivalent to \(OO([(x=0), (x=1)])\). Imagine adding \(n\) of these \(OO\) constraints with different variables. Since we must take the product of them, we would get an \(OO\) with \(2^n\) child constraint systems. Using \(ANY\) constraints, we can represent this product with \(n\) constraints.
Finally, let's discuss solution generation. \(EQ(x, val)\) generates one solution for one variable, \((x = val)\). \(ANY(x)\) generates two possible solutions for a single variable, \((x=0)\), \((x=1)\). \(CS\) generates the product of all solutions generated by its child constraints. Lastly, \(OO\) generates solutions from each of its child constraint systems and returns those as its solutions.
Let's take another look at \(c_1*x = 0\). Recall that \(c_1\) is the coefficient and \(x\) is the indeterminate. Our constraint system would model this as \(CS(OO([CS([EQ(c_1, 0), ANY(x)]), CS([ANY(c_1), EQ(x, 0)])]))\). This is saying that there are two constraint systems that satisfy the equation:
- Set \(c_1\) to \(0\), and let \(x\) be free
- Let \(c_1\) be free, and set \(x\) to \(0\)
Note that while there are only two child constraints, together they generate three distinct solutions: \((c_1=0, x=0), (c_1=0, x=1), (c_1=1, x=0)\).
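Here's a toy version of solution generation for exactly this example, with dicts as solutions and deduplication at the end (the article's real objects also implement the addition and pruning rules described above):

from itertools import product

def gen_EQ(var, val):
    return [{var: val}]

def gen_ANY(var):
    return [{var: 0}, {var: 1}]

def gen_CS(children):
    # product of the child constraints' solutions, merged together
    sols = [{}]
    for child in children:
        sols = [{**s, **c} for s, c in product(sols, child)]
    return sols

def gen_OO(systems):
    # solutions of each child constraint system, concatenated
    return [s for child_sols in systems for s in child_sols]

oo = gen_OO([gen_CS([gen_EQ('c1', 0), gen_ANY('x')]),
             gen_CS([gen_ANY('c1'), gen_EQ('x', 0)])])
distinct = {tuple(sorted(s.items())) for s in oo}
assert len(distinct) == 3  # (c1=0, x=0), (c1=0, x=1), (c1=1, x=0)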
Analyzing the symbolic representation
Before we compile more complex functions, we should analyze the properties of our representation to get a good idea of its strengths and limitations.
Branchless
One limitation lies in the polynomial structure. Polynomials cannot represent all functions simply because their definition does not allow things such as \(a^b\). Let's take a closer look at what this means. \(a^b\) is really multiplying \(a\) by itself \(b\) times.
def f(a: BitVector[4], b: BitVector[4]):
    c = 1
    for _ in range(b):
        c *= a
    return c
The obvious problem is that \(b\) is symbolic and not a concrete value. How do we know when to stop? Maybe this is just a simple programming issue. Of course, we can't give Python a symbolic value to loop over! Let's try it again but try to be really careful about how we use the bit vectors.
def f(a: BitVector[4], b: BitVector[4]):
    c = 1
    d = 1
    while True:
        d += 1
        c *= a
        if b ^ d:  # This is not concrete!
            break
    return c

Here we define the truthiness of a bit vector to be False for all zeroes and True otherwise. However, we still don't know when to stop! It's true that during evaluation, \(b\) could be a concrete value, and this would be valid. Unfortunately, this just won't work if \(b\) is not concrete at compile time. Why? Because we can't branch on a symbolic value. Branching is simply not possible. There are a lot of subtle ways branching happens. In general, deciding "how many" or "when to stop" causes a branch. This means that symbolic values cannot be used as size information.
The following is a more subtle example of attempting to use a symbolic value as size information:
def f(a: BitVector[4], b: BitVector[4]):
    d = [a]*b  # b is size information!
    c = 1
    for val in d:
        c *= val
    return c
A lot of this falls out of the fact that the function \(f\) is not being directly compiled; the function manipulates symbolic objects that record all state changes. Because the representation is branchless and we are evaluating a finite polynomial, evaluation always halts, and thus our representation is not Turing complete.
Cardinality of bitwise functions
We are also able to determine the exact number of functions that can be represented with \(n\) bits of input and \(m\) bits of output. Recall that polynomials over \(Z/Z_2\) are internally represented as a two-element list. Let's start with representing a function using a single symbit. It has a list \(l = [i_0, i_1]\) with entries \(i_0 \in \{0,1\}\) and \(i_1 \in \{0,1\}\). Therefore, there are \(4\) possible states for \(l\).
How many polynomials can there be with two symbits? Recall that multivariate polynomials are represented as a list of lists. Each of its entries can take on \(4\) different values, so there are \(4*4 = 16\) polynomials. If there were a third symbit, it would be \(16*16 = 256\). This grows at a rate of \(2^{2^n}\). For \(m\) bits of output, each of the \(2^n\) possible inputs can map to any of the \(2^m\) possible outputs, so there are \((2^m)^{(2^n)} = 2^{2^n m}\) \(n\)-to-\(m\) bit functions. No matter how many steps we arbitrarily add to the function, it will simplify to one of these functions.
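The \(n=2\), \(m=1\) case is small enough to verify by enumeration; here's a quick sketch:

from itertools import product

# all 2-variable polynomials: one coefficient in {0, 1} each for 1, a, b, a*b
tables = set()
for c0, c1, c2, c3 in product((0, 1), repeat=4):
    f = lambda a, b: (c0 + c1*a + c2*b + c3*a*b) % 2
    tables.add(tuple(f(a, b) for a, b in product((0, 1), repeat=2)))

assert len(tables) == 16  # 2^(2^2) distinct truth tables, each exactly once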
The number of \(n\) bit functions is a well-known result[3], and the fact that our representation can encode exactly that many means that we can actually encode every \(n\) bit function.
Constructing an ALU
Introduction
An arithmetic logic unit (ALU) is the part of a CPU responsible for various arithmetic operations. Generally, it performs addition, subtraction, multiplication, division, left rotates, right rotates, etc.
Conceptually, building the ALU using the symbolic compiler is simple. We just need to implement its hardware algorithms using bitwise functions. I will not be going over implementing each function because the algorithms are well known. However, a problem quickly arises from using the symbolic representation: conditionals. Recall from the analysis that we are not allowed to branch. Is building the ALU even still possible? Well yes, we just have to encode the result of the conditional in a branchless way.
Branchless conditionals
First, we create a function called \(NZTRANS\), short for "not zero transform." It takes in a bit vector, performs \(OR\) between all of its bits, and sets each bit to that value. If the bit vector is all zeroes, the output will be zeroes. Otherwise, it will be all ones.
def NZTRANS(a):
    """Transforms non-zero bitvectors to ALL ones"""
    for i in range(a.SIZE.bit_length()-1):
        a |= a >> 2**i
    for i in range(a.SIZE.bit_length()-1):
        a |= a << 2**i
    return a
Next, we define a function, \(IFNZ\), short for "if not zero." It takes in a control bit vector \(c\) and two "statement" bit vectors \(s1\) and \(s2\). Recall that we've defined the truthiness of a bit vector to be False if all zeroes and True otherwise. We first use \(NZTRANS\) on \(c\), making it either \(\{0\}^n\) (all zeroes) or \(\{1\}^n\) (all ones). We then \(AND\) it with \(s1\), \(AND\) its inverse with \(s2\), and \(XOR\) those results together.
def IFNZ(c, s1, s2):
    """If `c` is non-zero, s1, else s2"""
    d = ADVOP.NZTRANS(c)
    return (d & s1) ^ (~d & s2)
Remember that for any bit or bit vector \(a\), \(a \& 1 = a \), \(a \& 0 = 0\), and \(a \wedge 0 = a\). In the above function, if bit vector \(c\) is non-zero, \(d\) will be \(\{1\}^n\) so \(s1 \& 1 = s1\), but \(NOT(d)\) will be \(\{0\}^n\) so \(s2 \& 0 = 0\). Finally, \(s1 \wedge 0 = s1\), so \(s1\) is returned. If \(c\) is all zeroes instead, \(s1\) will be zeroed out, and \(s2\) will be returned. This allows us to "choose" between \(s1\) and \(s2\) while encoding both results.
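The same selection trick is easy to play with using plain Python integers, where a conditional mask stands in for \(NZTRANS\) (the names here are illustrative):

def ifnz_int(c, s1, s2, n=4):
    mask = (1 << n) - 1 if c else 0  # NZTRANS: all ones iff c is non-zero
    return ((mask & s1) ^ (~mask & s2)) & ((1 << n) - 1)

assert ifnz_int(0b0010, 0b0101, 0b1000) == 0b0101  # c non-zero -> s1
assert ifnz_int(0b0000, 0b0101, 0b1000) == 0b1000  # c zero     -> s2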
This was pivotal in creating the ALU, and many of the challenges faced while coding the ALU came from keeping the code branchless.
Implementation
We're here! Believe it or not, this is going to be the section with the least content. We've already implemented all of the functionality we're going to need, so this is really just gluing it together! Let's discuss how the ALU implementation works.
The ALU takes an opcode that we'll call \(ctrl\) and two operands, \(a\) and \(b\). The opcode decides which operation the ALU should perform on \(a\) and \(b\). For convenience, I've defined an enum with integer values to make which opcode we're using more apparent. The code for the ALU simply loops over every supported operation and its opcode, compares \(ctrl\) with the opcode's value, and uses \(IFNZ\) to select the correct result for the opcode.
class ALU(BaseObject):
    def __init__(self, num_bits: int) -> None:
        self.n = num_bits

        OP_CODES = {
            ALUOP.ADD: ADVOP.ADD,
            ALUOP.SUB: ADVOP.SUB,
            ALUOP.AND: lambda a, b: a & b,
            ALUOP.OR: lambda a, b: a | b,
            ALUOP.XOR: lambda a, b: a ^ b,
            ALUOP.TWO_CMPT_A: lambda a, b: ADVOP.TWO_CMPT(a),
            ALUOP.TWO_CMPT_B: lambda a, b: ADVOP.TWO_CMPT(b),
            ALUOP.MUL: ADVOP.MUL,
            ALUOP.DIV: ADVOP.DIV,
            ALUOP.MIN: ADVOP.MIN,
            ALUOP.MAX: ADVOP.MAX,
            ALUOP.GT: ADVOP.GT,
            ALUOP.LT: ADVOP.LT,
            ALUOP.EQ: ADVOP.EQ,
        }

        def ALU(ctrl: BitVector[self.n]=None, a: BitVector[self.n]=None, b: BitVector[self.n]=None):
            c = a ^ a
            for op_code, func in OP_CODES.items():
                c = ADVOP.IFNZ(ctrl ^ op_code.value, c, func(a, b))
            return c

        self.alu = BitVector.from_func(ALU)
        self.func = ALU
Demonstration
Let's play around with our new toy! Here I define a 4-bit ALU, and show that the operations perform as expected.
└─$ alu = ALU(num_bits=4)
└─$ alu(ctrl=ALUOP.ADD, a=3, b=4).int()
7
└─$ alu(ctrl=ALUOP.SUB, a=5, b=6).int()
15
└─$ alu(ctrl=ALUOP.DIV, a=11, b=2).int()
5
└─$ alu(ctrl=ALUOP.MAX, a=5, b=8).int()
8
Recall that we can also partially evaluate the underlying bit vector bit-by-bit. Here I'm partially evaluating the ALU for concrete values of \(a\) and \(b\) while keeping \(ctrl\) symbolic. The resultant bit vector represents the result of every operation of the ALU when \(a=5\) and \(b=6\).
└─$ noop = alu(a=5, b=6)
└─$ noop(ctrl=ALUOP.ADD.value).int()
11
└─$ noop(ctrl=ALUOP.OR.value).int()
7
└─$ noop(ctrl=ALUOP.MIN.value).int()
5
We can also force relations between bits by using composition. Imagine we want \(b0\) to be equal to \((a0 \& b1) == b2\). We can simply evaluate \(b0\) at that symbolic value to create the relation. Paired with the solving system, this is extremely powerful. We can effectively query the symbolic representation, such as: "Give me all of the inputs for the ADD opcode where the result is \(7\) and \(b0 = ((a0 \& b1) == b2)\)."
# This injects the symbits into our local scope as variables
└─$ alu.alu.inject_locals(locals())
└─$ add_relation = alu(ctrl=ALUOP.ADD, b0=(a0 & b1) == b2)
└─$ sol = add_relation.solve(7)
└─$ sol.generate()[0]
{'a0': 1, 'a1': 1, 'a2': 0, 'a3': 0, 'b0': 0, 'b1': 0, 'b2': 1, 'b3': 0, 'ctrl0': 0, 'ctrl1': 0, 'ctrl2': 0, 'ctrl3': 0}
└─$ all([add_relation(**r).int() == 7 for r in sol.generate()])
True
Conclusion and closing remarks
Polynomials are a versatile and powerful tool in mathematics. Encoding problems in polynomials allows us to apply centuries of reasoning to them. Using some basic math, we were able to represent an entire ALU and solve it for specific outputs. This is a simple study implementation meant to showcase one way to solve bitwise functions, so the current implementation is quite slow. There are many areas to explore in optimizing both the compiler and solver.
Footnotes
1. https://en.wikipedia.org/wiki/Zhegalkin_polynomial
2. https://en.wikipedia.org/wiki/Polynomial
3. https://math.stackexchange.com/questions/2418323/how-many-functions-have-an-n-bit-input-and-m-bit-output
Credits
This article was written by Dani Cronce, a Senior Security Consultant here at Leviathan. You can get in touch with Dani on her LinkedIn profile.