The complexity class of #P problems
Recently, I had an assignment in one of my courses, Beyond NP-Completeness, to present the complexity class of #P (Sharp-P) problems based on L.G. Valiant’s paper on the complexity of computing the permanent of a matrix.
This post is an attempt to explain the paper, as I found very few resources on the internet where this topic is discussed in detail; one of the few sightings is a lecture by Professor Erik Demaine under MIT OpenCourseWare, which can be found on YouTube.
Before we begin, it is strongly recommended to familiarize yourself with the following:
 The classes P, NP, NP-Hard, and NP-Complete
 Polynomial-time reductions
 The class FP
 The notion of an Oracle
So let’s begin with the permanent, which is defined for an n x n matrix A as:

perm(A) = Σ_{σ ∈ S_n} a_{1,σ(1)} · a_{2,σ(2)} ⋯ a_{n,σ(n)}

Before proceeding, let’s understand what this means, and what we want to compute. σ is a permutation of the numbers {1, 2, 3, …, n}. For example, for n = 5, if σ = (2, 5, 3, 4, 1), then σ(1) = 2, σ(2) = 5, and so on. The summation, therefore, is over all σ ∈ S_n, which means we are summing over all possible permutations, a total of n! of them (factorial of n). So it is evident that in the product inside the summation we take one element from each row, and since the values σ(i) are all distinct, the n elements picked come from n distinct columns. In other words, each term picks exactly one element from each row and each column of the matrix.
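The definition above translates directly into code. Here is a minimal brute-force sketch (the function name is mine) that sums the row-column products over all n! permutations:

```python
from itertools import permutations

def permanent(A):
    """perm(A) = sum over all permutations sigma of prod_i A[i][sigma(i)]."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]  # one entry per row, columns all distinct
        total += prod
    return total

A = [[1, 2],
     [3, 4]]
print(permanent(A))  # 1*4 + 2*3 = 10
```

This runs in O(n · n!) time, which is fine for illustration but, as discussed below, no essentially better algorithm is known.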
A closer look at the permanent tells us that if we change all the negative signs in the expression for the determinant of a matrix to positive signs, it will indeed become the permanent. To show this with an example, consider the matrix

A = [ a  b ]
    [ c  d ]

The determinant of A is ad − bc, while the permanent is ad + bc, which is clearly obtainable by changing all negative signs in the determinant’s expression to positive. Surprisingly, though there exist efficient algorithms to compute the determinant of a matrix, no algorithm that takes better than exponential time is known for computing the permanent.
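To make the sign relationship concrete, here is a short sketch (function names are mine) that computes both quantities from the same n! expansion, differing only in whether each term is weighted by the permutation's sign:

```python
from itertools import permutations

def sign(sigma):
    # Parity of a permutation via its inversion count.
    n = len(sigma)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def det_and_perm(A):
    n = len(A)
    det = perm = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        det += sign(sigma) * prod   # signed sum  -> determinant
        perm += prod                # unsigned sum -> permanent
    return det, perm

A = [[1, 2],
     [3, 4]]
print(det_and_perm(A))  # (1*4 - 2*3, 1*4 + 2*3) = (-2, 10)
```

The only difference between the two sums is the sign factor, yet the determinant is polynomial-time computable (by Gaussian elimination) while the permanent is not known to be.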
Valiant comments on the complexity of the problem of finding the permanent of a (0, 1)-matrix, for which he defines the class #P. To put it simply, #P problems are the counting problems associated with the decision problems in class NP. It is a class of function problems, not decision problems (where the answer is a simple yes/no). An NP problem of the form “Does there exist a solution that satisfies X?” usually corresponds to the #P problem “How many solutions exist which satisfy X?”. Here is an example of a #P problem: #SAT. Given a boolean formula φ, find the number of assignments that satisfy φ. This is the counting version of the famous SAT problem, which is known to be NP-complete.
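A brute-force #SAT counter makes the "counting version" idea tangible. This sketch (names and the DIMACS-style clause encoding are my choices) tries all 2^n assignments:

```python
from itertools import product

def count_sat(clauses, n_vars):
    """#SAT by brute force. Clauses are lists of nonzero ints,
    DIMACS-style: +i means x_i, -i means NOT x_i."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        # A literal l is satisfied when the variable's value matches its sign.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 OR x2) AND (NOT x1 OR x2): exactly the assignments with x2 = True
print(count_sat([[1, 2], [-1, 2]], 2))  # 2
```

The decision version (is the count nonzero?) is NP-complete; the counting version is the canonical #P-complete problem.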
Defining #P-completeness
To define completeness in this class of problems, we need to bring in Oracle Turing Machines and the class FP. Oracle machines are those which have access to an oracle that can magically solve the decision problem for some language L. A machine with oracle access to L can make queries of the form “Is x ∈ L?” in one computational step. We can generalize this to non-boolean functions by saying that a T.M. M has oracle access to a function f if it is given access to the language L_f = {(x, i) : the i-th bit of f(x) is 1}. For every oracle O, we denote by P^O the set of languages that can be decided by a polynomial-time DTM (Deterministic Turing Machine) with oracle access to O. As an example, consider UNSAT, the language of unsatisfiable boolean formulae. UNSAT ∈ P^SAT, because if we are given oracle access to the SAT language, then we can solve an instance of UNSAT in polynomial time by asking “Is φ ∈ SAT?” and negating the answer. Now we are ready to define #P-completeness. A function f is said to be #P-complete if f ∈ #P and every function in #P is in FP^f (a Cook reduction).
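The UNSAT ∈ P^SAT example can be sketched in a few lines. Since we have no magic box, the oracle here is simulated by brute force (exponential time), but the point is the shape of the reduction: one oracle query, then a negation. All names and the clause encoding are my own:

```python
from itertools import product

def sat_oracle(clauses, n_vars):
    # Stand-in for the oracle: answers "is phi in SAT?" in one "step".
    # Clauses use DIMACS-style ints: +i means x_i, -i means NOT x_i.
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses)
               for bits in product([False, True], repeat=n_vars))

def in_unsat(clauses, n_vars):
    # UNSAT with a SAT oracle: ask "Is phi in SAT?" and negate the answer.
    return not sat_oracle(clauses, n_vars)

print(in_unsat([[1], [-1]], 1))   # x1 AND NOT x1 -> True (unsatisfiable)
print(in_unsat([[1, 2]], 2))      # satisfiable   -> False
```

The machine around the oracle runs in polynomial time; all the hardness is hidden inside the single query.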
Back to the problem
Now let’s get back to the permanent. We take a special case of the permanent problem where we add the constraint that the input matrix is a (0, 1) matrix, that is, all the entries of the given matrix are either 0 or 1. Let us look at this problem of finding the permanent of a binary matrix from a different perspective. Imagine that the given matrix A is the adjacency matrix of a bipartite graph with parts U = {u_1, …, u_n} and V = {v_1, …, v_n}, where there is an edge (u_i, v_j) if and only if A_{ij} = 1.
Now if we look at the term inside the summation, that is, the product Π_i A_{i,σ(i)} for a fixed σ, we can try to imagine the value of this term as a candidate perfect matching in the bipartite graph represented by A. Why? Because we know that this term takes one element from each row and each column, so all vertices are covered by the term, and the term is 0 if any of the elements picked is 0. But in our adjacency matrix, a 0 in row i and column j means that there is no edge between u_i and v_j, so this term evaluates to 1 if and only if the selected entries all correspond to edges, that is, the pairs (u_i, v_σ(i)) form a perfect matching of the given bipartite graph. So clearly, the whole sum over all σ represents the total number of perfect matchings in the bipartite graph represented by A. As finding the number of perfect matchings in a bipartite graph is in #P, clearly, finding the permanent of a binary matrix is in #P.
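This correspondence is easy to verify directly. The sketch below (names are mine) computes the permanent by its definition and, separately, counts perfect matchings by checking which permutations use only existing edges; the two always agree for a 0-1 matrix:

```python
from itertools import permutations

def permanent(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

def count_perfect_matchings(A):
    """A perfect matching pairs each u_i with a distinct v_j where A[i][j] = 1,
    i.e. a permutation sigma with A[i][sigma(i)] = 1 for all i."""
    n = len(A)
    return sum(all(A[i][sigma[i]] == 1 for i in range(n))
               for sigma in permutations(range(n)))

# Bipartite graph: u1-{v1,v2}, u2-{v2,v3}, u3-{v1,v3}
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(permanent(A), count_perfect_matchings(A))  # 2 2
```

The two perfect matchings here are {u1v1, u2v2, u3v3} and {u1v2, u2v3, u3v1}.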
Now we shall move on to prove Valiant’s theorem, which says that finding the permanent of a binary matrix is #P-complete. We already proved it is in #P, so all that remains is to prove that it is #P-hard, that is, all #P problems can be reduced to this problem in the way explained earlier in this post (where completeness was defined for #P problems). For this, we look at the problem of finding the permanent as a different graph problem.
Consider A = adjacency matrix of a weighted and directed graph with n nodes (We are talking about a general matrix now, not a binary matrix). We define two things now:
 Cycle Cover of a graph G: A set of cycles (subgraphs of G in which all vertices have in-degree = 1 and out-degree = 1) which together contain all vertices of G
 Weight of a cycle cover: Product of the weights of the edges involved in the cover
Now, with a good observation, we can conclude that

perm(A) = Σ over all cycle covers C of w(C)

where w(C) is the weight of the cycle cover C. We sum the weights of all possible cycle covers of the graph and it turns out to be equal to the permanent of the adjacency matrix. The reason is that a permutation σ is exactly a choice of one outgoing edge i → σ(i) for every vertex, which is exactly a cycle cover, and the product Π_i a_{i,σ(i)} is its weight. If this is hard to visualize, feel free to work out an example having 3 or 4 nodes to let that sink in.
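The permutation/cycle-cover correspondence can be checked numerically. This sketch (names are mine) treats each permutation as a cycle cover, since edge i → σ(i) is chosen for every vertex, and sums their weights:

```python
from itertools import permutations

def cycle_cover_weights(A):
    """Each permutation sigma IS a cycle cover (edge i -> sigma(i) for all i);
    its weight is the product of the edge weights it uses (0 if an edge
    is missing, so missing-edge 'covers' contribute nothing)."""
    n = len(A)
    weights = []
    for sigma in permutations(range(n)):
        w = 1
        for i in range(n):
            w *= A[i][sigma[i]]
        weights.append(w)
    return weights

# Weighted directed graph on 3 nodes (self-loops have weight 1).
A = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]
print(sum(cycle_cover_weights(A)))  # 25: the identity cover (1) plus the
                                    # 3-cycle 1->2->3->1 of weight 2*3*4 = 24
```

Summing the weights over all cycle covers gives exactly the permanent of the adjacency matrix.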
Proceeding with the proof, we will attempt to reduce an instance of the 3SAT problem to an instance of the cycle cover problem. The methodology and examples are directly taken from the original paper. We begin with a boolean formula given to us in 3CNF form: F = C_1 ∧ C_2 ∧ … ∧ C_m, a conjunction of m clauses, where each C_i = (y_{i1} ∨ y_{i2} ∨ y_{i3}) is a disjunction of 3 literals, with each y_{ij} drawn from {x_1, ¬x_1, …, x_n, ¬x_n}, the set of variables and their negations.
We construct graph G by superposing the following structures:
 A Track for each variable
 An Interchange for each clause
 For each literal y occurring in a clause, that is, y = x_i or y = ¬x_i appearing in some clause C_j, a Junction at which the interchange of C_j and the track of x_i meet.
 The interchanges also have internal junctions, which are exactly the same as above
Example: Let’s take some formula F in which the literal x occurs in one clause and ¬x occurs in another.
For this example, the following are the structures:

[Figure: the Track structure (for a variable) and the Interchange structure (for a clause)]
The small shaded regions are the junctions, which are themselves a network of nodes and edges represented by the adjacency matrix

X = [ 0  1 -1 -1 ]
    [ 1 -1  1  1 ]
    [ 0  1  1  2 ]
    [ 0  1  3  0 ]
Why this matrix has exactly these values will become clear as we proceed. This is a very crucial part. Let’s note some important things:
 Each junction (represented by X) has external connections only via nodes 1 and 4
 Taking X(a; b) as the matrix left over after deleting rows a and columns b, we note the following properties of the matrix X:
 Perm(X) = 0
 Perm(X(1; 1)) = 0
 Perm(X(4; 4)) = 0
 Perm(X(1, 4; 1, 4)) = 0
 Perm(X(1; 4)) = Perm(X(4; 1)) = 4
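All of these sub-permanent properties can be checked by brute force. The 4×4 matrix below is the junction matrix from Valiant's paper; the helper names (`permanent`, `minor`) are mine:

```python
from itertools import permutations

def permanent(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= A[i][sigma[i]]
        total += p
    return total

def minor(A, rows, cols):
    """Matrix left over after deleting the given row and column indices
    (0-based, so node 1 of the junction is index 0)."""
    n = len(A)
    return [[A[i][j] for j in range(n) if j not in cols]
            for i in range(n) if i not in rows]

X = [[0, 1, -1, -1],
     [1, -1, 1, 1],
     [0, 1, 1, 2],
     [0, 1, 3, 0]]

print(permanent(X))                         # 0
print(permanent(minor(X, {0}, {0})))        # Perm(X(1;1))     = 0
print(permanent(minor(X, {3}, {3})))        # Perm(X(4;4))     = 0
print(permanent(minor(X, {0, 3}, {0, 3})))  # Perm(X(1,4;1,4)) = 0
print(permanent(minor(X, {0}, {3})))        # Perm(X(1;4))     = 4
print(permanent(minor(X, {3}, {0})))        # Perm(X(4;1))     = 4
```

These six identities are exactly what makes the case analysis of bad routes below work.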
Using these properties, we can draw some very useful insights. Let’s look at routes in the graph, where a route is a cycle cover. If we consider all the routes which have the same set of edges outside of the junctions, then we can call a route bad if:

It ignores a junction
In this case, the ignored junction still has to be covered by the cycle cover, so its contribution appears as a separate factor in the term. But Perm(X) = 0, so this factor makes the whole term 0, and thus such routes contribute nothing to the total.

It enters and leaves a junction at the same end
This case is bad because Perm(X(4; 4)) = Perm(X(1; 1)) = 0, so if nodes 1, 2, 3 or 2, 3, 4 remain (only one of the ends is covered), then these nodes must again be covered by separate cycles inside the junction, whose factor makes the whole term 0. Thus there is no contribution to the total.

It enters at node 1 of a junction, jumps to node 4 and then leaves out
This case leaves out nodes 2 and 3 of the junction, so they have to be covered by a separate cycle, but Perm(X(1, 4; 1, 4)) = 0, so this will again make the term 0 and contribute nothing to the total.
So the only choice we have, if we want a route to count towards the value of the permanent, is to enter at either node 1 or node 4 and leave at the opposite end after covering nodes 2 and 3. If we go by this only choice, the junction contributes a factor of 4, as Perm(X(1; 4)) = Perm(X(4; 1)) = 4.
In any track of any good route, there are two cases as seen in the structure of the track:
 All junctions on the left side are picked by the track and those on the right side are picked by interchanges
 The vice versa of the above
These two cases correspond to whether x = true or x = false.
Now observe the interchanges. Each interchange has 5 junctions: 3 of them are connected to the track of a variable that appears in that particular clause, and 2 are internal junctions, which have the same structure as a normal junction. A careful observation tells us that the whole of an interchange (all five junctions) cannot be picked up by a route; in fact, the interchange can never pick up all 3 of the external junctions by itself. So at least one of the 3 junctions connected to tracks must be picked up by a track, a constraint that is synonymous with requiring at least one literal in the clause to be true in order to make the whole clause true.
Now the good routes (cycle covers) correspond exactly to the satisfying assignments of the boolean formula of the 3SAT instance we had taken: each satisfying assignment contributes a factor of 4 per junction, so the permanent equals 4^J times the number of satisfying assignments, where J is the total number of junctions, and the count can be read off by dividing. Since #SAT is known to be #P-complete, the Permanent problem is now proved to be #P-hard.
Let’s get on to finding the permanent of a 0-1 matrix now. It is a great thing to note that though the problem of finding a perfect matching in a bipartite graph is in P, counting the total number of perfect matchings is #P-complete. This is one of those examples where it becomes clear that easy decision problems can have very hard counting versions. The rest of the post is about proving that the 0-1 permanent is #P-complete.
To prove this, we will reduce the permanent problem to the 0-1 permanent problem. First, we need to make the weights of all the edges nonnegative. Doing this is easy with modular arithmetic: we can compute perm(A) mod p for enough primes p that their product exceeds the largest value |perm(A)| can take, as we do not need to consider primes beyond that point. If μ is the maximum absolute value in the matrix, then the permanent is a sum of n! terms, each a product of n entries, so |perm(A)| ≤ n! · μ^n. That’s the maximum we will ever have.
Since the product of the primes only needs to exceed n! · μ^n, which has polynomially many bits, the number of primes needed is polynomial in the input size, and so is this reduction. We can then reconstruct the permanent from the residues using the Chinese Remainder Theorem (CRT).
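The CRT reconstruction step can be sketched as follows (function names are mine; `pow(a, -1, m)` computes a modular inverse in Python 3.8+). For simplicity the example recovers a known permanent value from its residues:

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem: recover x mod prod(moduli) from the
    residues x mod m_i, for pairwise coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi modulo m
    return x % M

# Suppose perm(A) = 123 but we only computed it modulo small primes:
primes = [5, 7, 11]                # product 385 > 123, so 123 is recoverable
residues = [123 % p for p in primes]
print(crt(residues, primes))       # 123
```

Since the permanent of a matrix with negative entries can itself be negative, in practice one maps the CRT result from [0, M) to the representative in (−M/2, M/2].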
So now we have a matrix where all the entries are nonnegative. We have to reduce this to a form where all entries are either 0 or 1, to prove that the permanent problem is reducible to the 0-1 permanent problem. We can do so in two ways, by transforming the original graph into an equivalent graph:
 As mentioned in the original paper (Fig. 2), by forming self-loops proportional to the weight of the edge.
 First, convert all edges into widgets such that all edges in the resulting graph have weights that are powers of 2. Then all power-of-2 edges can easily be transformed into widgets with only 0-1 edges, all the while proving that the initial and final graphs are equivalent.
The first method is mentioned in the paper, so I will be explaining the second perspective here. Images are taken from Wikipedia. First, we transform the graph G into one where all edge weights are powers of 2, the graph G'. To do this, we take an edge with weight w and split it into a widget as shown in the image below. We use the fact that any number can be expressed as a sum of powers of 2.
To prove the correspondence, take two cases for a cycle cover C of the graph G:
 If edge uv was not in C, then to cover all the widget’s vertices in the transformed graph, we must use all the self-loops, so the total contribution is a factor of 1 in the product, and hence we have no change in the output. This is as good as not considering the edge uv in the original graph
 If edge uv was present in C, then in every corresponding cycle cover in G', there must be a path from u to v through the widget. There is one such path for each power of 2 in the binary expansion of w, and their weights sum up to w
Hence the graphs G and G' are equivalent in terms of the permanent problem. Now we are left with a transformation such that all edges become binary, with weights 0 or 1. This is easy in the following way (refer to the image below).
This is the transformation from G' to G''. Again we have two cases to show the equivalence. Let C be a cycle cover in G'; then:
 If edge uv was not a part of C, the only way to form a cycle cover (taking all vertices) is to take all the self-loops, again contributing a factor of 1
 If edge uv (of weight 2^k) was present in C, then in any corresponding cycle cover of G'' there must be a path from u to v. At each step from u to v, we have 2 choices, and such a choice has to be made k times, so we have a total of 2^k different possible paths from u to v, each of weight 1. So the widget contributes 2^k overall, the same as the weight of the uv edge in G'
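The power-of-2 widget can be verified end to end on a tiny example. Below is a sketch of my own construction of such a widget (one plausible realization of the "two choices per step" idea): a weight-4 edge is replaced by a chain of two "doublers", each offering two parallel unit paths, with self-loops so unused branch nodes can still be covered. The permanents before and after agree:

```python
from itertools import permutations

def permanent(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= A[i][sigma[i]]
        total += p
    return total

# Original graph G': edge u->v of weight 4 (= 2^2), edge v->u of weight 1.
A = [[0, 4],
     [1, 0]]

# 0-1 graph G'': the weight-4 edge becomes two chained doublers.
# Nodes: u=0, p1=1, q1=2, m=3, p2=4, q2=5, v=6. Each doubler offers two
# parallel unit paths (via p or q); the unused branch node is covered
# by its self-loop, so each doubler multiplies the count by 2.
n = 7
B = [[0] * n for _ in range(n)]
edges = [(0, 1), (0, 2),          # u  -> p1 / q1
         (1, 3), (2, 3),          # p1 / q1 -> m
         (3, 4), (3, 5),          # m  -> p2 / q2
         (4, 6), (5, 6),          # p2 / q2 -> v
         (6, 0),                  # v  -> u  (the weight-1 edge)
         (1, 1), (2, 2), (4, 4), (5, 5)]  # self-loops on branch nodes
for i, j in edges:
    B[i][j] = 1

print(permanent(A), permanent(B))  # 4 4
```

Every cycle cover of the 0-1 graph picks one branch per doubler (2 × 2 = 4 unit-weight covers), matching the single weight-4 cover of the original graph.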
Thus the problem of finding the permanent of a 0-1 matrix, and equivalently the problem of finding the number of perfect matchings in a bipartite graph, is #P-complete.
I have talked about more #P-complete problems and their proofs in this post.