## 10808 - Rational Resistors

Moderator: Board moderators

Abednego
A great helper
Posts: 281
Joined: Tue Sep 10, 2002 5:14 am
Location: Mountain View, CA, USA
Contact:

### 10808 - Rational Resistors

Did I scare everybody with the warning? It was only there to discourage timeout submissions during the contest. The problem is not that difficult. :-)
If only I had as much free time as I did in college...

Christian Schuster
Learning poster
Posts: 63
Joined: Thu Apr 04, 2002 2:00 am
Is there a way other than solving a linear system of equations?

nnahas
New poster
Posts: 40
Joined: Mon Nov 01, 2004 7:56 pm
Christian Schuster wrote: Is there a way other than solving a linear system of equations?
I don't think so. My background is in computer engineering and we took several electric circuits courses; this problem is a very classical one. The method to solve it is called the "Node Voltage Method" (you can find several web sites detailing it). I think one possible problem is the need for large integer arithmetic. I don't know if 64-bit integers are enough.
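To make the method concrete, here is a minimal sketch (not from the thread) of how the nodal equations G*v = I are usually assembled; `Resistor` and `build_conductance_matrix` are illustrative names, and doubles stand in for the exact rational arithmetic this problem actually needs:

```cpp
#include <cassert>
#include <vector>

// Node Voltage Method, assembly step: G[i][i] accumulates the
// conductances (1/R) of every resistor touching node i, and G[i][j]
// subtracts the conductance of each resistor between i and j.
struct Resistor { int a, b; double r; };

std::vector<std::vector<double>> build_conductance_matrix(
        int n, const std::vector<Resistor>& rs) {
    std::vector<std::vector<double>> G(n, std::vector<double>(n, 0.0));
    for (const Resistor& e : rs) {
        double g = 1.0 / e.r;   // conductance of this resistor
        G[e.a][e.a] += g;
        G[e.b][e.b] += g;
        G[e.a][e.b] -= g;
        G[e.b][e.a] -= g;
    }
    return G;
}
```

A side effect worth noting: parallel resistors between the same pair of nodes simply add their conductances during assembly, so they need no special handling.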
All I can say is that 110 bits ought to be enough for the largest numerator and denominator. Here is why: by Cramer's rule, every solution to n equations in n unknowns is a ratio of certain determinants (you may look up Cramer's rule to see which exactly). Using the Leibniz formula for the determinant http://en.wikipedia.org/wiki/Determinant we can see that each determinant will be a sum of products of n entries of the matrix. Since there will be n! products in the sum, the total number of bits needed is bounded by lg(n!) + n*b, where b is the number of bits needed to represent a single entry, so b will be 4. Since 4*16 + lg(16!) < 110, we won't need more than 110 bits.
There is still one gap in this proof, namely the possibility that we need more than 110 bits at some intermediate stage of the elimination, but I really don't think so. The proof might be tedious, however.
I don't know if tighter bounds are possible. It is quite possible that 64-bit integers are OK if you take care to simplify the fraction after each operation.
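The bound claimed above is easy to verify numerically; this small helper (illustrative, not from the thread) evaluates lg(n!) + n*b:

```cpp
#include <cassert>
#include <cmath>

// Bits needed for a determinant by the Leibniz-formula argument above:
// at most n! products of n entries of b bits each, so lg(n!) + n*b.
int bits_needed(int n, int b) {
    double lg_fact = 0.0;                     // accumulates lg(n!)
    for (int i = 2; i <= n; ++i) lg_fact += std::log2((double)i);
    return (int)std::ceil(lg_fact + n * b);
}
```

With n = 16 and b = 4 this gives lg(16!) ≈ 44.25, so 109 bits in total, comfortably under the 110 claimed.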

Christian Schuster
Learning poster
Posts: 63
Joined: Thu Apr 04, 2002 2:00 am
I just got AC after a few (7) tries using that Node Voltage Method!

nnahas
New poster
Posts: 40
Joined: Mon Nov 01, 2004 7:56 pm
Christian Schuster wrote: I just got AC after a few (7) tries using that Node Voltage Method!
Did you implement large integer operations (+, *, %, -) or was long long enough?

Christian Schuster
Learning poster
Posts: 63
Joined: Thu Apr 04, 2002 2:00 am
I wrote a class and some operators which automatically simplified the fraction after each operation - and I used long long for numerator and denominator.
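A minimal sketch of such a class (assuming C++17 for `std::gcd`; the names are illustrative, not Christian's actual code):

```cpp
#include <cassert>
#include <numeric>   // std::gcd (C++17)

// Rational number over long long, reduced by gcd after every operation
// so the values stay as small as possible, as described above.
struct Frac {
    long long num, den;   // invariant: den > 0, gcd(|num|, den) == 1
    Frac(long long n = 0, long long d = 1) : num(n), den(d) { reduce(); }
    void reduce() {
        if (den < 0) { num = -num; den = -den; }
        long long g = std::gcd(num < 0 ? -num : num, den);
        if (g > 1) { num /= g; den /= g; }
    }
    Frac operator+(const Frac& o) const { return Frac(num * o.den + o.num * den, den * o.den); }
    Frac operator-(const Frac& o) const { return Frac(num * o.den - o.num * den, den * o.den); }
    Frac operator*(const Frac& o) const { return Frac(num * o.num, den * o.den); }
    Frac operator/(const Frac& o) const { return Frac(num * o.den, den * o.num); }
};
```

For example, the resistance of a 2-ohm and a 3-ohm resistor in parallel comes out as 1 / (1/2 + 1/3) = 6/5 exactly.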

nnahas
New poster
Posts: 40
Joined: Mon Nov 01, 2004 7:56 pm
Thank you!

Destination Goa
Experienced poster
Posts: 123
Joined: Thu Feb 10, 2005 4:46 am
It is possible to perform Gauss only once and get the right parts as linear forms of c1..cN. Then each query takes just O(N) to find the potentials x1..xN.

P.S.: My int64-based solution just got WA. I will try a few more things, then I will write long arithmetic. In Gauss you can either multiply both rows by the other's leading coefficient and then perform a non-scaled subtraction, or perform a scaled subtraction with a fractional coefficient right away. Whichever you choose may affect whether int64 overflows on the test data. Generally this problem can't be solved using int64.
To be the best you must become the best!

Christian Schuster
Learning poster
Posts: 63
Joined: Thu Apr 04, 2002 2:00 am
Yes, int64 may be enough for some algorithms, for others it may not. Luckily, my solution belongs to the ones for which int64 suffices.

But I don't understand how it is possible to perform Gauss only once, because the currents depend on the two queried nodes. That is, you have to modify the rows belonging to Kirchhoff's current law, and the two specifying the voltage source (or maybe current source).

The only constant part of the matrix are the rows relating the voltage drop along a resistor to the current flowing through it.

I arranged the rows to give the result after making the matrix triangular, but that is currently the only "optimization" I could think of.

Destination Goa
Experienced poster
Posts: 123
Joined: Thu Feb 10, 2005 4:46 am
By performing Gauss only once I mean performing the triangulation part of it (which is O(N^3)). Consider that the Aij are known and the Ci are vectors of N elements.

``````A11 A12 ... A1n | 1 0 ... 0
A21 A22 ... A2n | 0 1 ... 0
... ... ... ... | . . ... .
An1 An2 ... Ann | 0 0 ... 1
``````
Where Aij is a fraction like before, but the right parts are vectors, initially laid out as an identity matrix. Whenever we add/subtract/multiply/divide rows to triangulate the left part (A), we perform the corresponding operation on the right vectors. We will never have to divide c[i] by c[j], so the entire triangulation process will keep C1...Cn as a linear form with fractional coefficients.

Now, for each query you know that Ci is 0 for all i except CA and CB (namely +1 and -1, if we treat outgoing current as positive).

Now we can substitute these Ci and perform the second part of Gauss (evaluating X) in just O(N^2), avoiding the O(N^3) triangulation, which would yield the same sub-matrix of A on the left anyway. The right part will be different, but it will never leave the linear form, so this form can be precomputed only once.
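As an illustration of this idea (doubles standing in for the fractions, function name hypothetical): one O(N^3) Gauss-Jordan pass over A augmented with the identity yields A^-1, i.e. each unknown as a linear form in c1..cN, after which any query's right-hand side can be applied without re-triangulating:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Gauss-Jordan elimination on [A | I]. When A becomes the identity, the
// right block holds A^-1; its rows are exactly the linear forms in
// c1..cN discussed above.
std::vector<std::vector<double>> invert(std::vector<std::vector<double>> a) {
    int n = a.size();
    std::vector<std::vector<double>> inv(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i) inv[i][i] = 1.0;  // identity on the right
    for (int col = 0; col < n; ++col) {
        int piv = col;                            // partial pivoting
        for (int r = col + 1; r < n; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[piv][col])) piv = r;
        std::swap(a[col], a[piv]); std::swap(inv[col], inv[piv]);
        double d = a[col][col];
        for (int j = 0; j < n; ++j) { a[col][j] /= d; inv[col][j] /= d; }
        for (int r = 0; r < n; ++r) if (r != col) {
            double f = a[r][col];
            for (int j = 0; j < n; ++j) {
                a[r][j]   -= f * a[col][j];
                inv[r][j] -= f * inv[col][j];
            }
        }
    }
    return inv;
}
```

With A^-1 in hand, x[i] for a query is just a dot product of row i with the query's C vector, and only two entries of C are non-zero.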

After all, this is not a very important optimization unless Q becomes something like 100 or 1000.
To be the best you must become the best!

Destination Goa
Experienced poster
Posts: 123
Joined: Thu Feb 10, 2005 4:46 am
X can be precomputed as well. Think of it as solving the following:

A*X = C, where X and C are also matrices over arbitrary coefficients c1..cN. Then we can find X as (A^-1)*C. Then, when we read a new query, we can evaluate X[A] and X[B] in O(1), because only two items of c1...cN will be non-zero.

To spot the infinite case we would have to check that all rows of the form 0 = (some linear form of C) have zero right parts, which costs O(N), but this can be precomputed in another O(N^3) by applying Floyd-Warshall to the original resistor map. In case of disconnected nodes, we output infinity. In case of connected nodes A and B, there will always be at least one solution.
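A sketch of that precomputation (illustrative, not the poster's code): a boolean Floyd-Warshall, i.e. the transitive closure of the resistor map, answers "same component?" for any query in O(1):

```cpp
#include <cassert>
#include <vector>

// Transitive closure via Floyd-Warshall: conn[i][j] starts as "there is
// a resistor between i and j" and ends as "i and j are connected".
// Disconnected query nodes mean infinite resistance.
std::vector<std::vector<bool>> closure(std::vector<std::vector<bool>> conn) {
    int n = conn.size();
    for (int i = 0; i < n; ++i) conn[i][i] = true;
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (conn[i][k] && conn[k][j]) conn[i][j] = true;
    return conn;
}
```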
To be the best you must become the best!

Destination Goa
Experienced poster
Posts: 123
Joined: Thu Feb 10, 2005 4:46 am
Ok. I've got AC with big nums. Interesting facts:

1. Scaling both rows and then subtracting didn't work even with 4096 digits; with more it just timed out. That's because values grow to about C*10^3 digits this way. After all, it's a useless method if we don't care about precision (and with the rational approach we don't).

2. The usual method of dividing by the pivot coefficient worked fine with 1024 digits, maybe fewer. If you got AC with int64, even 20 digits should be enough, but I don't know how you triangulated the matrix.

The O(Q*N^3) solution ran in 1.266 sec (it's big int, remember).

The O(N^3+Q) solution ran in 0.762 sec.

The last one precomputed the matrix of X over C. Since there are only two non-zero elements in vector C and we need only two elements from vector X (X = (A^-1)*C), we can find x[A] - x[B] in O(1). To detect infinities, also in O(1), Floyd-Warshall was run on the resistor map: if the nodes are disconnected, R will be infinite; otherwise there will be a finite answer.

Then I tried to process each connected component individually to get N1^3+N2^3+...+Nk^3 instead of (N1+N2+...+Nk)^3 in the first step. Well, this also gave 0.762 sec, so I think there are very few sparse network tests.
To be the best you must become the best!

Christian Schuster
Learning poster
Posts: 63
Joined: Thu Apr 04, 2002 2:00 am
I just saw that it is possible to do the calculation with a single (m+n)x(m+n) matrix when using the RHS as you described. I did it with an (m+n+2)x(m+n+1) matrix, adding two rows for a "voltage source" setting V(P)=0 and V(Q)=1, and a column for the current through the voltage source. The RHS is then a vector with 1 as its last element (for V(Q)=1) and zeroes elsewhere. Forward elimination gives an upper triangular matrix and leaves the current through the voltage source in the form a*I = c, so there is no need for the back-substitution step.

I also thought of implementing Floyd-Warshall to find connected components, which makes the matrix regular, thus invertible. Then the solution should be O((m+n)^2), if everything fits into a basic datatype.

Maybe I'll try to implement that one, but at the moment it seems a little brain-bending to me.

little joey
Guru
Posts: 1080
Joined: Thu Dec 19, 2002 7:37 pm
Yes, that's the way I did it, and it's pretty fast.
There is even more room for improvement (that I didn't implement) by:
- combining parallel resistors;
- combining serial resistors;
before you set up the matrix.
It depends on the topology of the actual cases how much time this saves, and it might even cost time if too few eliminations can be made.
I'm a little too lazy to try and implement it...
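For reference, the two reduction rules mentioned here, sketched with doubles (illustrative helpers, not actual submission code):

```cpp
#include <cassert>

// Two resistors in parallel combine as R1*R2/(R1+R2), since their
// conductances 1/R1 and 1/R2 add; two in series simply add.
double parallel(double r1, double r2) { return (r1 * r2) / (r1 + r2); }
double series(double r1, double r2)   { return r1 + r2; }
```

In the exact solutions these would of course be done over fractions, shrinking the node count before the O(N^3) elimination.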

Destination Goa
Experienced poster
Posts: 123
Joined: Thu Feb 10, 2005 4:46 am
What's the point of an O((m+n)^2) = O(n^4) solution if the basic approach works in O(n^3)?

Joey,
If you set x[i] as the potential at node i, then parallel resistors are eliminated automatically while building matrix A (otherwise you'll get WA).
To be the best you must become the best!