10606 - Opening Doors
I used two different algorithms to get the answer:
1. Find sqrt(n) and then multiply it by itself.
2. Take two numbers a and b such that a * a < n < b * b and do a binary search in [a, b] for an integer m such that m * m <= n and (m + 1) * (m + 1) > n.
I still get TLE.
Is there a better method?
abi
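For reference, the second approach can be sketched with Java's BigInteger (the class and method names here are my own, not from any post; the starting bounds come straight from the bit length, so no floating point is needed):

```java
import java.math.BigInteger;

public class BinarySearchSqrt {
    // Binary search for m with m*m <= n < (m+1)*(m+1), i.e. m = floor(sqrt(n)).
    static BigInteger isqrt(BigInteger n) {
        BigInteger lo = BigInteger.ZERO;
        // hi = 2^(bitLength/2 + 1) guarantees hi*hi > n
        BigInteger hi = BigInteger.ONE.shiftLeft(n.bitLength() / 2 + 1);
        while (hi.subtract(lo).compareTo(BigInteger.ONE) > 0) {
            BigInteger mid = lo.add(hi).shiftRight(1);
            if (mid.multiply(mid).compareTo(n) <= 0) lo = mid; // mid^2 <= n: keep
            else hi = mid;                                     // mid^2 > n: shrink
        }
        return lo;
    }

    public static void main(String[] args) {
        BigInteger n = new BigInteger("152415787532388367501905199875019052100");
        BigInteger m = isqrt(n);
        System.out.println(m);             // the root
        System.out.println(m.multiply(m)); // the answer asked for: m^2
    }
}
```

Each iteration squares a bignum, which is what makes the inner multiplication the expensive part discussed below.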
I don't think there is a better method than squaring the floor of the square root (though I would love to see someone prove me wrong on this).
The only problem is that your sqrt routine has to be fast enough. The optimisation that did the trick for me, giving a vast speed increase, was to begin by approximating the square root with a double.
A very rough estimate is that you need to be able to solve around 100-150 maximum-size test cases per second (on the judge's computer).
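To illustrate the double-seeding idea (a sketch of my own, not Per's actual code): approximate the root from the leading ~52 bits with hardware sqrt, then let exact Newton steps finish the job. The +2 on the seed is a safety margin so the iteration starts above the true root, which guarantees monotone convergence:

```java
import java.math.BigInteger;

public class SeededSqrt {
    // floor(sqrt(n)) via Newton's iteration, seeded from a double estimate.
    static BigInteger isqrt(BigInteger n) {
        if (n.signum() == 0) return BigInteger.ZERO;
        // Take the top ~52 bits; an even shift keeps sqrt a clean half-shift.
        int shift = Math.max(0, n.bitLength() - 52);
        if ((shift & 1) == 1) shift++;
        long top = n.shiftRight(shift).longValue();
        BigInteger x = BigInteger.valueOf((long) Math.sqrt((double) top) + 2)
                                 .shiftLeft(shift / 2); // x > sqrt(n)
        while (true) {
            BigInteger y = x.add(n.divide(x)).shiftRight(1); // Newton step
            if (y.compareTo(x) >= 0) return x; // stopped decreasing: done
            x = y;
        }
    }

    public static void main(String[] args) {
        System.out.println(isqrt(new BigInteger("152399025"))); // 12345
    }
}
```

Because the seed is already accurate to roughly 26 bits, only a handful of exact iterations remain, each one roughly doubling the number of correct digits.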

 Learning poster
 Posts: 53
 Joined: Sat Jan 24, 2004 9:22 pm
 Location: Tomsk, Siberia, RUSSIA
 Contact:
Well, there is a different method. I am not trying to say that it is better, but I got AC using it.
If the square is X digits long, then the root has to be (X+1)/2 digits long or less. What I did was: I went from left to right, finding the digit in every position of the root.
Say B is the next digit you are trying to find, and A is the current approximation of the root:
(A+B)^2 = A^2 + 2AB + B^2
In this formula, A^2 we already know from previous iterations, 2AB is a "short by long" multiplication (with zeros appended on the right), and B^2 is the square of a single digit.
To speed things up (I got TLE before getting AC in 4.2s):
1. Use binary search to find digit B.
2. Don't run through the whole number when you don't need it in the multiplication.
3. Use a base-10000 system.
Hope this helps somebody...
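A minimal sketch of this digit-by-digit idea (my own illustration, using plain decimal digits and a linear scan over B instead of the binary search and base-10000 limbs described above): the running square is updated with the (A+B)^2 identity rather than recomputed from scratch.

```java
import java.math.BigInteger;

public class DigitByDigit {
    // Build the root r left to right; rr tracks r^2 at all times, updated
    // incrementally via (r + b*10^pos)^2 = r^2 + 2*r*(b*10^pos) + (b*10^pos)^2.
    static BigInteger isqrt(BigInteger n) {
        int digits = (n.toString().length() + 1) / 2; // root length bound
        BigInteger r = BigInteger.ZERO, rr = BigInteger.ZERO;
        for (int pos = digits - 1; pos >= 0; pos--) {
            BigInteger p = BigInteger.TEN.pow(pos);
            // largest digit b with (r + b*10^pos)^2 <= n; b = 0 always fits
            for (int b = 9; b >= 0; b--) {
                BigInteger step = BigInteger.valueOf(b).multiply(p);
                BigInteger cand = rr.add(r.multiply(step).shiftLeft(1))
                                    .add(step.multiply(step));
                if (cand.compareTo(n) <= 0) { r = r.add(step); rr = cand; break; }
            }
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(isqrt(new BigInteger("12345678987654321"))); // 111111111
    }
}
```

This is the classic long-division square root; the speed-ups in the list above (binary search on b, base-10000 limbs, trimmed multiplications) are what make it competitive.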
The more contests are held, the happier I am.

 Guru
 Posts: 1080
 Joined: Thu Dec 19, 2002 7:37 pm
Per wrote: I don't think there is a better method than squaring the floor of the square root (though I would love to see someone prove me wrong on this).
I'm not sure what you mean by this, Per, but my guess is that you use a binary search, where you determine the initial boundaries by converting to doubles.
If so, this method is basically O(n^3*ln(n)) (where n is the number of digits in the answer): O(n*ln(n)) iterations in the binary search, doing one multiplication, which is O(n^2), per iteration to determine a square.
My method, like the method alex[LSD] describes, is only O(n^3): Producing a constant number of digits each iteration, it requires O(n) iterations, each also with one multiplication, which is O(n^2).
For such small values of n (I use nine decimal digits to the int, so for a 50-digit square root n is only 6), it takes a darn good algorithm and some darn good bigint functions to overtake the simplicity of a binary search. Spending the best part of last Sunday, I was able to bring the runtime down to 0.3 sec, one third of which is spent on I/O.
I am not going to publish my method, because I think it's a spoiler. But if you got AC on this problem and send me a PM, I might send you my source code (C).
little joey
What I meant was: I don't think this problem is easier than taking square roots (i.e. I don't think there is a solution which is more efficient than taking the square root and then squaring it).
I'm not sure I understand yours or Alex's method completely, but it sounds like you solve the problem by finding the square root, so I don't think I am proven wrong yet.
Actually, I didn't solve the problem by binary search, but by Newton's method applied to finding square roots. I think the complexity should be better than O(n^3) (I will look it up when I'm not as tired as I am at the moment), though of course, it gets pretty bad constant factors as it makes use of division.
Well, one invigorating coffee break and some calculations later, I feel pretty certain that the Newton-Raphson solution needs no more than O(log n) iterations, thus making the time complexity O(n^2 log n), assuming an O(n^2) division algorithm. But as you said, using a base-10^9 representation, n is quite small in this problem, so constant factors become important.
Just out of sheer curiosity, I wrote a 15-line solution using GMP, and it was ~12.5 times faster than my solution compiled with optimisation (which in turn is ~5 times faster than compiled without optimisation). The square root algorithm [1] used by GMP apparently has the same complexity as Karatsuba's algorithm (and with nice constants).
[1]: Paul Zimmermann, "Karatsuba Square Root", INRIA Research Report 3805, November 1999, `http://www.inria.fr/RRRT/RR3805.html'

Now I see what you mean. I can't imagine a better method either.
Although I'm neither a mathematician nor a computer scientist and understand only a small part of it, I found the article you referred to quite interesting. I can hardly understand big-O notation for complexity analysis, let alone big-M or big-K...
By re-looking, re-thinking and re-calculating, I found out that my method _is_ basically Newton-Raphson, although I never recognised it as such... I calculate a correction to an estimate of sqrt(x) using doubles instead of bignums, to avoid dividing bignums. This technically limits me to about 10 decimals per iteration, making it O(n^3), although it could be O(n^2*ln(n)) in principle if I did bignum division.

OK, this is really the last thing I'm going to say about this...
Consider this function in pseudocode (note that i must be decremented each pass, or the loop never ends):
[c]type_x closest_square(type_x A){
    find i such that 2^(2*i) <= A < 2^(2*i+2)
    if (2^(2*i) == A) return 2^(2*i);
    x = 2^i;        /* current approximation of the root */
    xx = 2^(2*i);   /* its square: xx == x*x at all times */
    i = i - 1;
    while (i >= 0){
        a = 2^i;
        aa = 2^(2*i);
        t = x + a;              /* candidate root with bit i set */
        tt = xx + 2*x*a + aa;   /* t*t, built incrementally */
        if (tt == A) return tt;
        if (tt < A){
            x = t;
            xx = tt;
        }
        i = i - 1;              /* move on to the next bit */
    }
    return xx;
}[/c]
It finds the required square without squaring the square root, in the sense that Per meant (although it finally _knows_ the square root, but only as an intermediate value).
It is O(n^2) if you make type_x a binary type (a binary number with 300+ bits), because then you only need operators that are linear in the number of bits (assignment, comparison, addition and shift); 2^i can be rewritten as 1<<i, and 2*x*a can be rewritten as x<<(i+1).
My current implementation of this algorithm gives AC in 0.85 sec, much slower than my previous program. This is mainly due to the 300+ iterations it requires for a 100-digit decimal number. But my suspicion is that it can be made an order of magnitude faster if the operators could be worked out in assembly (or better C, for that matter).
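For the curious, here is a hedged Java translation of the bit-at-a-time pseudocode above (BigInteger stands in for type_x; the class name is mine). The 2*x*a term becomes a single shift, exactly as described:

```java
import java.math.BigInteger;

public class ClosestSquare {
    // Maintain x (the root built so far) and xx = x^2, deciding one bit of
    // x per iteration, from the most significant bit downward.
    static BigInteger closestSquare(BigInteger A) {
        int i = (A.bitLength() - 1) / 2;          // 2^(2i) <= A < 2^(2i+2)
        BigInteger x = BigInteger.ONE.shiftLeft(i);
        BigInteger xx = BigInteger.ONE.shiftLeft(2 * i);
        for (i--; i >= 0; i--) {
            // tt = (x + 2^i)^2 = xx + x*2^(i+1) + 2^(2i), no multiplication
            BigInteger tt = xx.add(x.shiftLeft(i + 1))
                              .add(BigInteger.ONE.shiftLeft(2 * i));
            if (tt.compareTo(A) <= 0) {           // keep the bit if it fits
                x = x.add(BigInteger.ONE.shiftLeft(i));
                xx = tt;
            }
        }
        return xx;  // largest perfect square <= A
    }

    public static void main(String[] args) {
        System.out.println(closestSquare(new BigInteger("152399025"))); // 12345^2
    }
}
```

BigInteger's shifts and additions are linear in the number of bits, so this matches the O(n^2) bound claimed in the post, at the cost of one iteration per bit.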

Well, that's pretty much what I was talking about! You go from left to right while (i >= 0) and find the current digit (t). The only difference is that I was choosing t from 0..9999, not 0..1. You won't believe it, but in Russia we are still using the base-10 system. Well, anyway, I'm glad to know that people around the globe are thinking the same way... It is some feeling... really, hehehe...
The more contests are held, the happier I am.
Re: 10606 - Opening Doors
TLE, please help...
Code: Select all
import java.math.BigInteger;
import java.util.Scanner;

class SquareRoot
{
    // Integer square root by Newton's iteration. Seeding with a power of
    // two just above sqrt(in), instead of with in itself, cuts the number
    // of iterations drastically; stopping as soon as the sequence stops
    // decreasing avoids the oscillation the do-while version can fall into.
    BigInteger bigsqrt(BigInteger in)
    {
        if (in.signum() == 0)
            return BigInteger.ZERO;
        BigInteger x = BigInteger.ONE.shiftLeft(in.bitLength() / 2 + 1); // x > sqrt(in)
        while (true)
        {
            BigInteger y = x.add(in.divide(x)).shiftRight(1);
            if (y.compareTo(x) >= 0)
                return x; // x is now floor(sqrt(in))
            x = y;
        }
    }
}

public class Main
{
    public static void main(String[] args)
    {
        Scanner in = new Scanner(System.in);
        SquareRoot inp = new SquareRoot();
        while (in.hasNext())
        {
            BigInteger n = in.nextBigInteger();
            if (n.compareTo(BigInteger.ZERO) == 0)
                break;
            BigInteger num = inp.bigsqrt(n);
            System.out.println(num.multiply(num));
        }
    }
}
Nothing is impossible in the world..... And
Never Judge a Book by Its Cover.............
BUBT_Psycho
http://uhunt.felixhalim.net/id/168573
http://shipuahamed.blogspot.com

 Guru
 Posts: 5947
 Joined: Thu Sep 01, 2011 9:09 am
 Location: San Jose, CA, USA
Re: 10606 - Opening Doors
Try using binary search to find the sqrt.
Check input and AC output for thousands of problems on uDebug!