So we might suspect that at least one test case has n == 10000, or something very close to it. But using assert() I found that no test case has a value of n greater than 1000.

No wonder so many people got this one AC during the live contest.

Well, there was a small problem I noticed: The text of the problem statement says:

You should note that 80% of the test cases will have at most 1000 numbers in the sequence.

And the clarification just before the contest says:

Problem B:
There are only three sets of inputs.

Surely you can see that, taken together, this says: each of the (at most 3) test cases has at most 1000 numbers. (If some of them didn't, more than 20% of the test cases would exceed the limit.)


The judge data used for this problem is not complete, because a slow solution takes a lot of time if all the judge data is used. The original contest allowed slow solutions to pass, so to keep slow solutions passing while still keeping the time limit reasonable, I used only a small number of test cases, and hence the problem is here in this form.

kp wrote:I got AC during contest but now it's interesting to me
what complexity does the "best" solution have?

I don't think that you can do better than a worst-case quadratic solution -- there can be sequences leading to Omega(N^2) different numbers to test for primality. (E.g. the sequence 1 2 4 8 16 ..., although here the numbers grow too rapidly.) I don't think that in the general case there is some clever way of checking this in less than N^2 time. (Prove me wrong!)

Let B be the upper bound on each of the numbers in the input. IMHO the asymptotically fastest way of doing the primality checks is to compute the sieve of Eratosthenes up to NB in O(NB log(NB)) time and then to answer the queries in O(1) time each. The total time complexity is thus O(N^2 + NB log(NB)).
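A minimal sketch of this precompute-then-query idea (the function name is mine, not from the thread): sieve once up to the largest possible sum, then each primality query is a single array lookup.

```cpp
#include <vector>
#include <cstddef>

// Classic sieve of Eratosthenes up to `limit` inclusive. After this
// precomputation, is_prime[x] answers each primality query in O(1).
std::vector<bool> build_sieve(std::size_t limit) {
    std::vector<bool> is_prime(limit + 1, true);
    is_prime[0] = false;
    if (limit >= 1) is_prime[1] = false;
    for (std::size_t p = 2; p * p <= limit; ++p)
        if (is_prime[p])
            for (std::size_t m = p * p; m <= limit; m += p)
                is_prime[m] = false;  // mark every multiple of p composite
    return is_prime;
}
```

With the sieve built up to N*B, the quadratic loop over all subsequence sums just indexes into it.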

(Well, I think that given the 80% constraint in the problem statement and at most 20 test cases, the worst possible inputs could still force this solution to timeout... but nothing significantly better comes to my mind)

If the upper bound is 10000 and the amount of numbers is 10000, then N*B = 10^8. If we use a bit sieve of Eratosthenes it takes 12.5 MB and complexity O(10^8 * log(10^8)). What do you think about it?

You only need one bit for each odd number up to 100'000'000. This reduces the memory to only 6 MB. The sieve can be computed in under 10 seconds, if implemented well... and this should be the most time-consuming operation.
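A sketch of that odd-only trick (my own layout, not the poster's code): bit i stands for the odd number 2*i + 1, and even numbers greater than 2 need no storage at all, which halves the bit sieve's memory.

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Odd-only bit sieve: one bit per odd number up to `n`.
// For n = 100'000'000 this is n/2 bits, roughly 6 MB.
struct OddSieve {
    std::size_t limit;
    std::vector<std::uint64_t> composite;  // set bit => odd number is composite

    explicit OddSieve(std::size_t n) : limit(n), composite(n / 128 + 1, 0) {
        for (std::size_t p = 3; p * p <= n; p += 2)
            if (!test(p))
                for (std::size_t m = p * p; m <= n; m += 2 * p)
                    set(m);  // only odd multiples of p; even ones are skipped
    }
    // Odd number x maps to bit index x/2.
    void set(std::size_t odd)  { composite[odd / 128] |= 1ULL << ((odd / 2) % 64); }
    bool test(std::size_t odd) const {
        return (composite[odd / 128] >> ((odd / 2) % 64)) & 1ULL;
    }
    bool is_prime(std::size_t x) const {
        if (x < 2) return false;
        if (x == 2) return true;
        if (x % 2 == 0) return false;
        return !test(x);
    }
};
```

The sieving loop itself only touches odd multiples (step 2*p), so both memory and marking work are halved compared to a plain bit sieve.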

Your algorithm calculates the same things more than once!
It can be improved.

Try this:

1. Calculate sums of size 2 for every element
2. Increase size by 1
3. Calculate sums of size 3 for every element using
result of step 1
...
and so on...
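The steps above can be sketched like this (names are mine): keep, for each start index, the running sum of the current window; growing every window by one element reuses the previous size's sums instead of re-adding from scratch.

```cpp
#include <vector>
#include <cstddef>

// Produce the sums of all contiguous subsequences of length >= 2,
// size by size, each new size built from the previous one.
// Total work is O(N^2) additions, one per emitted sum.
std::vector<long long> all_window_sums(const std::vector<int>& a) {
    std::size_t n = a.size();
    std::vector<long long> run(a.begin(), a.end());  // size-1 sums (the elements)
    std::vector<long long> out;
    for (std::size_t len = 2; len <= n; ++len)       // "increase size by 1"
        for (std::size_t i = 0; i + len <= n; ++i) {
            run[i] += a[i + len - 1];                // extend window using previous result
            out.push_back(run[i]);
        }
    return out;
}
```

Each sum would then be tested for primality with the precomputed sieve.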

Please give some I/O.
I have generated primes up to 30000010 and then got WA. Is it due to the upper bound, or for some other reason? Is there any tricky input? Please help.

Actually my sieve is fast enough to find the mentioned primes. And if the input is 1000 numbers, each up to 10000, then the sums of the numbers will be really very high. That's why I searched for primes in such a high range.

Can anyone give me some input and output so that I can check my code against them?