Link : http://acmicpc-live-archive.uva.es/nuev ... php?p=2809
I think it is a kind of well-known graph problem, but I have no idea how to approach it. Please give me some hints.
N = 10,000 suggests it can be solved in O(N) time.
Help me with this problem
Why do you suppose O(n)? Is the time limit 1 sec? I know ACM allows 60 seconds on most problems, which would let you use O(n^2) or O(n^3) on this one, I believe. I also see it as a largest-connected-component problem. Here is what I would do:
Hold an availability vector of size 10,000. The m students who don't vote are unavailable, and so are the ones who vote for them (core students can't vote outside the core). This decreases the n you have to process.
While there are students available, start with any of them and mark the rest: 1 if he's pointed to by someone in the core, 2 if he points to someone in the core, and 3 if both.
You can reuse the same availability vector if you want to save memory. Avoiding recursion is an optimization.
When all the "1" students have been processed, record the core size and mark the whole core as unavailable; then mark the ones who point into it (the remaining "2"s) as unavailable, and so on.
The marking as unavailable should be repeated until a pass makes no more changes.
Hope that helps...
Understanding a problem in a natural way will lead to a natural solution
Hi. I don't think this problem deals with connected components - the statement doesn't say that the core should be connected.
My algorithm:
1. Build a directed graph: if x voted for y, represent it with the edge x->y.
2. Find vertices with zero out-degree, and all vertices from which they can be reached - using DFS, in O(n) time. Remove all such vertices and associated edges from the graph.
3. While possible, find and remove all vertices with zero in-degree. This step can probably be implemented with a DFS too, in O(n) time.
4. Print the number of remaining vertices in the graph.
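The four steps above can be sketched in Python roughly as follows. This is only a sketch, not a verified judge solution; the function and variable names, and the convention that `vote[x]` is the student x voted for (or `None` if x did not vote), are my own assumptions about the input.

```python
from collections import defaultdict

def largest_core(n, vote):
    # vote[x] = the student x voted for, or None if x did not vote.
    # Step 1: build the graph x -> vote[x], plus reversed edges.
    rev = defaultdict(list)                 # rev[y] = all x with x -> y
    for x in range(n):
        if vote[x] is not None:
            rev[vote[x]].append(x)

    removed = [False] * n

    # Step 2: remove zero-out-degree vertices (non-voters) and every
    # vertex that can reach one, by DFS over the reversed edges.
    stack = [x for x in range(n) if vote[x] is None]
    while stack:
        x = stack.pop()
        if removed[x]:
            continue
        removed[x] = True
        stack.extend(rev[x])

    # Step 3: repeatedly remove zero-in-degree vertices.  A worklist of
    # in-degree counters touches each edge once, so this stays linear.
    # (After step 2 every surviving vertex has an out-edge.)
    indeg = [0] * n
    for x in range(n):
        if not removed[x] and not removed[vote[x]]:
            indeg[vote[x]] += 1
    work = [x for x in range(n) if not removed[x] and indeg[x] == 0]
    while work:
        x = work.pop()
        removed[x] = True
        y = vote[x]
        if not removed[y]:
            indeg[y] -= 1
            if indeg[y] == 0:
                work.append(y)

    # Step 4: count the remaining vertices.
    return sum(1 for x in range(n) if not removed[x])
```

On the two disjoint 3-cycles discussed below (plus one voter into a cycle and one non-voter), this keeps both cycles.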
Hi mf. Take this simple example: 1->2->3->1; 4->5->6->4. Here you have two cores, but your algorithm will print 6 (as if it were one single core), since none of these nodes will be eliminated. You are asked for 'the largest core of people'; that is why I see it as the largest connected component (in this case 3).
My suggestion for the eliminations was the same, but I don't really know how you would do it in O(n). The way I see it, it's n*(n+1)/2 in the worst case, because in one pass-through you may eliminate one node, which leads to the elimination of another node, and so on; this way you pass through all remaining nodes up to n times: n + (n-1) + ... + 2 + 1.
Here is an example: 1->2->3->4->...->n.
1st search => n goes out; then you have to search again.
2nd search => n-1 goes out, and so on.
The search should stop when a pass makes no elimination (as I wrote before).
I think that no matter how you search, there is a worst case for which the elimination algorithm does n*(n+1)/2 operations.
If you have a different opinion, please write back.
However, in our problem this is not a concern, since any node taken out reduces the amount of further processing.
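For what it's worth, the n*(n+1)/2 cost comes from rescanning the whole graph after each elimination. If instead each node keeps an in-degree counter and newly zeroed nodes go onto a worklist, every edge is examined only a constant number of times, so even the chain 1->2->...->n is eliminated in O(n). A sketch under my own naming, assuming each student casts at most one vote:

```python
from collections import deque

def eliminate_zero_indegree(vote):
    # vote[x] = the vertex x points to, or None.  Removes vertices with
    # zero in-degree until none remain removable; returns the removal flags.
    n = len(vote)
    indeg = [0] * n
    for x in range(n):
        if vote[x] is not None:
            indeg[vote[x]] += 1
    removed = [False] * n
    queue = deque(x for x in range(n) if indeg[x] == 0)
    while queue:                       # each vertex enters the queue at most once
        x = queue.popleft()
        removed[x] = True
        y = vote[x]
        if y is not None and not removed[y]:
            indeg[y] -= 1              # the edge x -> y is gone now
            if indeg[y] == 0:
                queue.append(y)
    return removed
```

On the chain, node 1 is zeroed first and the removals cascade down in a single linear sweep; a cycle, having no zero-in-degree node, survives untouched.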
If I understand the problem correctly, when you have two cores, say A and B, then their union A+B is also always a core. It doesn't contradict the core's definition, does it?
So, IMHO, the largest core is the union of all possible cores, and the problem is just to find how many people are members of any core.
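This claim is easy to check mechanically: a set S is a core when everyone in S voted, their vote went to someone in S, and everyone in S received a vote from within S, and all three conditions survive taking unions. A small sketch (the helper and the `vote[x]` convention are mine):

```python
def is_core(members, vote):
    # A core, in the reading used in this thread: every member voted,
    # voted for a member, and is voted for by some member.
    s = set(members)
    receives = {vote[x] for x in s if vote[x] is not None}
    return all(vote[x] is not None and vote[x] in s for x in s) and s <= receives
```

For the two disjoint 3-cycles `[1, 2, 0, 4, 5, 3]`, each cycle is a core, and so is their union.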
For each vertex I keep two adjacency lists: one for outgoing (normal) edges, and one for incoming (reversed) edges.
Doing a DFS in this step, I use the edges in the "incoming" lists. This is a standard DFS, and it takes O(n+e) = O(n+3n) = O(n) time.
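Concretely, that reversed DFS might look like this (the names are mine; `vote[x] is None` marks a non-voter, i.e. a zero-out-degree vertex):

```python
def reaches_a_nonvoter(vote):
    # Build the incoming (reversed) adjacency lists, then DFS from the
    # zero-out-degree vertices (the non-voters) along reversed edges.
    n = len(vote)
    incoming = [[] for _ in range(n)]
    for x in range(n):
        if vote[x] is not None:
            incoming[vote[x]].append(x)
    seen = [False] * n
    stack = [x for x in range(n) if vote[x] is None]
    while stack:
        x = stack.pop()
        if seen[x]:
            continue
        seen[x] = True
        stack.extend(incoming[x])      # everyone who voted for x
    return seen
```

Every vertex marked `seen` has a voting chain that ends at a non-voter, so it can never belong to a core.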
Maybe mf is right!
If there are two distinct cores, their union is also a core!
(Everyone in it voted, likes someone in it, and is liked by someone in it!)
To tell the truth, I participated in this contest (the 2003 ACM ICPC Asia Regional, Seoul). In our contests, N = 10,000 means the problem has an O(N) or O(N log N) bound, and I heard at the closing ceremony that this problem can indeed be solved in O(N) time.
When I first read this problem, I thought it might be a simple graph problem, solvable with DFS or BFS (because of the restriction on N). However, I didn't realize at the time that the union of two cores is also a core, so I couldn't solve it.
Thanks for your help

Sorry for my poor English.
Interesting... Indeed, the way the problem is stated, no restriction is made that cores be connected. The example makes no such restriction either. I find this odd, because somehow I feel this was not the intent of the author; maybe because the problem says cores (plural), while in this interpretation there is in fact only one (and thus largest) core.
It would be nice if someone who judged or had some inside relation to this problem would clarify this.
As for the lists algorithm, I think it would take longer to build the lists and then cross them than to eliminate nodes step by step (those not addressed and those that do not address). The node matrix could be extended by one extra row and one extra column that count how many references go to and from each node. Where these counts are 0, the node is cut out, and the nodes that address it (or are addressed by it) decrease their counts.
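That counter idea can be sketched like this (my own names; `vote[x]` is x's single vote or `None`, and a node is cut as soon as either of its counts reaches zero):

```python
from collections import deque

def core_size_by_counters(n, vote):
    # For each node keep how many references go to it (indeg) and from it
    # (outdeg, at most 1 here).  Cut nodes with a zero count and decrement
    # the counts of the nodes that address it or are addressed by it.
    voters_of = [[] for _ in range(n)]
    indeg = [0] * n
    for x in range(n):
        if vote[x] is not None:
            indeg[vote[x]] += 1
            voters_of[vote[x]].append(x)
    outdeg = [0 if vote[x] is None else 1 for x in range(n)]
    alive = [True] * n
    queue = deque(x for x in range(n) if indeg[x] == 0 or outdeg[x] == 0)
    while queue:
        x = queue.popleft()
        if not alive[x]:
            continue
        alive[x] = False
        y = vote[x]
        if y is not None and alive[y]:
            indeg[y] -= 1                  # x's vote for y is withdrawn
            if indeg[y] == 0:
                queue.append(y)
        for w in voters_of[x]:
            if alive[w]:
                outdeg[w] -= 1             # w's vote now leaves the set
                if outdeg[w] == 0:
                    queue.append(w)
    return sum(alive)
```

Since every node and edge is processed a constant number of times, this version of the elimination is O(n) overall.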