Stuck on certain DP problems.
Problems that ask you to partition a set into two mutually disjoint subsets such that the difference between the cumulative weights of the items in each subset is as small as possible. I have an idea of what ought to be done but I can never convince myself of the DP formulation.
Examples are:
10032  Tug of War
562  Dividing Coins
I think that the decision at each stage is what subset the ith item goes into. I would then have to remember the total weight of items in each subset and I would want to minimize the difference between their weights.
That's my idea but I'm not comfortable with it. There are problems when I try to write a sensible recurrence based on it. Does anyone have a different way of looking at this type of problem?
Thanks for any help.
Re: Stuck on certain DP problems.
uha wrote: Problems that ask you to partition a set into two mutually disjoint subsets such that the difference between the cumulative weights of the items in each subset is as small as possible. I have an idea of what ought to be done but I can never convince myself of the DP formulation.

The problem you paraphrased is a straightforward knapsack problem. You create a boolean vector indicating all possible cumulative weights, then pick the one closest to the middle. To fill in the table, take each element in turn and add its weight to every cumulative weight in the table, to form a new table. You can do this in place if you consider the weights in the table in decreasing order.
Tug of War is a little bit more complicated than that, because it also requires that the subsets have equal cardinality. So instead of tabulating just the possible cumulative weights, you must tabulate all combinations of (number of people, cumulative weight). At the end you pick the closest cumulative weight for which the number of people is correct.

Regular DP doesn't work on those solutions though; I've gotten TLE taking the naive DP method as Professor Cormack described. It seems unfortunate that a careful pruning is the intended solution.. =/
Last edited by Larry on Sun Apr 11, 2004 12:05 am, edited 1 time in total.
Larry wrote: Regular DP doesn't work on those solutions though, I've gotten TLE taking the naive DP method as Professor Cormac described.. it seems unforunate that a careful pruning is the intended solution.. =/

(Minor point: it is Cormack, not Cormac; my userid is shortened.)
I can only speak of "intent" with respect to Tug of War, because I composed the problem. My intent was that it be a straightforward DP problem. The Waterloo environment gives 30 seconds per test case.
When the problem was adapted for uva, it was rewritten to have multiple input cases per run, with a total time limit of 10 seconds (on about the same speed of machine). As it happens, my solution, modified only to handle the multiple input, runs in 9.5 seconds. If I modify it so as to avoid reinitializing the 500,000 element table, it runs in 8.5 seconds.
So the process of adaptation, inadvertently I think, tightened the time limit so as to exclude solutions that used the correct algorithm but happened to be slower by a small constant factor. This is unfortunate.

You're right.. I actually have the code accepted in 0.4 seconds. I mistakenly remembered another problem.
I'll try again for the former problem, because I just realized I originally did this in Java, so I'll rewrite it in C and see if it works.
I actually learned this technique from topcoder; I apologize for misleading anyone!
(I talked to you a few times on topcoder a long time ago, so I apologize for misremembering your name!)
stuck on certain dp problems
Can somebody explain in detail how to solve 562 (Dividing Coins)?
I can't convince myself of the formulation, similar to uha.
Thanks in advance.
Re: stuck on certain dp problems
ranjit wrote: can somebody elaborately explain how to solve 562 (dividing coins).

Code: Select all
int MAX_SUM = 500*100;
bool possible_sum[MAX_SUM+1];
possible_sum[0] = true; // all others false
for each coin
    for s from MAX_SUM - coin downto 0
        if possible_sum[s] then possible_sum[s+coin] = true
find s such that possible_sum[s] is true and s
is closest to 1/2 the total of all coins
game theory and dynamic programming
hi,
there is a problem, 10404, which involves creation of a similar boolean vector.
I wrote code using dynamic programming but got TLE; it was O(n).
I feel that the problem is more mathematical (maybe number theory).
Hope someone can help.
bye.
Can I have your attention please
Hello ranjit,
Do go through the board; you'll find lots of info regarding 10404.
I don't know if DP is a good choice here, but do try other methods!
Suman

Pb 562, other algorithm, but ...
Good morning everybody
I've solved 562 (AC in 1.4 sec, thanks little joey) with another algorithm, but there is one thing that I don't understand:
Code: Select all
int MAX_SUM = 500*100
bool possible_difference[MAX_SUM+1]
possible_difference[first coin] = 1; (all others 0)
for each coin from second to last {
new_possible_difference[0...MAX_SUM
Last edited by Julien Cornebise on Wed Jul 21, 2004 12:29 pm, edited 1 time in total.

511 (Do you know the way to San Jose?) is not (yet) submittable
I guess you mean 562.
Your pseudo code is a little blurry, if you don't mind me saying so. I suspect range errors in expressions like new_possible_difference[d+coin], when d goes from 0 to MAX_SUM+1, while the array has only MAX_SUM+1 elements...
Maybe you could (temporarily) give your actual code.
But besides that, my array possible[1..50000] has exactly 50000 elements (it's in Pascal) and calculates the answer without WA...
Oh yeah, another thing: let TOTSUM be the sum of all coin values; then you only have to consider all possibilities up to TOTSUM div 2, because possible[i] is equal to possible[TOTSUM-i]. That speeds things up compared to always considering 50000 values.

Hi Little Joey.
I admit my pseudo code is a bit messy. Sorry about that :/
Here's my source code, temporarily (I'll remove it as soon as somebody answers). It might have a few out-of-bounds troubles, but nothing extremely important.
I'm surprised, because I thought one could use MAXCOST+1 instead of MAXCOST*MAX...
[cpp]
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define MAX 101
#define MAXCOST 501

int nbpieces;
int pieces[MAX];
int *diff, *newdiff, *temp;

int main() {
    int nbscenar;
    int i, j;
    newdiff = (int*)calloc(MAX*MAXCOST, sizeof(int));
    diff = (int*)calloc(MAX*MAXCOST, sizeof(int));
    for (scanf("%d", &nbscenar); nbscenar > 0; nbscenar--) {
        // read input (coins)
        scanf("%d", &nbpieces);
        for (i = 0; i < nbpieces; i++)
            scanf("%d", &pieces[i]);
        memset(diff, 0, MAX*MAXCOST*sizeof(int));
        memset(newdiff, 0, MAX*MAXCOST*sizeof(int));
        if (nbpieces <= 0) { printf("0\n"); continue; } // case of 0 coins
        diff[pieces[0]] = 1; // with only 1 coin, the only possible difference is the amount of this coin
        for (i = 1; i < nbpieces; i++) {
            for (j = 0; j < MAX*MAXCOST; j++)
                // QUESTION: Why here do I have to go up to MAX*MAXCOST, and not only up to MAXCOST?
                // Only differences up to MAXCOST seem useful to me, but it gives me WA if I use the line below:
                // for (j = 0; j <= MAXCOST; j++)
                if (diff[j] != 0) {
                    diff[j] = 0; // to avoid having to reset diff each time
                    newdiff[pieces[i] < j ? j - pieces[i] : pieces[i] - j] = 1;
                    newdiff[pieces[i] + j] = 1;
                }
            // new becomes old, ready for the next loop
            temp = newdiff;
            newdiff = diff;
            diff = temp;
        }
        // find the minimum difference that can be made
        for (i = 0; diff[i] == 0; i++);
        printf("%d\n", i);
    }
    return 0;
}
[/cpp]