Table Decorations

You have $$R$$ red balls, $$G$$ green balls, and $$B$$ blue balls (with $$0 \leq R,G,B \leq 2 \cdot 10^9$$). You would like to construct a set of tables where each table consists of exactly three balls. At a single table, the three balls cannot all be the same color.

What is the maximum number of tables you can make?

  1. Strategy
  2. Greedy
  3. Graph?
  4. Erdős–Gallai Theorem (Graph Construction)?
  5. Find Matching Lower and Upper Bounds
I consider this type of problem to be a strategy problem. Informally, a strategy problem is one where you must come up with a procedure to minimize or maximize some quantity under a given set of constraints. Usually, the procedure is "greedy" in some way, and can be stated in terms of simple rules, like an informal algorithm.

We will work through some problem-solving processes below that we can follow to generate these insights, and hopefully to solve the problem.
I sometimes find strategy problems difficult because the right insights can be hard to come by. We cannot simply apply a standard technique from past experience (such as "Dynamic Programming" or "Graph Theory"); we need to genuinely think about the solution to this particular problem.

How do we handle such strategy problems? Well, if we are trying to "find an optimal strategy", this task can be broken down into two separate tasks:
  1. Find a strategy
  2. Prove that the strategy is optimal (or, prove that this is the best you can do)
Sometimes finding a strategy is easy, but proving its optimality is hard. This is often the case when there is a neat "pattern" in the answers. Other times, finding a strategy is hard, but given a strategy, a proof of its optimality is not so difficult. And sometimes you must do both at the same time.

Our approach to this problem will be to generate some insights about the specific problem (through examples). Then we will consider special cases of this problem (for example, what if one of the colours is gone?) and try to generalize our ideas. Lastly, we will come up with strategies that work in certain cases and prove that there can be no better strategies.

So let's begin.
The first way to gather some insight about the problem is to look at examples. Try drawing examples with different small values of $$(R,G,B)$$ and seeing what possible "operations" you can perform. For example, at any point in time, we can either:
  1. Make a table with one red, one green, and one blue ball;
  2. Make a table with two reds, and one green;
  3. Make a table with two reds, and one blue;
  4. Make a table with two greens, and one red;
  5. etc.
Starting with these examples helps to understand the nature of the problem.
As you look at these examples write down any ideas, observations, insights, or thoughts that come to mind. Also keep in mind possible strategies that you might make.
One simplification we can make is to sort the colors. Since the colors don't really matter (they are symmetric), let us assume that $$R \geq G \geq B$$. This will definitely not change the answer, and in our code we can just swap the numbers around before implementing our final solution.
Sorting things often makes life a lot easier when dealing with greedy or strategy problems. When the objects are all symmetric except for one value, we might as well sort by this value. It doesn't change the answer, but it allows us to simplify our thoughts and focus on the objects in a more well-defined order. (I will show some related problems below)
If the entire problem seems a bit hard to manage, let's think of some special cases. What if only two colors were given? In particular, since we have sorted the colors, we have the following case: "Consider the case where $$B = 0$$, but $$R \neq 0$$ and $$G \neq 0$$."
In the simplified case, what kinds of strategies can we do? For each table, we can:
  1. Take two reds and one green; or
  2. Take two greens and one red
Again, we assume that the colors are sorted (i.e.: that $$R \geq G > 0$$). Intuitively we might have some strategies:
  1. Always take two reds and one green;
  2. Always take two greens and one red;
  3. Sometimes take two reds, sometimes take two greens
Let's focus on the two "extreme" strategies (because they are the simplest). For example, let's focus on the first one, because it seems a bit more logical (since $$R \geq G$$). Is the strategy: "Always take two reds and one green" ever optimal? When would it be optimal? When would it be bad?

If $$R = G$$ then this strategy is really bad. After just one move, $$R$$ decreases by 2 and $$G$$ decreases by 1, so we will already have $$R \lt G$$.

But if $$R \gt\gt G$$, like for example, if $$R \geq 2G$$, then this strategy seems more reasonable. If $$R$$ is really big, then we might as well take pairs of $$R$$ with a single green, right? This will eventually use up all of the greens, and we will be left with some reds. Intuitively, this seems like it should work. (Try this in some examples.) But how do we know this for sure? This is the part where we must prove it!
In the simplified case where $$B = 0, G > 0,$$ and $$R \geq 2G$$, the optimal strategy is to always take two reds and one green.
To prove this we count how many tables will be made by this strategy. Then, we will count "what is the maximum number of tables you could make by any strategy" in this case. And we will see that these numbers match (so therefore this strategy must be optimal).

So, how many tables will be made by this strategy? We notice that, whenever $$R \geq 2G$$, if we keep taking two reds, and one green, we will eventually run out of greens first. That is, after making $$n$$ tables, then we will have $$R' := R - 2n$$ reds left (taking two each time), and $$G' := G - n$$ greens left (taking one each time). And since $$R \geq 2G$$, we have that: \[R' = R - 2n \geq 2G - 2n = 2(G-n) = 2G'\] So $$R' \geq 2G'$$. There will always be twice as many reds as greens throughout this entire process. So the greens will hit zero first.

This tells us that, if $$n^*$$ is the number of tables we make in total, then $$G - n^* = 0$$, and so we make exactly $$n^* = G$$ tables in total (we make exactly as many tables as the number of original green balls).

And what is the maximum number of tables we could make by any strategy? Notice that in any strategy, if we make $$n$$ tables, then each of those tables must take at least one green ball (obviously, otherwise it would have to take three red balls, which is not allowed). So the number of tables can be at most $$G$$ (one green ball per table)! So if $$n^*$$ is the maximum number of tables, then $$n^* \leq G$$ for any strategy!

So, the maximum number of tables we could possibly get (by any strategy) is no more than $$G$$. And the number of tables we got by our strategy is exactly $$G$$. So, by definition this strategy is optimal!
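The argument in Lemma 1 is easy to check empirically. Here is a minimal Python simulation of the "two reds, one green" strategy (the function name is my own, just for illustration); for every small pair with $$R \geq 2G$$, it makes exactly $$G$$ tables, matching the proof.

```python
def greedy_two_reds(r, g):
    """Simulate the strategy: repeatedly take two reds and one green.

    Returns the number of tables made before a move becomes impossible.
    """
    tables = 0
    while r >= 2 and g >= 1:
        r -= 2
        g -= 1
        tables += 1
    return tables

# Whenever R >= 2G, the greens run out first and we get exactly G tables.
assert all(greedy_two_reds(r, g) == g
           for g in range(15)
           for r in range(2 * g, 2 * g + 20))
```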
The approach used to find Lemma 1 can be generalized. We looked at examples and found a strategy that seemed to work (based on intuition and observations). Once we had a fixed strategy (at least for this special case), we proved that it was optimal afterward.

The proof of optimality came from "finding matching bounds". We showed an upper bound (maximum) for the number of tables we can achieve in ANY strategy (in this case it was $$G$$) using simple arguments. Then we matched this upper bound with the lower bound (minimum) achievable with our algorithm (again, $$G$$). By having a strategy and a matching upper bound, we have shown optimality.

This concept of matching lower and upper bounds is generally (in this context) called duality. In problem-solving, exploiting duality is often a nice way to help tackle problems. It allows us to switch between "working backwards" and "working forwards". In this case, "working forwards" means "looking for a strategy that works", and "working backwards" means "finding a limit on how well any strategy can do". In general, this shows up in many contexts, and allows us to "meet in the middle" when tackling a hard problem. (See some of the related problems below).
In the simplified case where $$B=0, G>0, R \geq 2G$$, the maximum/optimal number of tables achievable is exactly $$G$$. (This follows from the proof of Lemma 1.)
We have now shown that, if $$R \geq 2G$$ then the optimal strategy is to always take two reds and one green. And this will always yield exactly $$G$$ tables, which is optimal (by Lemma 1).

Let us restrict our attention now to the case where $$R \lt 2G$$ (and recall that $$R \geq G$$ is still assumed). We already saw an example where the above strategy fails (for example, if $$R = G$$, then it can fail pretty easily). Can we find a better strategy that works in this case?

In this special case ($$G \leq R \lt 2G$$), because the numbers are more "balanced" (i.e.: $$R$$ and $$G$$ are relatively close to one another), we intuitively would want a more balanced approach. One idea that comes to mind is to alternate: Take two reds and a green, then take two greens and a red, and then repeat. After looking at some more examples (where this seems to fail), a more refined approach would be as follows:
  1. If there are more reds than greens, take two reds and one green.
  2. If there are more greens than reds, then take two greens and one red.
  3. If they are equal, pick arbitrarily (for example, pick two reds and one green).
  4. If there are no more possible moves to be made, then we're done.
  5. Otherwise, repeat.
Now this intuitively feels like it should work. Again, now that we have a strategy, we should prove or disprove that it works!
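Before proving anything, we can sanity-check the rules above with a direct simulation (a sketch in Python; the function name is mine). On every small pair with $$G \leq R \lt 2G$$, the table count matches $$\left\lfloor \frac{R+G}{3} \right\rfloor$$:

```python
def balanced_two_colors(r, g):
    """Simulate the balanced algorithm: always take two from the larger pile."""
    tables = 0
    while True:
        if r >= g and r >= 2 and g >= 1:    # at least as many reds as greens
            r, g = r - 2, g - 1
        elif g > r and g >= 2 and r >= 1:   # more greens than reds
            r, g = r - 1, g - 2
        else:                               # no legal move remains
            break
        tables += 1
    return tables

# In the balanced regime G <= R < 2G the result matches floor((R+G)/3).
assert all(balanced_two_colors(r, g) == (r + g) // 3
           for g in range(1, 13)
           for r in range(g, 2 * g))
```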
If $$R \geq G$$ but $$R \lt 2G$$ then the optimal strategy is described by the algorithm above.
We again prove this by counting how many tables the above strategy should give us, and comparing this to the maximum number of tables achievable by any strategy. By showing that these numbers match, we will have shown that the strategy is optimal.

We start with the "upper bound". What is the maximum number of tables achievable by any strategy? Besides the upper bound mentioned above, we also notice that since every table takes 3 balls, and there are $$R+G$$ balls in total, there can be at most $$\left\lfloor \frac{R+G}{3} \right\rfloor$$ tables used in total (trivially). So $$\left\lfloor \frac{R+G}{3} \right\rfloor$$ is an upper limit on the number of tables.

Does this strategy always achieve $$\left\lfloor \frac{R+G}{3} \right\rfloor$$ tables? Consider the algorithm. It will perform Step 1 repeatedly (take two reds and one green) for a time. Since $$R \lt 2G$$, and since the red balls deplete by 2 every time a green ball depletes by 1, we will eventually reach a state where the number of red balls equals the number of green balls, and both are non-zero. Mathematically, if we let $$k := R-G$$, then after $$k$$ rounds of Step 1, there will be $$R' := R - 2k$$ red balls left (since we take two red balls each time), and $$G' := G - k$$ green balls left (since we take one green ball each time). Then notice that: \[\begin{array}{rcl} R' & = & R - 2k \\ & = & R - 2(R-G) \\ & = & 2G - R \\ & = & G - (R - G) \\ & = & G - k \\ & = & G' \end{array}\] So, after $$R-G$$ steps, the two "piles" will become equal. After this, the algorithm will "alternate" performing Step 1 and Step 2. It will take two reds and a green, followed by two greens and a red, and so on. Consider when this process ends. We will (and you can prove this) have at most one red ball and at most one green ball left, possibly none at all. In particular, we have at most 2 balls leftover.

This is actually optimal. Let $$r$$ be the "remainder" (the number of balls left over) after this process. Let $$n$$ be the number of tables made in this process. Each table we made took three balls (by definition), and we had $$R+G$$ balls in total to begin with, so we have the equation: \[r = (R+G) - 3n\] Also notice that $$0 \leq r \lt 3$$ (as mentioned above). So it must be the case that: \[n = \frac{(R+G) - r}{3} = \left\lfloor \frac{R+G}{3} \right\rfloor\] And this matches our upper bound, so the strategy is optimal.
After looking at a special case (i.e.: $$R \geq 2G$$), we developed some strategies and partial results. We were then able to simplify our problem by ignoring this case, and we were also able to reuse some of the techniques and ideas we learned in the first case. Problem Simplification, as a problem-solving technique, is helpful for this reason: we can solve an easy problem, and learn some things (either results or techniques) that help us to solve the harder version of the problem!

If you ever get stuck while solving a problem, try to simplify it and consider special cases. This can be done by restricting some of the parameters, ignoring them altogether, or just changing the question / rules a bit.
We have completely solved the problem if $$B = 0$$! We were able to look at examples and intuition to come up with a good strategy (after trying some that failed). And we were able to use upper bounds to prove that our strategy is optimal. In our case, we found the following strategy:
  1. Assume $$R \geq G$$.
  2. If $$R \geq 2G$$ then the answer is $$G$$ (which can be achieved by repeatedly making tables with 2 red and 1 green ball) by Lemma 1;
  3. Otherwise, if $$R \geq G$$ but $$R \lt 2G$$ then the answer is $$\left\lfloor\frac{R+G}{3}\right\rfloor$$ (achieved by the "balanced" approach in Algorithm 1) by Lemma 2
And we're done!
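Putting the two lemmas together, the whole $$B = 0$$ case fits in a few lines. Here is a sketch in Python (the names are my own), cross-checked against a brute-force search over all legal moves for small inputs:

```python
from functools import lru_cache

def max_tables_two_colors(r, g):
    """Closed-form answer for the B = 0 case; accepts counts in any order."""
    r, g = max(r, g), min(r, g)   # sort so that r >= g
    if r >= 2 * g:
        return g                  # Lemma 1: greens are the bottleneck
    return (r + g) // 3           # Lemma 2: total ball count is the bottleneck

@lru_cache(maxsize=None)
def brute_force(r, g):
    """Try every legal table (two of one color, one of the other)."""
    best = 0
    if r >= 2 and g >= 1:
        best = max(best, 1 + brute_force(r - 2, g - 1))
    if g >= 2 and r >= 1:
        best = max(best, 1 + brute_force(r - 1, g - 2))
    return best

# The closed form agrees with exhaustive search on all small inputs.
assert all(max_tables_two_colors(r, g) == brute_force(r, g)
           for r in range(20) for g in range(20))
```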
Hopefully the reader has learned something new about how to tackle strategy problems. Next we need to solve the entire problem (with $$B \neq 0$$). In the next Idea Section, we will reuse some of the techniques learned here, as well as some of the results, to solve the entire problem.
It is highly recommended that the reader reads Idea #1 (the previous section) before reading this present Idea. There, we introduced many problem-solving techniques that can be used to solve this problem. We also solved a special case of the present problem: if $$B=0$$.

If the reader has already read Idea 1, I encourage the reader to try and derive the solution for the general case now. If you get stuck, feel free to return here and continue reading! But it is worth trying first!
As in the previous section, we assume that $$R \geq G \geq B$$.
In the previous section, we discovered that the answer partially depended on how much bigger $$R$$ (the largest pile) is than $$2G$$. Now, in the general case, let's try and make a similar observation. What if $$R$$ is really big in comparison to $$2G$$ or $$2B$$ or $$2G + 2B$$, etc.? Do we have any idea for a strategy, or upper bounds that work in this case?
It seems to make sense to always use a pair of red balls whenever $$R$$ is really big. So we might use the following strategy:
  1. If there are more green balls than blue balls, take two reds and a green.
  2. If there are more blue balls than green balls, take two reds and a blue.
  3. If there are equal blues and greens, pick arbitrarily.
  4. Repeat until no more moves can be made.
This seems like a reasonable strategy (when $$R$$ is really big). Can we prove it? Can we find some upper bounds to match it? Exactly "how big" is "really big"? When is this strategy optimal?
How many tables can we build with this strategy? Recall that we always take two red balls and one [green or blue] ball. Under this strategy, green balls and blue balls seem to be pretty much symmetric. So we might consider them combined. Moreover, we notice that we must stop this strategy either when we run out of red balls or when we run out of both green and blue balls. Since we are assuming that $$R$$ is really big, we can assume that the green and blue balls run out before the red balls do.

Under this strategy, each table uses up exactly one green or blue ball. This means that the total number of tables we will build is exactly $$G+B$$ (the total number of green or blue balls), assuming we have a "really big" number of red balls (so that the red balls do not run out). And how big is "really big"? Well, we just need enough red balls so that they do not run out within $$G+B$$ tables. Each table takes 2 red balls, so after placing $$k$$ tables (for any positive integer $$k$$), we will have $$R - 2k$$ red balls left (starting with $$R$$, removing 2 for each of the $$k$$ tables). We need the red balls to last, so we want $$R - 2k \geq 0$$ for all $$k$$ up to $$k = G+B$$. In particular, setting $$k = G+B$$, we want: \[R - 2(G+B) \geq 0\] This is the same as: \[R \geq 2(G+B)\] by rearranging the formula. So whenever $$R \geq 2(G+B)$$, we might as well perform this strategy. In particular, we can say that $$R$$ is "really big" whenever $$R \geq 2(G+B)$$. And whenever that is true, we can apply the strategy of "always taking two red balls and one non-red ball" as described above.
We now have a reasonable strategy in the case when $$R \geq 2(G+B)$$. It seems like it is always good to take two reds plus one non-red ball in each table. This yields $$G+B$$ tables. Is this optimal? Can we find an "upper bound" on the maximum number tables that matches this number (e.g.: a lemma that says we can never get more than $$G+B$$ tables by any strategy)? If we can do this, we will have proven that this strategy is optimal when $$R \geq 2(G+B)$$.
We now have a strategy and a lemma that we hope is true: "You can never get more than $$G+B$$ tables whenever $$R \geq 2(G+B)$$". Maybe it is true? Let's try to prove it!
Any table must take at least one green or one blue ball. So, there can be no more than $$G+B$$ tables in total by any strategy.
Consider any strategy, with any number of balls. Notice that, we can never have a table that takes three red balls (by the problem description), so every table must take at least one green ball or one blue ball (or else it would be a contradiction). This means, if we look at the total number of blue or green balls ($$G+B$$), each table in any strategy decreases this number by at least 1 (maybe 2 or 3, but definitely at least 1). So there can never be more than $$G+B$$ tables, because eventually this number will go to zero!

So, under any possible strategy, under any possible set of input balls, the answer must be no bigger than $$G+B$$. So $$G+B$$ is an upper bound.
In the special case when $$R \geq 2(G+B)$$, the strategy of always taking two red balls and one other ball is optimal. In this case, you achieve exactly $$G+B$$ tables.
We know, in this case, that the number of green/blue balls will run out first. And each table takes exactly one green/blue ball (as discussed above). So this strategy yields exactly $$G+B$$ tables. And by the previous lemma, no strategy can EVER get more than $$G+B$$ tables. So, by definition of "optimal", this strategy will always be optimal (when $$R \geq 2(G+B)$$).
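As a sanity check on this result, we can exhaustively search over all strategies for small inputs (a sketch in Python; the names are mine). Whenever $$R \geq 2(G+B)$$, the search confirms the answer is exactly $$G+B$$:

```python
from functools import lru_cache

# The seven legal table types (reds, greens, blues) taken per table:
# never three balls of one color.
MOVES = [(1, 1, 1),
         (2, 1, 0), (2, 0, 1),
         (1, 2, 0), (0, 2, 1),
         (1, 0, 2), (0, 1, 2)]

@lru_cache(maxsize=None)
def brute_force3(r, g, b):
    """Exhaustively try every legal table; return the maximum table count."""
    best = 0
    for dr, dg, db in MOVES:
        if r >= dr and g >= dg and b >= db:
            best = max(best, 1 + brute_force3(r - dr, g - dg, b - db))
    return best

# Whenever R >= 2(G+B), exhaustive search agrees: the answer is G + B.
assert all(brute_force3(r, g, b) == g + b
           for g in range(4) for b in range(4)
           for r in range(2 * (g + b), 2 * (g + b) + 5))
```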
Hooray! We have proven a simple lemma and shown how to solve the problem in a special case (when $$R$$ is big). This looks very familiar. In Idea Section #1 we had a very similar lemma. It is often a good idea to see how our special cases generalize to other cases. In fact, if you have been following since Idea Section #1, we will actually be using similar methods to solve the rest of the problem.
We can now assume that $$R \lt 2(G+B)$$ since we know how to solve the problem whenever $$R \geq 2(G+B)$$. Now, is there a way to solve the problem in this case? Can we think of a strategy that "might" work when the balls are more balanced?
Note that this is a similar approach as taken in Idea Section #1. We suspect that taking the balls in a balanced way might lead to an optimal solution. So we might do something like this:
  1. If $$R \geq (G+B)$$, take two red balls and either a green or a blue ball (whichever one is bigger?)
  2. If $$R \lt (G+B)$$, take one red ball and two other balls (blue+green, blue+blue, or green+green, whichever works)
  3. If there are no more moves left, do nothing
This seems like a reasonable strategy. But I'm not exactly sure if it works. As usual, let's try to prove it optimal by counting how many tables this will yield (in terms of $$R,G,$$ and $$B$$), and by finding some upper bound on the number of tables achievable.
The above algorithm yields exactly $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$ tables. Moreover, this is optimal.
We first show that this algorithm yields exactly $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$ tables.

Consider the algorithm. Suppose at the beginning we have that $$R \gt (G+B)$$. Then the algorithm will perform Step 1 (take two red balls and one other ball) repeatedly for a time. Eventually the number of red balls will become equal to the number of [green+blue] balls. Why? After making $$k$$ tables with Step 1, there will be $$R - 2k$$ red balls left (since each one takes two reds), and the number of [green+blue] balls will be $$(G+B) - k$$ (since there were $$G+B$$ to begin with, and we take away one ball for each of the $$k$$ tables). Now, we set $$R - 2k = (G+B) - k$$ and we find that \[k = R - (G+B).\] That is, once $$k = R - (G+B)$$ tables have been placed, the number of red balls will exactly equal the number of [green+blue] balls.

Otherwise, if $$(G+B) \gt R$$ is true at the beginning, then the algorithm will take step 2 repeatedly. We can also prove that eventually the two piles will become equal after some number of tables (the proof is analogous to above).

In any case, eventually the number of red balls will equal the number of green + blue balls. After this, the algorithm will alternate taking Step 1 (take two reds + one blue/green for a table) and then Step 2 (take one red + two blue/green balls for a table). Throughout this process, the balls will stay "balanced".

Eventually, when the process ends, we can see that there must only be 0, 1, or 2 balls leftover (altogether, from any color). I omit the proof, but it can be seen because the balls stay "nearly equal" throughout the process.

Let $$r$$ be the number of balls leftover. And let $$n$$ be the total number of tables made. Then, we originally had $$R+G+B$$ balls in total, and each table took three of these balls, so we must have that: \[r = (R+G+B) - 3n\] or (when rearranged) \[(R+G+B) = 3n + r\] And since $$0 \leq r \leq 2$$, we must have that $$r = (R+G+B) \mod \ 3$$, and that \[n = \left\lfloor\frac{R+G+B}{3}\right\rfloor\] (this is by definition of division and modulus; i.e.: "The Division Theorem").

So we have shown that this algorithm achieves exactly $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$ tables. We now need to prove that this is optimal by proving that this is also an upper bound for any strategy. To prove this we use a simple argument: under any strategy, every table must take exactly three balls! So, if any strategy makes $$k$$ tables, then it must use exactly $$3k$$ balls. And since we have no more than $$R+G+B$$ balls to use in total, any strategy is limited by: \[3k \leq R+G+B\] or, in other words: \[k \leq \left\lfloor\frac{R+G+B}{3}\right\rfloor\] Since our algorithm achieves exactly this much, our algorithm must be optimal.
This proof is very similar to the proof of our second strategy in Idea Section #1 (for the special case when $$B=0$$). In fact, everything is identical, except we replaced $$G$$ with $$G+B$$. This is nice! Sometimes looking at special cases makes it very easy to solve the entire problem. In this case, we could reuse the entire proof (changing only one slight thing)!

This can be useful whenever you get stuck.
We can combine all of our above knowledge to produce one final, simple algorithm for this problem.
  1. Given $$R,G,B \geq 0$$.
  2. If $$R \geq 2(G+B)$$, the answer is $$G+B$$ (which can be achieved by always taking two red balls and one [green or blue] ball).
  3. If $$R \lt 2(G+B)$$, the answer is $$\left\lfloor \frac{R+G+B}{3}\right \rfloor$$ (which can be achieved by the balanced approach described above)
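The final algorithm above is short enough to state as runnable code. Here is a sketch in Python (the function name is my own); Python's big integers make overflow a non-issue, while in C++ you would want 64-bit integers:

```python
def max_tables(r, g, b):
    """Maximum number of tables, via the two cases derived above."""
    r, g, b = sorted((r, g, b), reverse=True)   # assume r >= g >= b WLOG
    if r >= 2 * (g + b):
        return g + b                            # "really big R" case
    return (r + g + b) // 3                     # balanced case

# A few examples; the last one uses the maximum allowed ball counts.
assert max_tables(5, 4, 3) == 4
assert max_tables(1, 1, 1) == 1
assert max_tables(2_000_000_000, 2_000_000_000, 2_000_000_000) == 2_000_000_000
```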

This problem is a nice example of a greedy / strategy problem. In these kinds of problems, we are given a description of some process (often a type of game) and we must find a strategy that is guaranteed to win or to be optimal.

In Idea Section #1 above, I describe some problem solving techniques that can be used to approach these kinds of problems, and we consider a special case of the given problem (when the number of blue balls is zero). In Idea Section #2 we use those same techniques to solve the entire problem. If you are interested in learning how to reason and think about these kinds of problems, it is recommended that the reader take a look at one or both of these sections. Otherwise, we describe a shorter summary here!

There are generally two parts to finding strategies: 1) Finding the strategy, and 2) Proving that the strategy is optimal. By thinking of these as two separate tasks, it sometimes makes the problem easier. That turns out to be helpful in this problem.

We first "work backwards" and ask: "How would we prove or know that a strategy is optimal"? Immediately, we can provide some pretty simple upper bounds (limits) on the number of tables any algorithm or strategy can achieve. In particular:

  1. The number of tables will never be more than $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$. This is because, every table must take up exactly three balls, and we only have $$R+G+B$$ balls in total, so we can't have more than $$(R+G+B) / 3$$ tables in total. This is a limit for any strategy.
  2. No table can have all red balls (this is from the problem statement). An equivalent way of saying this is: Every table must use at least one green or one blue ball. So consider the quantity $$G+B$$ (the total number of green or blue balls, combined). Every table must decrease this number by at least 1 (maybe 2 or 3, but definitely at least 1). So there can be no more than $$G+B$$ tables made in total by any strategy, otherwise this number would go below zero.
  3. Similarly, no table can have all green balls, so there can be no more than $$R+B$$ tables in total.
  4. And finally there can be no more than $$R+G$$ tables in total, by similar reasoning.
So any strategy will achieve no more than $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$, or $$B+G$$, or $$R+G$$ or $$R+B$$ tables.

How do these help to "prove that a strategy is optimal" when we find one? Well, suppose we have some strategy. If the strategy guarantees that it can achieve any one of these bounds then it must be optimal. For example, suppose we have a strategy $$A$$ that guarantees that we can make exactly $$R+G$$ tables. Then, because every possible strategy can achieve no more than $$R+G$$ tables, we know that every possible strategy must do equal to or worse than the strategy $$A$$. So that means that $$A$$ must be optimal (by definition of optimal)! And this would hold if we could find a strategy that matches any of those bounds.

Now, there might not always be such a strategy. Maybe the best one never achieves any of those upper bounds (i.e.: the best strategy might, for example, always give $$R+G-1$$ or something like that). This can happen. However, in this particular problem, this never happens. There are many strategies that always achieve \[\min\left\{\left\lfloor\frac{R+G+B}{3}\right\rfloor, B+G, R+G, R+B\right\}\] tables, and are therefore optimal by reasoning above.

In a contest situation, most Division 1 coders wouldn't even try to find such a strategy. Once we have those upper bounds, we might "convince" ourselves (or guess) that an optimal strategy always achieves one of these bounds, so we might quickly write a program that reads in $$R,G,B$$ and prints out $$\min\left\{\left\lfloor\frac{R+G+B}{3}\right\rfloor, B+G, R+G, R+B\right\}$$ (without even knowing what the strategy is to achieve it). And this would be Accepted (watch out for overflow!), because the problem only asks you for the number of tables that such a strategy achieves, rather than to actually describe the strategy itself. But this is risky, because you don't even know if such a strategy exists. However, it saves time, especially if you can convince yourself that this is "intuitively" correct.
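The "minimum of the four bounds" shortcut really can be written in one line. A sketch in Python (names are mine), checked against the case-based formula from Idea Section #2 on a grid of small inputs:

```python
def answer_one_liner(r, g, b):
    """Minimum of the four upper bounds; always achievable for this problem."""
    return min((r + g + b) // 3, g + b, r + b, r + g)

def answer_case_form(r, g, b):
    """The case-based answer derived in Idea Section #2, for comparison."""
    r, g, b = sorted((r, g, b), reverse=True)
    return g + b if r >= 2 * (g + b) else (r + g + b) // 3

# The two formulas agree everywhere (here, verified on a small grid).
assert all(answer_one_liner(r, g, b) == answer_case_form(r, g, b)
           for r in range(15) for g in range(15) for b in range(15))
```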

In Idea Section #2 above we actually found such an algorithm/strategy to achieve these upper bounds. Here it is:

  1. Read $$R,G,B$$
  2. Sort them so that $$R \geq G \geq B$$ without loss of generality.
  3. If $$R \geq 2(G+B)$$ then repeatedly build tables using two red balls and a single green or blue ball. We can prove that this will yield exactly $$G+B$$ tables (see the proofs above), and is therefore optimal by our matching upper bound.
  4. If $$R \lt 2(G+B)$$ then take a balanced approach. Whenever $$R \geq (G+B)$$ take two red balls and one green or blue ball. Whenever $$R \lt (G+B)$$, take one red ball and two other balls. We can prove that this will yield exactly $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$ tables, and is therefore optimal by our matching upper bound.

And that's all. We were able to work backwards to find a reasonable upper bound, and then work forwards to find the strategy that achieves it.

    (All of the following results assume $$R \geq G \geq B$$)
  1. In the simplified case, whenever $$B=0$$, if $$R \geq 2G$$ the optimal strategy is to always take two reds and one green. This was our first interesting result. This always yielded exactly $$G$$ tables.
  2. In the simplified case, whenever $$B=0$$, if $$R \lt 2G$$ the optimal strategy is to take two reds and one green whenever $$R \geq G$$ or take two greens and one red whenever $$G \geq R$$. Our second result, showing that a "balanced" approach works. This always yields $$\left\lfloor \frac{R+G}{3} \right\rfloor$$ tables, which is optimal.
  3. In the general case, if $$R \geq 2(G+B)$$ the optimal strategy is to always take two red balls and one other ball. The green and blue balls are treated the same. This always yields $$G+B$$ tables, which is optimal.
  4. In the general case, if $$R \lt 2(G+B)$$ the optimal strategy is to always take two red balls and one other ball whenever $$R \geq (G+B)$$ or to take one red ball and two other balls whenever $$R \lt (G+B)$$. Our final result. This always yields $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$ tables, which is optimal.
  5. No strategy can achieve more than $$\min\left\{\left\lfloor\frac{R+G+B}{3}\right\rfloor, B+G, R+G, R+B\right\}$$ tables. Moreover, there always exists a strategy that achieves this number. Therefore, this is always optimal. This is the simplest characterization of the final answer. You can write this solution in one line of code!
  1. Recognizing that this problem is asking for a "strategy" helps to point us in the right direction. Having solved other strategy problems, recognizing that you can break the problem into two parts (finding a strategy and finding bounds) is also very useful. These are things you learn as you solve more problems and learn from others.
  2. With many problems (strategy problems in specific), looking at some examples is one of the best ways to generate insights.
  3. It's always good to write down your observations and ideas, especially as you consider examples.
  4. We made a lot of simplifying assumptions to make life easier. We started by assuming the balls were sorted so that $$R \geq G \geq B$$. Then we looked at special cases of the problem: "What happens if $$B=0$$?" or "What happens if $$R \geq 2G$$?" or "What happens if $$R \leq 2G$$?". We later combined these and asked "What happens if $$R \geq 2(G+B)$$?" and so on. These questions were exactly the set of questions necessary to build up the solution from simpler ideas. Simplifying a problem into special cases is often a great way to make progress when you feel stuck. It's somewhat of an art to be able to ask yourself good simple questions to come to the right answers.
  5. Once we solved simple cases we were careful to carry over the techniques and results to the general case. It turned out that every proof/result in the general case was analogous to some proof/result in the special case. In fact, every lemma in Idea Section #2 can come from something in Idea Section #1 by replacing $$G$$ with $$(G+B)$$! Generalization is useful in problem-solving!
  6. Near the end, we knew what we wanted to be true (based on our previous intuition). We had a strategy that achieved $$G+B$$ tables, and we hoped that it was optimal. It turns out that this was optimal, so the proof logically followed.
  • "Without loss of generality": This is a phrase you hear often in mathematics. In general, when some items are "symmetric" (i.e.: indistringuishable by certain important properties) its sometimes nice to pick an arbitrary ordering on them that does not break the symmetry in any unreasonable way. In our case, we sorted the ball types so that $$R \geq G \geq B$$. You will find that sorting often makes life easier in many problems, especially when the original order does not matter.
  • Matching lower and upper bounds: This is a concept that appears a lot. If you can find a "certificate" saying "no object in the set has value greater than $$v$$" (i.e.: an upper bound) and you find an object in the set with value exactly or at least $$v$$ (i.e.: a lower bound), then the pair ("certificate" and "object") shows that $$v$$ is the optimal value. This "Duality" comes up in many problems. For example, the Max-Flow-Min-Cut Theorem states that the maximum value of any st-flow in a network is equal to the minimum value of any st-cut in the network. (If you don't know what these mean, see the related problems below.)
  • Problem Simplification helps us not to get stuck: We noticed that the simpler problem (when $$B=0$$) was much easier to solve. By considering this case, we were able to make at least some progress without getting stuck. We found the added bonus that all of the results we proved in the special case could be generalized to results in the general case. This allows us to "kill two birds with one stone". This is often the case in real life (in contests and in research problems).
  • Greedy / Strategy / Construction
  • Math / Formulas
  • Duality / Generate and Test / Minimax