Table Decorations
You have $$R$$ red balls, $$G$$ green balls, and $$B$$ blue balls (with $$0 \leq R,G,B \leq 2 \cdot 10^9$$). You would like to construct a set of tables, where each table is decorated with exactly three balls, and no table may be decorated with three balls of the same colour.
What is the maximum number of tables you can make?
 Strategy
 Greedy
 Graph?
 Erdős-Gallai Theorem (Graph Construction)?
 Find Matching Lower and Upper Bounds
We will work through some problem-solving processes below that we can follow to generate these insights, and hopefully to solve the problem.
How do we handle such strategy problems? Well, if we are trying to "find an optimal strategy", this task can be broken down into two separate tasks:
 Find a strategy
 Prove that the strategy is optimal (or, prove that this is the best you can do)
Our approach to this problem will be to generate some insights about the specific problem (through examples). Then we will consider special cases of this problem (for example, what if one of the colours is gone?) and try to generalize our ideas. Lastly, we will come up with strategies that work in certain cases and try to prove that they are optimal by finding matching bounds.
So let's begin.
 Make a table with one red, one green, and one blue ball;
 Make a table with two reds, and one green;
 Make a table with two reds, and one blue;
 Make a table with two greens, and one red;
 etc.
 Take two reds and one green; or
 Take two greens and one red
 Always take two reds and one green;
 Always take two greens and one red;
 Sometimes take two reds, sometimes take two greens.
If $$R = G$$ then this strategy is really bad: $$R$$ decreases by 2 while $$G$$ decreases by only 1, so we will have $$R \lt G$$ after the very first move.
But if $$R \gg G$$, for example if $$R \geq 2G$$, then this strategy seems more reasonable. If $$R$$ is really big, then we might as well take pairs of reds with a single green, right? This will eventually use up all of the greens, and we will be left with some reds.
So, how many tables will be made by this strategy? We notice that, whenever $$R \geq 2G$$, if we keep taking two reds and one green, we will eventually run out of green balls (the reds cannot run out first, since $$R \geq 2G$$).
This tells us that, if $$n^*$$ is the number of tables we make in total, then $$G - n^* = 0$$, and so we make exactly $$n^* = G$$ tables in total (we make exactly as many tables as the number of original green balls).
And what is the maximum number of tables we could make by any strategy? Notice that in any strategy, if we make $$n$$ tables, then each of those tables must take at least one green ball (obviously, otherwise it would have to take three red balls, which is not allowed). So the number of tables can be at most $$G$$ (one green ball per table)! So if $$n^*$$ is the maximum number of tables, then $$n^* \leq G$$ for any strategy.
So, the maximum number of tables we could possibly get (by any strategy) is no more than $$G$$. And the number of tables we got by our strategy is exactly $$G$$. So, our strategy is optimal.
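As a sanity check, here is a minimal Python sketch (the function name is my own) that simulates the "two reds, one green" strategy:

```python
def tables_two_reds_one_green(R, G):
    """Simulate the B = 0 strategy for R >= 2G: repeatedly make a table
    from two reds and one green until a colour runs out."""
    tables = 0
    while R >= 2 and G >= 1:
        R -= 2
        G -= 1
        tables += 1
    return tables

# Whenever R >= 2G the greens run out first, giving exactly G tables.
for R, G in [(10, 5), (11, 5), (100, 3), (7, 0)]:
    assert tables_two_reds_one_green(R, G) == G
```

The loop makes one table per green ball, and since $$R \geq 2G$$ the reds survive all $$G$$ tables.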
The proof of optimality came from "finding matching bounds". We showed an upper bound (maximum) for the number of tables achievable by any strategy, and we exhibited a strategy that achieves exactly that bound (a matching lower bound).
This concept of matching lower and upper bounds is generally (in this context) called duality.
Let us restrict our attention now to the case where $$R \lt 2G$$ (and recall that $$R \geq G$$ is still assumed). We already saw an example where the above strategy fails (for example, if $$R = G$$, then it can fail pretty easily). Can we find a better strategy that works in this case?
 If there are more reds than greens, take two reds and one green.
 If there are more greens than reds, then take two greens and one red.
 If they are equal, pick arbitrarily (for example, pick two reds and one green).
 If there are no more possible moves to be made, then we're done.
 Otherwise, repeat.
We start with the "upper bound". What is the maximum number of tables achievable by any strategy? Besides the upper bound mentioned above, we also notice that since every table takes 3 balls, and there are $$R+G$$ balls in total, there can be at most $$\left\lfloor \frac{R+G}{3} \right\rfloor$$ tables used in total (trivially). So $$\left\lfloor \frac{R+G}{3} \right\rfloor$$ is an upper limit on the number of tables.
Does this strategy always achieve $$\left\lfloor \frac{R+G}{3} \right\rfloor$$ tables? Consider the algorithm. It will perform Step 1 repeatedly (take two reds and one green) for a time. Since $$R \lt 2G$$, and since the number of red balls depletes by 2 every time a green ball depletes by one, we will eventually end up in a case where the number of red balls is equal to the number of green balls, and both are nonzero. Mathematically, if we let $$k := R-G$$, then after $$k$$ rounds of Step 1, there will be $$R' := R - 2k$$ red balls left (since we take two red balls each time), and $$G' := G - k$$ green balls left (since we take one green ball each time). Then notice that: \[\begin{array}{rcl} R' & = & R - 2k \\ & = & R - 2(R-G) \\ & = & 2G - R \\ & = & G - (R - G) \\ & = & G - k \\ & = & G' \end{array}\] So, after $$R-G$$ steps, the two "piles" will become equal. After this, the algorithm will "alternate" performing Step 1 and Step 2. It will take two reds and a green, followed by two greens and a red, and so on. Consider when this process ends. We will (and you can prove this) either have one red and one green left, or just one red, or just one green left, or nothing at all. In particular, we have at most 2 balls leftover.
This is actually optimal. Let $$r$$ be the "remainder" (the number of balls left over) after this process. Let $$n$$ be the number of tables made in this process. Notice that each table we made took three balls (by definition), and we had $$R+G$$ balls in total to begin with, so we have the equation: \[r = (R+G) - 3n\] Also notice that $$0 \leq r \lt 3$$ (as mentioned above). So it must be the case that: \[n = \frac{(R+G) - r}{3} = \left\lfloor \frac{R+G}{3} \right\rfloor\] And this matches our upper bound, so this strategy is optimal.
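The balanced procedure above can also be simulated directly. A Python sketch (the function name is my own):

```python
def tables_balanced_two_colours(R, G):
    """Simulate the balanced B = 0 strategy: take two balls from the
    larger pile and one from the smaller, until no move is possible."""
    tables = 0
    while R >= 1 and G >= 1 and R + G >= 3:
        if R >= G:
            R, G = R - 2, G - 1   # Step 1: two reds, one green
        else:
            R, G = R - 1, G - 2   # Step 2: two greens, one red
        tables += 1
    return tables

# For G <= R < 2G this always yields floor((R+G)/3) tables.
for R, G in [(5, 4), (4, 4), (5, 3), (3, 2), (2, 2)]:
    assert tables_balanced_two_colours(R, G) == (R + G) // 3
```

The loop guard encodes "a move is possible": both colours nonempty and at least three balls remaining.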
If you ever get stuck while solving a problem, try to simplify it and consider special cases. This can be done by restricting some of the parameters, ignoring them altogether, or just changing the question / rules a bit.
 Assume $$R \geq G$$.
 If $$R \geq 2G$$ then the answer is $$G$$ (which can be achieved by repeatedly making tables with 2 red and 1 green ball) by Lemma 1;
 Otherwise, if $$R \geq G$$ but $$R \lt 2G$$ then the answer is $$\left\lfloor\frac{R+G}{3}\right\rfloor$$ (achieved by the "balanced" approach in Algorithm 1) by Lemma 2.
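Putting the two lemmas together, the whole $$B=0$$ case fits in a few lines. A Python sketch (names are my own); the final loop also checks by brute force over small inputs that both branches collapse to a single min-expression:

```python
def max_tables_two_colours(R, G):
    """Closed-form answer for the B = 0 case (inputs in any order)."""
    R, G = max(R, G), min(R, G)    # sort so that R >= G, w.l.o.g.
    if R >= 2 * G:
        return G                   # Lemma 1: one green per table
    return (R + G) // 3            # Lemma 2: the balanced strategy

# Both branches collapse to min(smaller pile, floor(total/3)):
for R in range(30):
    for G in range(30):
        assert max_tables_two_colours(R, G) == min(min(R, G), (R + G) // 3)
```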
If you have already read Idea 1, I encourage you to try to derive the solution for the general case now. If you get stuck, feel free to return here and continue reading! But it is worth trying first!
 If there are more green balls than blue balls, take two reds and a green.
 If there are more blue balls than green balls, take two reds and a blue.
 If there are equal blues and greens, pick arbitrarily.
 Repeat until no more moves can be made.
Under this strategy, each table uses up exactly one green or blue ball. This means that the total number of tables we build is exactly $$G+B$$ (the total number of green or blue balls), assuming we have a "really big" number of red balls (so that the red balls do not run out). And how big is "really big"? Well, we just need enough red balls so that they do not run out after $$G+B$$ tables. Each table takes 2 red balls, so after placing $$k$$ tables (for any positive integer $$k$$), we will have $$R - 2k$$ red balls left (starting with $$R$$, removing 2 for each of the $$k$$ tables). And we need the red balls to last, so we want $$R - 2k \geq 0$$ for all $$k$$ up to $$k = G+B$$. In particular, we want: \[R - 2(G+B) \geq 0\] since we set $$k = G+B$$. This is the same as: \[R \geq 2(G+B)\] by rearranging the formula. Based on all of this, whenever $$R \geq 2(G+B)$$, we might as well perform this strategy. In particular, we can say that $$R$$ is "really big" whenever $$R \geq 2(G+B)$$.
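This counting argument can be checked with a small simulation. A minimal Python sketch (the function name is my own):

```python
def tables_reds_dominant(R, G, B):
    """Simulate the strategy for R >= 2*(G+B): every table takes two reds
    plus one ball from the larger of the green/blue piles."""
    tables = 0
    while R >= 2 and G + B >= 1:
        if G >= B:
            G -= 1
        else:
            B -= 1
        R -= 2
        tables += 1
    return tables

# When R >= 2*(G+B), the reds never run out, so we get exactly G+B tables.
for R, G, B in [(20, 4, 5), (18, 9, 0), (100, 1, 1)]:
    assert tables_reds_dominant(R, G, B) == G + B
```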
So, under any possible strategy, under any possible set of input balls, the answer must be no bigger than $$G+B$$. So $$G+B$$ is an upper bound.
 If $$R \geq (G+B)$$, take two red balls and either a green or a blue ball (taking from whichever of those two piles is larger)
 If $$R \lt (G+B)$$, take one red ball and two other balls (blue+green, blue+blue, or green+green, whichever works)
 If there are no more moves left, do nothing
Consider the algorithm. Suppose at the beginning we have that $$R \gt (G+B)$$. Then the algorithm will perform Step 1 (take two red balls and one other ball) repeatedly for a time. Eventually the number of red balls will become equal to the number of [green+blue] balls. Why? After making $$k$$ tables with Step 1, there will be $$R - 2k$$ red balls left (since each one takes two reds), and the number of [green+blue] balls will be $$(G+B) - k$$ (since there were $$G+B$$ to begin with, and we take away one ball for each of the $$k$$ tables). Now, we set $$R - 2k = (G+B) - k$$ and we find that \[k = R - (G+B).\] That is, once $$k = R - (G+B)$$ tables have been placed, the number of red balls will exactly equal the number of [green+blue] balls.
Otherwise, if $$(G+B) \gt R$$ is true at the beginning, then the algorithm will take step 2 repeatedly. We can also prove that eventually the two piles will become equal after some number of tables (the proof is analogous to above).
In any case, eventually the number of red balls will equal the number of green + blue balls. After this, the algorithm will alternate between Step 1 (take two reds + one blue/green for a table) and Step 2 (take one red + two blue/green balls for a table). Throughout this process, the balls will stay "balanced".
Eventually, when the process ends, we can see that there must only be 0, 1, or 2 balls leftover (altogether, from any color). I omit the proof, but it can be seen because the balls stay "nearly equal" throughout the process.
Let $$r$$ be the number of balls leftover. And let $$n$$ be the total number of tables made. Then, we originally had $$R+G+B$$ balls in total, and each table took three of these balls, so we must have that: \[r = (R+G+B) - 3n\] or (when rearranged) \[(R+G+B) = 3n + r\] And since $$0 \leq r \leq 2$$, we must have that $$r = (R+G+B) \bmod 3$$, and that \[n = \left\lfloor\frac{R+G+B}{3}\right\rfloor\] (this is by definition of division and modulus; i.e.: "The Division Theorem").
So we have shown that this algorithm achieves exactly $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$ tables. We now need to prove that this is optimal by proving that this is also an upper bound for any strategy. To prove this we use a simple argument: under any strategy, every table must take exactly three balls! So, if any strategy makes $$k$$ tables, then it must use exactly $$3k$$ balls. And since we have no more than $$R+G+B$$ balls to use in total, we must have $$3k \leq R+G+B$$, and therefore $$k \leq \left\lfloor\frac{R+G+B}{3}\right\rfloor$$ (since $$k$$ is an integer). This matches what our algorithm achieves, so the algorithm is optimal.
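The balanced procedure itself can be simulated directly. A Python sketch (function and helper names are my own), assuming the piles are given with $$R$$ largest:

```python
def take_other(G, B):
    """Remove one ball from the larger of the green/blue piles."""
    return (G - 1, B) if G >= B else (G, B - 1)

def tables_balanced_three_colours(R, G, B):
    """Simulate the balanced strategy, assuming R is the largest pile:
    take two reds and one other ball while R >= G+B, otherwise
    take one red and two other balls."""
    tables = 0
    while R >= 1 and G + B >= 1 and R + G + B >= 3:
        if R >= G + B:
            R -= 2                   # Step 1: two reds + one other
            G, B = take_other(G, B)
        else:
            R -= 1                   # Step 2: one red + two others
            G, B = take_other(G, B)
            G, B = take_other(G, B)
        tables += 1
    return tables

# For R >= G >= B with R < 2*(G+B) this matches the upper bound.
for R, G, B in [(4, 3, 2), (2, 2, 2), (5, 3, 1), (7, 4, 4)]:
    assert tables_balanced_three_colours(R, G, B) == (R + G + B) // 3
```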
This can be useful whenever you get stuck.
 Given $$R,G,B \geq 0$$.
 If $$R \geq 2(G+B)$$, the answer is $$G+B$$ (which can be achieved by always taking two red balls and one [green or blue] ball).
 If $$R \lt 2(G+B)$$, the answer is $$\left\lfloor \frac{R+G+B}{3}\right\rfloor$$ (which can be achieved by the balanced approach described above).
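To gain confidence in this case analysis, we can compare it against an exhaustive search over all possible move sequences on small inputs. A Python sketch (helper names are my own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best(r, g, b):
    """Exhaustive search: try every legal table (three balls, not all one colour)."""
    moves = [(2, 1, 0), (2, 0, 1), (1, 2, 0), (0, 2, 1),
             (1, 0, 2), (0, 1, 2), (1, 1, 1)]
    results = [1 + best(r - dr, g - dg, b - db)
               for dr, dg, db in moves
               if r >= dr and g >= dg and b >= db]
    return max(results, default=0)

def formula(r, g, b):
    # the case analysis above, after sorting so the first pile is largest
    r, g, b = sorted((r, g, b), reverse=True)
    return g + b if r >= 2 * (g + b) else (r + g + b) // 3

# The case analysis agrees with exhaustive search on all small inputs.
for r in range(8):
    for g in range(8):
        for b in range(8):
            assert best(r, g, b) == formula(r, g, b)
```

Since tables can be made in any order, memoizing on the remaining pile sizes covers every strategy.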
This problem is a nice example of a greedy / strategy problem. In these kinds of problems, we are given a description of some process (often a type of game) and we must find a strategy that is guaranteed to win or to be optimal.
In Idea Section #1 above, I describe some problem solving techniques that can be used to approach these kinds of problems, and we consider a special case of the given problem (when the number of blue balls is zero). In Idea Section #2 we use those same techniques to solve the entire problem. If you are interested in learning how to reason and think about these kinds of problems, it is recommended that the reader take a look at one or both of these sections. Otherwise, we describe a shorter summary here!
There are generally two parts to finding strategies: 1) Finding the strategy, and 2) Proving that the strategy is optimal. By thinking of these as two separate tasks, it sometimes makes the problem easier. That turns out to be helpful in this problem.
We first "work backwards" and ask: "How would we prove or know that a strategy is optimal"? Immediately, we can provide some pretty simple upper bounds (limits) on the number of tables any algorithm or strategy can achieve. In particular:
 The number of tables will never be more than $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$. This is because, every table must take up exactly three balls, and we only have $$R+G+B$$ balls in total, so we can't have more than $$(R+G+B) / 3$$ tables in total. This is a limit for any strategy.
 No table can have all red balls (this is from the problem statement). An equivalent way of saying this is: Every table must use at least one green or one blue ball. So consider the quantity $$G+B$$ (the total number of green or blue balls, combined). Every table must decrease this number by at least 1 (maybe 2 or 3, but definitely at least 1). So there can be no more than $$G+B$$ tables made in total by any strategy, otherwise this number would go below zero.
 Similarly, no table can have all green balls, so there can be no more than $$R+B$$ tables in total.
 And finally there can be no more than $$R+G$$ tables in total, by similar reasoning.
How do these help to "prove that a strategy is optimal" when we find one? Well, suppose we have some strategy. If the strategy guarantees that it can achieve any one of these bounds then it must be optimal. For example, suppose we have a strategy $$A$$ that guarantees that we can make exactly $$R+G$$ tables. Then, because every possible strategy can achieve no more than $$R+G$$ tables, we know that every possible strategy must do equal to or worse than the strategy $$A$$. So that means that $$A$$ must be optimal (by definition of optimal)! And this would hold if we could find a strategy that matches any of those bounds.
Now, there might not always be such a strategy. Maybe the best one never achieves any of those upper bounds (i.e.: the best strategy might, for example, always give $$R+G-1$$ or something like that). This can happen. However, in this particular problem, this never happens. There are many strategies that always achieve \[\min\left\{\left\lfloor\frac{R+G+B}{3}\right\rfloor, B+G, R+G, R+B\right\}\] tables, and are therefore optimal by reasoning above.
In a contest situation, most Division 1 coders wouldn't even try to find such a strategy. Once we have those upper bounds, we might "convince" ourselves (or guess) that an optimal strategy always achieves one of these bounds, so we might quickly write a program that reads in $$R,G,B$$ and prints out $$\min\left\{\left\lfloor\frac{R+G+B}{3}\right\rfloor, B+G, R+G, R+B\right\}$$ (without even knowing what the strategy is to achieve it). And this would be Accepted (watch out for overflow!), because the problem only asks you for the number of tables that such a strategy achieves, rather than to actually describe the strategy itself. But this is risky, because you don't even know if such a strategy exists. However, it saves time, especially if you can convince yourself that this is "intuitively" correct.
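As a concrete sketch of such a submission (in Python, whose integers cannot overflow; in C++ you would want 64-bit types), the whole solution might look like:

```python
def solve(r, g, b):
    # the minimum of the four upper bounds derived above
    return min((r + g + b) // 3, g + b, r + b, r + g)

# In a contest you would wrap this in I/O, e.g. (format depends on the judge):
#   r, g, b = map(int, input().split())
#   print(solve(r, g, b))

assert solve(4, 1, 1) == 2
assert solve(2 * 10**9, 2 * 10**9, 2 * 10**9) == 2 * 10**9
```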
In Idea Section #2 above we actually found such an algorithm/strategy to achieve these upper bounds. Here it is:
 Read $$R,G,B$$
 Sort them so that $$R \geq G \geq B$$ without loss of generality.
 If $$R \geq 2(G+B)$$ then repeatedly build tables using two red balls and a single green or blue ball. We can prove that this will yield exactly $$G+B$$ tables (see the proofs above), and is therefore optimal by our matching upper bound.
 If $$R \lt 2(G+B)$$ then take a balanced approach. Whenever $$R \geq (G+B)$$ take two red balls and one green or blue ball. Whenever $$R \lt (G+B)$$, take one red ball and two other balls. We can prove that this will yield exactly $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$ tables, and is therefore optimal by our matching upper bound.
And that's all. We were able to work backwards to find a reasonable upper bound and then work forwards to find the strategy that achieves this.

(All of the following results assume $$R \geq G \geq B$$)
 In the simplified case, whenever $$B=0$$, if $$R \geq 2G$$ the optimal strategy is to always take two reds and one green. This was our first interesting result. This always yielded exactly $$G$$ tables.
 In the simplified case, whenever $$B=0$$, if $$R \lt 2G$$ the optimal strategy is to take two reds and one green whenever $$R \geq G$$ or take two greens and one red whenever $$G \geq R$$. Our second result, showing that a "balanced" approach works. This always yields $$\left\lfloor \frac{R+G}{3} \right\rfloor$$ tables, which is optimal.
 In the general case, if $$R \geq 2(G+B)$$ the optimal strategy is to always take two red balls and one other ball. The green and blue balls are treated the same. This always yields $$G+B$$ tables, which is optimal.
 In the general case, if $$R \lt 2(G+B)$$ the optimal strategy is to always take two red balls and one other ball whenever $$R \geq (G+B)$$ or to take one red ball and two other balls whenever $$R \lt (G+B)$$. Our final result. This always yields $$\left\lfloor\frac{R+G+B}{3}\right\rfloor$$ tables, which is optimal.
 No strategy can achieve more than $$\min\left\{\left\lfloor\frac{R+G+B}{3}\right\rfloor, B+G, R+G, R+B\right\}$$ tables. Moreover, there always exists a strategy that achieves this number. Therefore, this is always optimal. This is the simplest characterization of the final answer. You can write this solution in one line of code!
 Recognizing that this problem is asking for a "strategy" helps to point us in the right direction. Having solved other strategy problems, recognizing that you can break the problem into two parts (finding a strategy and finding bounds) is also very useful. These are things you learn as you solve more problems and learn from others.
 With many problems (strategy problems in specific), looking at some examples is one of the best ways to generate insights.
 It's always good to write down your observations and ideas, especially as you consider examples.
 We made a lot of simplifying assumptions to make life easier. We started by assuming the balls were sorted so that $$R \geq G \geq B$$. Then we looked at special cases of the problem: "What happens if $$B=0$$?" or "What happens if $$R \geq 2G$$?" or "What happens if $$R \leq 2G$$?". We later combined these and asked "What happens if $$R \geq 2(G+B)$$?" and so on. These questions were exactly the set of questions necessary to build up the solution from simpler ideas. Simplifying a problem into special cases is often a great way to make progress when you feel stuck. It's somewhat of an art to be able to ask yourself good simple questions to come to the right answers.
 Once we solved simple cases we were careful to carry over the techniques and results to the general case. It turned out that every proof/result in the general case was analogous to some proof/result in the special case. In fact, every lemma in Idea Section #2 can come from something in Idea Section #1 by replacing $$G$$ with $$(G+B)$$! Generalization is useful in problem-solving!
 Near the end, we knew what we wanted to be true (based on our previous intuition). We had a strategy that achieved $$G+B$$ tables, and we hoped that it was optimal. It turns out that this was optimal, so the proof logically followed.
 "Without loss of generality": This is a phrase you hear often in mathematics. In general, when some items are "symmetric" (i.e.: indistinguishable by certain important properties) it's sometimes nice to pick an arbitrary ordering on them that does not break the symmetry in any unreasonable way. In our case, we sorted the ball types so that $$R \geq G \geq B$$. You will find that sorting often makes life easier in many problems, especially when the original order does not matter.
 Matching lower and upper bounds: This is a concept that appears a lot. If you can find a "certificate" saying "no object in the set has value greater than $$v$$" (i.e.: an upper bound) and you find an object in the set with value exactly or at least $$v$$ (i.e.: a lower bound), then the pair ("certificate" and "object") shows that $$v$$ is the optimal value. See the related problems below, but this "Duality" comes up in many problems. For example, the Max-Flow Min-Cut Theorem states that the maximum value of any s-t flow in a network is equal to the minimum value of any s-t cut in the network. This is an example of duality. (If you don't know what these mean, see the related problems below. Sometimes finding one of the two bounds is much easier than finding the other.)
 Problem Simplification helps us not to get stuck: We noticed that the simpler problem (when $$B=0$$) was much easier to solve. By considering this case, we were able to make at least some progress without getting stuck. We found the added bonus that all of the results we proved in the special case could be generalized to results in the general case. This allows us to "kill two birds with one stone". This is often the case in real life (in contests and in research problems).
 Greedy / Strategy / Construction
 Math / Formulas
 Duality / Generate and Test / Minimax