Levko And Strings

We are given a string $$S$$ of length $$N \leq 2,000$$. Let $$T$$ be any string of length $$N$$. The beauty of $$T$$ relative to $$S$$ is the number of pairs $$(i,j)$$ so that $$1 \leq i \leq j \leq N$$ and $$T[i..j] > S[i..j]$$, where $$>$$ means lexicographically. That is, the beauty of a string is the number of substrings which compare greater than the corresponding substring of $$S$$. Given $$S$$ and an integer $$K$$, how many strings are there with a beauty equal to $$K$$, relative to the string $$S$$?
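For concreteness, the beauty can be computed directly from the definition. Here is a brute-force sketch in Python (0-based indices; fine for tiny inputs, but hopelessly slow for $$N = 2{,}000$$):

```python
def beauty(T, S):
    """Count pairs (i, j), 0-based with i <= j, such that the substring
    T[i..j] compares lexicographically greater than S[i..j]."""
    assert len(T) == len(S)
    N = len(S)
    return sum(1 for i in range(N) for j in range(i, N)
               if T[i:j + 1] > S[i:j + 1])
```

For example, `beauty("ba", "ab")` counts the pairs $$(1,1)$$ (since "b" > "a") and $$(1,2)$$ (since "ba" > "ab"), so it returns 2.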
  1. Substrings
  2. String matching / suffix arrays?
  3. DP
  4. Small $$N$$,$$K$$, maybe $$O(NK)$$ solution
  5. Counting
The constraints and my experiences seem to imply some kind of Dynamic Programming approach where I can count the strings based on smaller strings. The table/function will probably look like $$dp[n][k]$$. I'm not sure, yet, what the dimensions or the values would actually mean though.
At this point, I decided to follow my intuition about the dp. I couldn't easily describe the dimensions or how the sub-problems would actually be combined. Furthermore, any ideas I did have (at least mentally) for the dp seemed like they would be $$O(N^2K)$$ in the end. Instead of getting stuck here, though, I decided to fully formulate a dynamic programming solution (i.e.: write out the function and what the sub-problems actually are). I was hoping that, once I got a correct answer, I could optimize the solution later to make sure it ran efficiently.
Sometimes it is easy to come up with a dynamic programming solution (or other kind of solution) that is correct but asymptotically too slow for a given problem. Many people, myself included, will often get stuck thinking "Well, there's no point in implementing this DP solution because it is definitely too slow."

Usually, this "too slow" comes from a single extra (often redundant) dimension in your state-space. For example, maybe you have dp[a][b][c], but it turns out (for whatever reason) that the only valid "c" values always satisfy the equation c := a+b+3. Then you do not need the extra dimension, and you can reduce the state-space and (likely) the overall complexity of your solution.

Other times, maybe the state-space is correctly characterized (i.e.: no redundant dimensions), but the transitions are costly in some way. For instance, instead of thinking of dp[n] as the sum of values from 1..n (which would require combining n things to compute it), we might write: dp[n] = (n'th value) + dp[n-1]. That would (implicitly) sum over all values (i.e.: by induction, if dp[n-1] is also the sum of values from 1..n-1). In the naive form, we did not reuse the overlapping sub-problems nicely, and a minor adjustment fixes it.

In all cases, I think it's best to simply write down the mathematical formula for the state-space and the dp. I also think it's wise to write out PRECISELY what the dp means, e.g.: dp[n][k] = the number of strings of length n that have beauty exactly k when compared against S[1..n]. This is precise, and can be verified. Once all of this is written down and the formulas are derived, we can often find the key observation necessary to make the dp "fast" simply by inspecting the formula. For example, we might see somewhere in the formula a big "sum". Then we can look at ways of turning this "big sum" into (maybe) a mini-dp that may take only one step per state to compute. Or we might find a lot of overlapping sub-problems.
In any case, start by formulating whatever dp comes to mind intuitively. Then, once it is fully formulated, modify and "bash" it, until it is efficient or clean enough to work. If I had done this here, I would have solved the problem a lot sooner.
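As a toy illustration of the "costly transitions" situation above (the `values` array here is just made-up data): recombining all the terms for every state is quadratic overall, while reusing the overlapping subproblem dp[n-1] makes each transition constant-time.

```python
values = [3, 1, 4, 1, 5]

# Costly transitions: recombine all n+1 terms for every state -> Theta(n^2) total.
dp_naive = [sum(values[:n + 1]) for n in range(len(values))]

# Reuse the overlapping subproblem: dp[n] = values[n] + dp[n-1] -> Theta(n) total.
dp = [0] * len(values)
for n, v in enumerate(values):
    dp[n] = v + (dp[n - 1] if n > 0 else 0)

assert dp == dp_naive
```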
The dp I finally came up with was based on the following observations.
Let's suppose we have a string $$T$$. What properties of $$T$$ do we need to guarantee in order to ensure that it has a beauty of exactly $$K$$?
Can we characterize how all valid $$T$$ would look?
Suppose we have some string $$T$$ with beauty $$k$$. Then there are $$k$$ substrings which compare greater than their respective substring in $$S$$. Each such substring looks as follows: It compares equal on a certain number of characters; then there is some character which is greater; then the remaining characters can be anything. Formally, for a pair of indices $$(i,j)$$ contributing to the beauty: there exists an index $$x$$ so that $$i \leq x \leq j$$, $$T[i..x-1] = S[i..x-1]$$, and $$T[x] > S[x]$$ and the characters $$T[x+1..j]$$ have no specific relation to $$S[x+1..j]$$. In some sense, it is that single character $$T[x]$$ being greater than $$S[x]$$ that characterizes the substring and the pair $$(i,j)$$. So, we observe that we should focus on particular occurrences of characters that compare greater.
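This characterization is easy to sanity-check by brute force (a hypothetical helper, 0-based indices):

```python
def greater_by_first_diff(T, S, i, j):
    """T[i..j] > S[i..j] exactly when the first position x in [i, j]
    where the strings differ (if any) satisfies T[x] > S[x]."""
    for x in range(i, j + 1):
        if T[x] != S[x]:
            return T[x] > S[x]
    return False  # the substrings are equal
```

An exhaustive check over small strings confirms this agrees with Python's own lexicographic comparison `T[i:j+1] > S[i:j+1]`.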
Given a set of elements which may be "useful" or "good" (whatever that means in a given context), it is often a good idea to look at the "extreme" elements. For example, if the set is a set of numbers, look at the minimum element; or if (in this case), the set of "useful" objects is a set of characters (i.e.: we are concerned with the characters that compare greater), look at the first one (i.e.: one with minimal index).
The above key observation and problem solving process led to a possible characterization of the dp states and transitions between them. They would be based on the concept of picking a first "differing" character, and then recursively using the results of other dp states. This solution is described further below. The above information should be enough to provide an intuition for how to solve this problem.
Note: I had a couple of failed attempts at a solution before arriving at a final dp. Recall (from the last observations and problem solving processes in the idea above) that we want to characterize our dp solution strings ($$T$$) based on their first character that is bigger than the corresponding character in $$S$$. More generally, let's characterize them by the first character that differs from the corresponding character in $$S$$. That is, it may be bigger or smaller, and we might have to handle these cases differently. This leads to a definition (and characterization).
For any string $$T$$, we say a "block" in $$T$$ is any substring $$T[i..j]$$ so that $$T[i..j-1] = S[i..j-1]$$ but $$T[j] \neq S[j]$$. A "big block" is any such $$T[i..j]$$ where $$T[j] > S[j]$$. And a "small block" is any such $$T[i..j]$$ where $$T[j] < S[j]$$.
Every string $$T \neq S$$ can be uniquely decomposed into a sequence of (non-overlapping) blocks, possibly followed by a trailing suffix that compares equal to the corresponding suffix of $$S$$.
Let $$j_1$$ be the index of the first character that differs (i.e.: $$T[j_1] \neq S[j_1]$$). Then $$T[1..j_1-1] = S[1..j_1-1]$$ since $$T[j_1]$$ was the first character to differ. Thus, $$T[1..j_1]$$ is a block. We then decompose $$T[j_1+1..n]$$ in the same way (formally, by induction on the length), stopping when the remainder has no differing character (i.e.: when it equals the corresponding suffix of $$S$$). This gives a sequence of blocks, as desired.
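The decomposition is easy to compute greedily. A small sketch (the helper name `blocks` and the 0-based indexing are my own); any leftover part equal to $$S$$ is reported separately:

```python
def blocks(T, S):
    """Decompose T into blocks (i, j) with T[i..j-1] == S[i..j-1] and
    T[j] != S[j]; any trailing part equal to S is returned separately."""
    assert len(T) == len(S)
    out, i = [], 0
    for j in range(len(T)):
        if T[j] != S[j]:
            out.append((i, j))   # block T[i..j], differing exactly at j
            i = j + 1
    return out, i                # blocks, and where the equal suffix starts
```

For instance, `blocks("abca", "abba")` yields a single block `(0, 2)` (equal on "ab", differing at index 2), with an equal suffix starting at index 3.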
Now we attempt to formulate the dp, using the block decomposition. I write it out as a recursive function.
Let $$T(n,k)$$ be a function which counts the number of strings (of length $$N-n+1$$) that have a beauty of $$k$$ when matched against $$S[n..N]$$.
If we can somehow compute $$T(n,k)$$ correctly for all $$1 \leq n \leq N$$ and $$0 \leq k \leq K$$, then we're done, and the answer would be $$T(1,K)$$.
We now compute $$T(n,k)$$ based on the block decomposition as described above. I work through the formulation step-by-step.
Every string counted by $$T(n,k)$$ will either compare exactly equal to $$S[n..N]$$ or will have at least one differing character. If it has no differing characters, it necessarily has 0 beauty. And if it has at least one differing character, we consider the first differing character, which can occur in any position $$n, n+1, n+2,..., N$$. So we get that: $$T(n,k) = \sum_{x=n}^{N}{\left(\text{# of strings with beauty k and which have first differing character at x}\right)} + \left\{1 \text{ if } k==0\right\}$$ where the last "+ 1" part comes from the single string that has no differing characters (namely, if $$T=S[n..N]$$ itself). So, besides the "+1", we will focus on computing the main sum directly (with the differing characters).
So, how many strings have beauty $$k$$ and a first differing character (among $$n..N$$) in a particular position $$x$$? (We use counting / combinatorics here.) The characters $$n..x-1$$ must all be equal to the corresponding characters in $$S$$. So there is only one choice for these. Then, character $$x$$ will either compare greater or smaller. And we count/handle these cases separately.

If $$T[x] > S[x]$$, then there are $$(`z`-S[x])$$ choices (using ASCII subtraction; i.e.: $$`z`-`z` = 0$$, and $$`z`-`y` = 1$$, etc.) for character $$T[x]$$. The remaining characters $$T[x+1..N]$$ can be anything, as long as the total beauty ends up being $$k$$. How much beauty do we get from using this first block? Well, any substring $$T[i..j]$$ that has $$i \leq x$$ and $$j \geq x$$ will contribute to the beauty (because it will start with some equal characters, and then compare greater exactly at position $$x$$, so it will be greater overall). And since we are considering only the indices $$n..N$$ we get exactly $$(x-n+1)$$ choices for $$i$$, and $$(N-x+1)$$ choices for $$j$$. So, this block contributes exactly: $$(N-x+1)*(x-n+1)$$ to the beauty.

And notice, all other indices $$(i,j)$$ that contribute to the beauty must have that $$i>x$$ strictly. So we could use $$T(x+1,k-(N-x+1)*(x-n+1))$$ to count the number of ways to fix the remaining characters $$T[x+1..N]$$ to fill up the remaining beauty. So, in total, the number of strings which have a beauty $$k$$ among the indices $$n..N$$ with a first differing character $$T[x] > S[x]$$ for $$(x \geq n)$$ is exactly: $$(`z`-S[x])*T(x+1, k-(N-x+1)*(x-n+1))$$.

Similarly, if $$T[x] < S[x]$$ then there are $$(S[x] - `a`)$$ choices for $$T[x]$$, this first block adds 0 beauty, and we must choose the remaining characters in $$T[x+1..N]$$ to contribute exactly $$k$$ beauty.
So, there are exactly: $$(S[x]-`a`)*T(x+1,k)$$ strings which have a beauty $$k$$ among the indices $$n..N$$ with a first differing character $$T[x] < S[x]$$ for $$(x \geq n)$$.
Thus, for $$T(n,k)$$ we get the following formula.
For $$1 \leq n \leq N$$, $$T(n,k) = \sum_{x=n}^{N}{(`z`-S[x])T(x+1, k-(N-x+1)*(x-n+1)) + (S[x]-`a`)T(x+1,k)} + \left\{1 \text{ if } k==0\right\}$$
This formula comes directly from the key observations above. If $$x \geq n$$ is the index of the first differing character, then either $$T[x] > S[x]$$ or $$T[x] < S[x]$$. In the first case, we get $$(`z`-S[x])$$ choices and $$(N-x+1)*(x-n+1)$$ beauty, and then recur $$T(x+1, k-(N-x+1)*(x-n+1))$$. In the second case, we have $$(S[x]-`a`)$$ choices and 0 beauty, and then we recur on $$T(x+1,k)$$. The "+1" term comes from the string $$T=S$$ which has 0 beauty and has no differing character $$x$$.
Hence, we now have a (recursive) formula to compute the answer to our question (which would be $$T(1,K)$$). Also, the base cases for the above formula are: $$T(n,k) = 0$$ if $$k<0$$; $$T(N+1,0) = 1$$ (the empty string); and $$T(N+1,k) = 0$$ for all $$k \geq 1$$.
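To make this concrete, the recurrence and base cases translate directly into a memoized (top-down) sketch in Python. The modulus $$10^9+7$$ is my assumption (counting problems of this size usually ask for the answer modulo a prime); everything else follows the formula, with $$S$$ stored 0-based:

```python
from functools import lru_cache

MOD = 10**9 + 7

def solve_slow(S, K):
    """Direct Theta(N^2 * K) memoized evaluation of T(n, k)."""
    N = len(S)

    @lru_cache(maxsize=None)
    def T(n, k):
        if k < 0:
            return 0
        if n == N + 1:                  # empty suffix
            return 1 if k == 0 else 0
        total = 1 if k == 0 else 0      # the string equal to S[n..N] itself
        for x in range(n, N + 1):       # x = first differing position
            big = ord('z') - ord(S[x - 1])    # choices with T[x] > S[x]
            small = ord(S[x - 1]) - ord('a')  # choices with T[x] < S[x]
            total += big * T(x + 1, k - (N - x + 1) * (x - n + 1))
            total += small * T(x + 1, k)
        return total % MOD

    return T(1, K)
```

Against the string "ba", for example, this counts 2 strings of beauty 0 ("aa" and "ba" itself) and 25 of beauty 1.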
So we have a correct formulation for $$T(n,k)$$, which can be solved recursively. What is the total running time needed to compute $$T(1,K)$$ if we use dynamic programming to memoize the states (i.e.: if we don't ever have to recompute a state twice)?
If carried out directly, the running time of the computation is $$\Theta(N^2K)$$.
There are $$\Theta(N \times K)$$ states (possible inputs to the $$T(.,.)$$ function). For each state, $$(n,k)$$, it takes at worst $$\Theta(N-n) = \Theta(N)$$ time to compute the sum for all $$x$$ as described in the formula. So, in the end, it takes $$\Theta(N^2K)$$ time.
This is too slow. However, I would say that this is 90% of the solution. I will describe in the next "idea" how to optimize this dynamic programming approach to run in time. The reader is encouraged to try getting this above solution to work first, before reading the next section (and you may end up coming up with the better / faster solution anyway).
NOTE: The bulk of the solution was described in Idea 2 above. The reader should read that first. This section merely describes how to improve the DP solution to run within the time-limits. Otherwise, the reader can also simply read the "Review" section below, to see the final solution.
Recall (from the lemma in Idea 2 above), that for $$1 \leq n \leq N$$, $$T(n,k) = \sum_{x=n}^{N}{(`z`-S[x])T(x+1, k-(N-x+1)*(x-n+1)) + (S[x]-`a`)T(x+1,k)} + \left\{1 \text{ if } k==0\right\}$$. For the first term, $$k - (N-x+1)*(x-n+1)$$ is a parabola which shrinks rapidly for each $$x$$, before coming back up again. And if you look at the numbers (either by simulation or just intuitively), for most $$x$$, $$k - (N-x+1)*(x-n+1) < 0$$. So, we are just adding up a lot of zeroes.
Is there any way to re-order the sum so that we can avoid doing extra work (i.e.: adding up a lot of 0's)?
We somehow want to "amortize" the costs of the work done. That is, for a fixed state $$(n,k)$$, there may be $$O(N)$$ work done; but we want to show that the total sum of all work done is relatively small, such as $$O(NK + K^2\log(N))$$ rather than $$\Theta(N^2K)$$.
I want to transform/rearrange the sum to ensure that I am not adding lots of zeroes, and hopefully find a better bound on the amount of work I have to do. For this, instead of asking "for a fixed $$(n,k)$$, how much work must be done?", I will ask "for a specific $$x$$, which states $$(n,k)$$ will receive a non-zero contribution from $$x$$ in the sum when computing $$T(n,k)$$?" Specifically, I am looking at the first part of the sum in the formula, $$T(x+1,k - (N-x+1)*(x-n+1))$$, which I think is the major part that can be optimized.

So, when is $$k - (N-x+1)*(x-n+1) \geq 0$$? (I.e.: whenever it is smaller than 0, the contribution is obviously zero, so there is no need to count it.) By rearranging the inequality: $$k - (N-x+1)*(x-n+1) \geq 0$$ if and only if $$k \geq (N-x+1)*(x-n+1)$$ if and only if $$\frac{k}{N-x+1} - 1 \geq (x-n)$$. The right-hand side is the "difference" between $$x$$ and $$n$$. So, for a fixed $$x$$, the $$n$$ to which it might contribute a non-zero value would be those that are no more than $$\frac{k}{N-x+1} - 1$$ away.

For example, when $$x=N$$, it will contribute to $$\frac{k}{N-N+1} - 1 = k-1$$ different $$n$$ values; when $$x=N-1$$, it will contribute to $$\frac{k}{2}-1$$ different $$n$$'s; when $$x=N-2$$, it will contribute to $$\frac{k}{3}-1$$ of them, etc. So, all the $$x$$ contribute a total of $$O(k + \frac{k}{2} + \frac{k}{3} + \cdots + \frac{k}{N}) = O(k\log(N))$$ work in total for any fixed $$k$$. The $$\log(N)$$ part comes from the fact that $$1 + \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{N} = H_N$$ is the $$N^{th}$$ "Harmonic Number", which is $$\Theta(\log(N))$$.

This would mean, in an amortized fashion, the total amount of work needed (assuming we only add up those $$x$$ which contribute non-zero values to the sum for a specific $$(n,k)$$) would be $$\sum_{k=0}^{K}{O(k\log(N))} = O(K^2\log(N))$$, which would run in time. Obviously, I have skipped over a lot of details on HOW that must be done.
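The harmonic bound is easy to check numerically (a small sketch; the particular $$N$$ and $$k$$ here are arbitrary test values):

```python
def useful_transitions(N, k):
    """Count pairs (n, x), 1 <= n <= x <= N, whose big-block term has a
    non-negative beauty argument: k - (N-x+1)*(x-n+1) >= 0."""
    return sum(1 for x in range(1, N + 1)
                 for n in range(1, x + 1)
                 if k - (N - x + 1) * (x - n + 1) >= 0)

N, k = 60, 500
harmonic_bound = k * sum(1.0 / (N - x + 1) for x in range(1, N + 1))  # k * H_N
assert useful_transitions(N, k) <= harmonic_bound
```

For each fixed $$x$$, the count of useful $$n$$ is at most $$\frac{k}{N-x+1}$$, so the total is at most $$k \cdot H_N$$, matching the analysis above.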
Also, I have not accounted for "all" of the sum. There is still a $$T(x+1,k)$$ term that needs to be amortized as well. This is much easier, because we can simply keep some kind of running sum of all $$T(x,k)$$ values, and just add that as well to the sum.
The key is to "work backwards" as described above in the problem-solving process. Instead of computing the formula directly for each $$(n,k)$$, keep a table $$dp[n][k]$$ which stores $$T(n,k)$$ (it will be filled up gradually). Initialize it to 0, and work backwards, over each pair $$(x,k)$$.

For a fixed $$(x,k)$$, assuming that $$dp[x][k]$$ is already fully computed, we use it to update only those table entries $$dp[n][\cdot]$$ (for $$n < x$$) to which the index $$x$$ actually contributes a non-zero term (i.e.: only those for which the beauty argument in the formula stays non-negative). You also need to keep running sums (written as a table $$sum[n][k]$$ below), adding the $$(S[x]-`a`)*dp[x+1][k]$$ terms as they become known; this deals with the $$T(x+1,k)$$ term in the formula. As long as you are decreasing $$x$$ on each iteration, we will always know $$dp[x][k]$$ by the time it is needed, and we can use it to update the $$dp[n][k]$$ for smaller $$n$$, and keep going. (When I say "update", I mean: add the corresponding term to the sum, based on the formula described above for $$T(n,k)$$.) For clarity, here is the pseudo-ish code.
Let S[1..N] be the given string.
Let dp[n][k] be a 2D-array, initialized all to 0.
Let sum[n][k] be a 2D-array, initialized all to 0.
dp[N+1][0] = 1
sum[N+1][0] = 0   // sum[n][k] := sum { (S[x]-'a')*dp[x+1][k]  :  x = n..N }

// T(n,k) = sum_{x=n}^{N}{(`z`-S[x])T(x+1, k-(N-x+1)*(x-n+1)) + (S[x]-`a`)T(x+1,k)} + {1 if k==0}
// For each (x,k), we assume dp[x][k] and sum[x][k] are already computed.
// Then we update dp[n][k] for all relevant n
In the actual update loop (omitted above), I used $$x_2 := x-1$$: for a known column $$x$$ we update all $$n < x$$, but not $$x$$ itself, so I found it easier to replace $$x$$ with $$x-1$$ in the formulas. Everything else remains the same though.
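Here is a complete, runnable sketch of this backward ("push") computation in Python. The modulus $$10^9+7$$ is again my assumption; column indices are 1-based to match the formulas, while the string $$S$$ itself is 0-based. Each finished column $$dp[x+1][\cdot]$$ pushes its big-block contributions forward, breaking as soon as the beauty argument exceeds $$K$$, and running sums handle the small-block term:

```python
MOD = 10**9 + 7

def solve_fast(S, K):
    """Bottom-up evaluation of T(n, k) in O(N*K + K^2 * log N)."""
    N = len(S)
    # dp[n][k] == T(n, k); column N+1 is the empty-suffix base case.
    dp = [[0] * (K + 1) for _ in range(N + 2)]
    dp[N + 1][0] = 1
    run = [0] * (K + 1)  # run[k] = sum of (S[x]-'a') * dp[x+1][k] so far
    for x in range(N, 0, -1):                 # x = first differing position
        big = ord('z') - ord(S[x - 1])
        small = ord(S[x - 1]) - ord('a')
        # Big blocks: dp[x+1][kp] feeds dp[n][kp + (N-x+1)*(x-n+1)] for n <= x.
        for kp in range(K + 1):
            v = dp[x + 1][kp]
            if v == 0:
                continue
            for n in range(x, 0, -1):
                k = kp + (N - x + 1) * (x - n + 1)
                if k > K:
                    break                     # argument only grows as n shrinks
                dp[n][k] = (dp[n][k] + big * v) % MOD
        # Small blocks: extend the running sums to cover x..N.
        for k in range(K + 1):
            run[k] = (run[k] + small * dp[x + 1][k]) % MOD
        # Column x is now complete: add the running sums and the "+1" term.
        for k in range(K + 1):
            dp[x][k] = (dp[x][k] + run[k]) % MOD
        dp[x][0] = (dp[x][0] + 1) % MOD
    return dp[1][K]
```

On small inputs this agrees with the direct recursive formula; for example, `solve_fast("ba", 1)` is 25 (the strings "a?" with `?` greater than 'a').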

This solution works in $$O(NK) + O(K^2\log(N))$$ time by the earlier analysis.
This problem is a fairly hard example of Dynamic Programming. It requires not only coming up with a recursive function to count the number of strings, but also computing this function "bottom-up" (in an iterative fashion) rather than "top-down" (in a purely recursive fashion), because we need to exploit the structure of the formula in order to make it run in time.

The dp can be described as follows. Let $$dp[n][k]$$ be the number of strings which have beauty $$k$$ when compared against $$S[n..N]$$ (this is the same as the recursive function $$T(n,k)$$ used above in Ideas 1 - 3). Each string with beauty $$k$$ must have a "first differing character" in some position $$x$$ (unless it equals $$S[n..N]$$ itself). All characters before that must compare exactly equal to the corresponding characters of $$S$$. If this differing character compares greater (i.e.: $$T[x] > S[x]$$), then there are $$('z'-S[x])$$ ways to choose this character, and it adds exactly $$(x-n+1)*(N-x+1)$$ units to the beauty (i.e.: any substring $$T[i..j]$$ with $$n \leq i \leq x \leq j \leq N$$ will compare "greater" than the corresponding substring $$S[i..j]$$, so we have exactly $$(x-n+1)$$ choices for $$i$$ and $$(N-x+1)$$ choices for $$j$$). So, we must then choose the remaining characters ($$T[x+1..N]$$) to have exactly $$k - (x-n+1)*(N-x+1)$$ beauty, which is an equivalent sub-problem (it is counted by $$dp[x+1][k - (x-n+1)*(N-x+1)]$$). A similar formula holds if the first differing character compares smaller than the corresponding character in $$S$$, so that $$T[x] < S[x]$$.

So, we come to the formula: $$dp[n][k] = \sum_{x=n}^{N}{('z'-S[x])*dp[x+1][k - (x-n+1)*(N-x+1)] + (S[x]-'a')*dp[x+1][k]} + \left\{1 \text{ if } k==0\right\}$$ where the last "+1" comes from the fact that $$T[n..N] = S[n..N]$$ might be a possibility (in which case it has no differing character). By inspection, this dp formula takes $$\Theta(N^2K)$$ time to compute, which would yield Time Limit Exceeded.
However, we notice that for a given $$x$$, the term $$k - (x-n+1)*(N-x+1) < 0$$ whenever $$n$$ is really small compared to $$x$$. So we work backwards, and once $$dp[n][k]$$ has been computed, we treat it as $$dp[x][k]$$, and use it to update all $$dp[n'][k]$$ with $$n' < x$$ (based on the dp formula), and break early once $$n'$$ becomes too small. We handle the other parts of the sum similarly. It can be proven that, if computed this way, the overall running time would be $$\Theta(NK + K^2\log(N))$$ which is Accepted.
  1. For a pair of indices $$(i,j)$$ contributing to the beauty: there exists an index $$x$$ so that $$i \leq x \leq j$$, $$T[i..x-1]=S[i..x-1]$$ , and $$T[x]>S[x]$$ and the characters $$T[x+1..j]$$ have no specific relation to $$S[x+1..j]$$. This is the most intuitive observation, but it is also the key to characterizing the solution, if understood correctly.
  2. Every string $$T\neq S$$ can be uniquely decomposed into a sequence of (non-overlapping) blocks, where a block is a substring $$T[i..x]$$ with $$T[i..x-1] = S[i..x-1]$$ and $$T[x] \neq S[x]$$. Either $$T[x] > S[x]$$ or $$T[x] < S[x]$$. This key observation provided the recursive structure necessary to construct the dynamic programming solution. We recur based on the start of the current "block".
  3. For $$1 \leq n \leq N$$, $$T(n,k) = \sum_{x=n}^{N}{(`z`-S[x])T(x+1, k-(N-x+1)*(x-n+1)) + (S[x]-`a`)T(x+1,k)} + \left\{1 \text{ if } k==0\right\}$$. This was the key formula (the DP). It was a result of careful combinatorial arguments, and a result of the recursive structure described above.
  4. The function $$k-(N-x+1)*(x-n+1)$$ is a parabola as a function of $$x$$ (for a fixed $$(n,k)$$), and it often goes below zero. For a fixed $$(x,k)$$, it goes below zero whenever the difference $$(x-n)$$ exceeds $$\frac{k}{N-x+1} - 1$$. So, if we skip all $$(n,k,x)$$ for which $$k-(N-x+1)*(x-n+1)$$ is below zero, we don't have to do as much work. This was important to amortize the running time of the algorithm.
  1. This problem looks like a Dynamic Programming problem; maybe I can dp digit by digit. The dp will probably look something like $$dp[n][k]$$.
  2. Most of the immediate DP ideas that came to my mind seemed too slow. However, instead of getting caught up on these details, I decided to focus on deriving at least one correct DP formulation, regardless of how inefficient it was. This was crucial. I would not have been able to solve the problem otherwise, because there was no "better DP formula". In the end, I used the same formula, but I just changed the way it was computed to make the running time better.
  3. To determine the formula, I thought "suppose we have a string $$T$$ with beauty $$k$$. Then how would it look? What properties would it have?" This was a natural set of questions to ask. And it helped characterize the solution.
  4. "Pursuing Extreme Values" is a common problem-solving technique. "If you have to walk to the store, you will always take a 'first' step." Similarly, if there are blocks in the string, we focused on the first block. The rest could be handled by recursion.
  5. In order to optimize the DP, I knew that I would need to use some kind of "amortized" analysis, which is a concept I've learned from experience (e.g.: see "Introduction to Algorithms" by Cormen et al.).
  6. The dp formula was written as a function $$T(n,k)$$, but it relied on summing over values of $$T(x+1,k)$$ for certain $$x \geq n$$. Instead of asking "How can I compute $$T(n,k)$$ from the $$T(x+1,k)$$ values?", I asked "Given some value $$T(x+1,k)$$, how does it contribute to the various $$T(n,k)$$ sums?". This "backward" transformation, and the order of computation, became crucial in optimizing the dp so that it would run efficiently. The reason was: given $$(n,k)$$, the set of $$x$$ that actually contribute a non-zero value to the sum for $$T(n,k)$$ was parabolic (there was a bunch of $$x$$ that did contribute, then some that didn't contribute, and then some more that did again). However, for a fixed $$x$$ and $$k$$, the set of $$n$$ to whose sums $$T(x+1,k)$$ contributed was a single, computable, contiguous range. This made it easy to "stop" early when updating the sums, and also brought the entire amortized running time down to $$O(K^2\log(N))$$ instead of $$\Theta(N^2K)$$. This was a crucial application of the problem-solving process.
  • With "Hard DP" problems, the solution is often "State Optimization". This means that you have to come up with a basic DP formula that works (but may be too slow or take up too much memory), and then exploit some structure of the formula (usually overlapping subproblems, redundant state-variables, or storing one of your dimensions in the dp value rather than using it as a dimension) to reduce either the state-space or the amount of transitions/computations needed to compute the answer for a single state. This problem was exactly that. The best way to approach this is to ignore the optimizations until you have a working formula, and then spend all your time and effort working on the optimizations. This is a classic example of Dijkstra's "Separation of Concerns" principle.
  • "Reversing the DP" in various ways is often useful. For example, instead of computing a top-down dp (i.e.: solving it by recursion), we can do a bottom-up dp (i.e.: filling in the table row-by-row). As another example, sometimes instead of looking at "the first element in the set" first (in my case, the first differing character), it may pay off to look at "the last element in the set" first, and formulating your DP that way. And as a last example (the type that was used in this problem), instead of asking "How can the answer for state A be computed from states X,Y, and Z?", we might ask "How does the answer to state X contribute to the answers for states A,B,C?" Often, the DP can be done either way. Sometimes, "working backwards" like this just helps you to think about the problem differently. But there are cases, like in this problem, where the problem is just asymmetric and it is actually only plausible (or more efficient) to compute your dp in one direction rather than the other. Many problems appear symmetric, or at least don't immediately stand out as being asymmetric; but I often get stuck on hard DP problems simply because I am thinking about it in the wrong direction. So as a rule, it is good to think about DP problems from multiple angles. The "angles" I used as examples above are probably the most common ones (or at least the ones I use most often). And this "work backwards" reasoning extends to all problem-solving. That is why I have it as a "Problem-Solving Technique / Process Point".
  • Dynamic Programming / Hard DP / DP with State Optimizations / Bottom-Up DP
  • Strings / DP on Strings and Digits
  • Combinatorics / Counting DP / Number of Ways / Size of Set
  • Math / Formulas / Recurrences