# Dreamoon And Strings

We are given a string $$s$$ and a string $$p$$ (with $$1 \leq |s| \leq 2,000$$ and $$1 \leq |p| \leq 500$$). For an integer $$x$$, we must delete exactly $$x$$ characters from $$s$$, and we would like to maximize the number of non-overlapping occurrences of $$p$$ in the resulting string. Let us call this number of occurrences $$ans(x)$$.

Given $$s$$ and $$p$$, report $$ans(0), ans(1), ans(2), ..., ans(|s|)$$.

(Please see the problem statement for further clarification)

1. Strings, string matching
2. Dynamic Programming (DP)
3. Longest common subsequence
4. KMP (Knuth-Morris-Pratt) string matching algorithm
5. $$\Theta(|s||p|)$$ algorithm, or $$\Theta(|s|^2)$$, maybe $$\Theta(|s|^2|p|)$$
6. Greedy observations
7. DP with State Optimization
Notation: We will assume 1-indexed strings for this analysis. In particular, if $$S$$ is a string of length $$n$$, then $$S[1]$$ is the first character and $$S[n]$$ is the last character. Also, $$S[i..j]$$ denotes the substring consisting of $$S[i], S[i+1], S[i+2], ..., S[j]$$.

Strings are "ordered" data structures, similar to arrays. That is, a solution to a counting or an optimization problem on $$S[1..n]$$ (where $$n$$ is the length of the string) can often be broken down into a counting or optimization problem on $$S[1..n-1], S[1..n-2], ..., S[1..1]$$, etc. Whenever these "sub-problems" occur, we should immediately think of the possibility of Dynamic Programming.

Alternatively, string problems come with a host of specialized data structures and string-matching algorithms, such as KMP (Knuth-Morris-Pratt), Suffix Arrays, and Suffix Automata. So when we encounter string problems, we should sometimes also be thinking about these.

To be honest, I did not fully understand the problem when I first read it. An easy way to come to a better understanding is to look at examples. Write out the sample test cases and try to simulate the process described in the problem statement. In fact, Sample Test Case #2 (s = "axbaxxb", p = "ab") is a really good one for understanding the problem. Then try some others. Only when you fully understand what the problem is asking should you move on to actually trying to solve it.

My next approach was to try a simplified version of the problem. What if $$x = 0$$?

Thinking about this case really helped pin-point the type of approach needed to solve this problem. First of all, hidden within this problem was the problem of "finding at least one occurrence of $$p$$ in $$s$$". This is exactly the "String Matching" problem, since we cannot delete any characters. This can be solved (naively) in $$\Theta(|s||p|)$$ time-complexity. How? By trying all possible start positions, and checking if $$p$$ exists at that starting-position. If $$p$$ were longer, we would need to use a linear time string matching algorithm, such as KMP or Suffix Arrays.
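To make the naive matcher concrete, here is a minimal Python sketch (note it uses 0-based indexing, unlike the 1-indexed notation in this analysis; the function names are my own):

```python
def occurs_at(s, p, i):
    """Does pattern p occur in s starting at 0-based index i?"""
    if i + len(p) > len(s):
        return False
    return all(s[i + j] == p[j] for j in range(len(p)))

def naive_match(s, p):
    """All 0-based start positions of p in s; Theta(|s||p|) in the worst case."""
    return [i for i in range(len(s) - len(p) + 1) if occurs_at(s, p, i)]
```

For Sample Test Case #2, `naive_match("axbaxxb", "ab")` returns `[]`, which is consistent with $$ans(0) = 0$$ there: no contiguous occurrence exists before any deletions.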

Now, how do we answer the question: "Find the maximum number of non-overlapping occurrences of $$p$$ in $$s$$?" My first thought was to do it greedily: Try to match $$p$$ at the first character of $$s$$. If this works, then we count this as 1 occurrence and then continue on from the next character after the match. Otherwise, if it doesn't work, we move the "first character" forward, and try again. We then try to do as many matches as possible. Does this work? Let's formalize it.

In the case where $$x=0$$, we can find the maximum number of non-overlapping occurrences Greedily. Take the first index where a match occurs with $$p$$, and then repeat.
Proof Omitted. But it relies on the fact that, if you have a solution that doesn't use the globally earliest "match", you can always replace one of the matches in your solution (i.e.: the first one used) with the globally earliest one without changing anything else.
Note: This is somewhat similar to the "naive string matching" algorithm we described above, except we are essentially finding all occurrences of $$p$$, or, finding the "first occurrence" repeatedly.
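The greedy occurrence-counting idea for $$x = 0$$ can be sketched like this (again a Python sketch with 0-based indexing; names are my own):

```python
def count_nonoverlapping(s, p):
    """Greedy: repeatedly take the earliest remaining match of p, then
    continue scanning from the first character after that match."""
    count, i = 0, 0
    while i + len(p) <= len(s):
        if s[i:i + len(p)] == p:
            count += 1
            i += len(p)  # consume the whole occurrence
        else:
            i += 1       # no match here; advance the start position
    return count
```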
Now, the simplification above was very useful because it introduced some ideas (naive string matching and checking greedily for occurrences) that will possibly be needed to solve the final problem. We should keep them in the back of our minds, and it is good to write down these observations, but still the question remains:
How do we solve this problem over all choices of $$x$$?
One way to proceed would be to look at further examples and try to understand how deleting characters changes the problem. (Sample Test Case #2 is still useful here!)

Another way to proceed is to think about a related problem that we've seen before. One problem that came to mind was the "Longest Common Subsequence" (LCS) Problem: to find the longest string that occurs in both $$s$$ and $$p$$ if you are allowed to arbitrarily delete characters. This is somewhat related because our current problem allows deletion of characters in $$s$$. It is different because we want to find entire occurrences of $$p$$. (I'm not sure if that makes this problem easier or harder.)

Recall that the LCS problem can be solved by Dynamic Programming. In particular, we let $$dp[i][j]$$ denote the longest common subsequence occurring in $$s[i..n]$$ with $$p[j..m]$$ where $$n := |s|$$ and $$m := |p|$$. Then, to compute $$dp[i][j]$$, we either "match" characters $$s[i]$$ and $$p[j]$$ or we skip $$s[i]$$ or we skip $$p[j]$$. Altogether we get the recurrence:

$$dp[i][j] = \max\left( dp[i+1][j], \ \ dp[i][j+1], \ \ 1 + dp[i+1][j+1] \text{ if } s[i]=p[j] \right)$$
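As a quick sanity check, this suffix recurrence translates directly into code (a Python sketch, 0-based, so $$dp[i][j]$$ covers `s[i:]` and `p[j:]`):

```python
def lcs(s, p):
    """Longest common subsequence via the suffix recurrence above:
    dp[i][j] = LCS of s[i:] and p[j:] (0-based suffixes)."""
    n, m = len(s), len(p)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])  # skip s[i] or p[j]
            if s[i] == p[j]:
                dp[i][j] = max(dp[i][j], 1 + dp[i + 1][j + 1])  # match them
    return dp[0][0]
```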

This idea illustrates the way we can use dynamic programming to solve the present problem. The only difference is that we also need the "number of deletions" to be a part of our state.

Let $$dp[i][j][x]$$ be the maximum number of non-overlapping occurrences we can get with the strings $$s[i..n]$$ and $$p[j..m]$$ with exactly $$x$$ deletions.

Then, there are a few different cases to think of when deriving a recurrence relation for the DP. We can either "match characters $$s[i]$$ and $$p[j]$$", delete character $$s[i]$$, or start a new block of pattern $$p$$ (i.e.: if $$s[i]$$ and $$p[j]$$ don't match, or if we have matched all the characters in $$p$$).

Note: I'll leave this as an exercise for the reader. It's a little tricky to get all the cases right.

What's wrong with this DP approach?
The above DP approach seems to work (if you can formulate it correctly). But the number of states is large; we have a table $$dp[i][j][x]$$, where $$1 \leq i \leq n$$, $$1 \leq j \leq m$$, and $$0 \leq x \leq n$$. This gives $$\Theta(n^2m)$$ states, where $$n \approx 2,000$$ and $$m \approx 500$$ in the worst case (roughly $$2 \times 10^9$$ states). This is far too slow, and would take up too much memory. (It might pass with lots of optimizations, but I abandoned this idea in favour of "State Optimization".)

We have now learned that a Dynamic Programming approach seems to work for this problem. However, the formulation we came up with was too inefficient to compute. Experience tells us that we can still use Dynamic Programming, but we have to modify the way we think about the problem, or exploit some other structure to the problem in order to create an efficient DP solution.

Many people call this "State Optimization". In the next Idea Section, we will see how to use this to solve the problem correctly.

In the previous Idea Section, we described the thought process that can be used to recognize that the problem requires Dynamic Programming. We came to a solution of the form: $$dp[i][j][x]$$, representing the maximum number of non-overlapping occurrences we can get in $$s[i..n]$$ with $$p[j..m]$$ using exactly $$x$$ deletions.

However, the most intuitive DP is actually far too slow. Here we describe how to derive a better DP formulation.

Here are some classic "State Optimization" techniques (see Competitive Programming 2 by Steven and Felix Halim, Chapter 8.4.4-5):
1. Change the State Representation - sometimes by storing different information inside the DP table ("state"), we can formulate something that is more efficient.
2. Drop Redundant Parameters - sometimes one of the parameters in our DP state can easily be computed from the other ones; so we can eliminate this "redundant" (useless) parameter from the table.
3. Use a Parameter as the Answer - sometimes we have a dp table of the form: $$dp[i][j] = 1$$ or $$dp[i][j] = 0$$, denoting whether or not some state $$(i,j)$$ can be achieved; we can often construct a better dp of the form: $$dp[i] = j$$, meaning: what is the minimum $$j$$ value so that state $$(i,j)$$ is achieved? By moving one of our state-parameters to the "answer", we can reduce the state-space and maybe reduce the overall complexity.
There are probably a few other techniques. I will describe some in the Related Problems Section below. The first two come directly from the Halim book. The last one is something I've seen in practice.
DP State Optimization is not easy. It sometimes requires deep insight into the problem in order to find the correct observations needed to make a better state. But, keep the above ideas in mind, and we will use good problem-solving techniques to try and find these observations more quickly.
Try some examples. Write out some tables of solutions. Recall the above dynamic programming formulation of $$dp[i][j][x]$$ and try to see how we would fill in the table on some examples. Try to look for alternate state representations or redundant parameters as we are writing out examples. This may spark some insights.
We can often exploit symmetry in the constraints. As we consider examples, look for specific aspects of the problem that force "certain conditions to be true". If we can find such constraints, it often leads to a better state representation.
It also helps to write down the explicit formula for the DP. We did not write one down above, but I tried to explicitly write down the formula when I was solving this problem. It was hard to do this, but by the time I wrote a formula (recurrence relation) down on paper, I had actually realized some key observations about its structure, and I was able to optimize the state to come up with the correct answer.
We know we want to reduce the state space. Can we drop one parameter in $$dp[i][j][x]$$? For example, can we try something of the form: $$dp[i][j]$$ or $$dp[i][x]$$ or $$dp[j][x]$$ to denote similar states? Can we still construct a dp solution with this information? How would the recurrence look? Are we missing any information? Was any of it redundant?
In Idea Section 1 we recognized that this problem is similar to the Longest Common Subsequence Problem. Let's think about this problem. How is our current problem different than that problem? Is it simpler? Is it harder? Is it more constrained?
Consider an "optimal solution" (i.e.: maximum number of occurrences of $$p$$ for a given $$x$$). Are there any properties of this optimal solution that make it optimal? Is there any other structure to an optimal solution that we can use to work-backwards? For example, where do the deletions occur? Where does the string $$p$$ occur? Etc.
Write down any "observations" or "intuitions" we might have, as we execute these problem-solving techniques!
.................................................................................

Try these problem-solving techniques yourself. Maybe this will lead to the solution!

However, these questions we are asking may not lead to any good ideas. Don't be discouraged if this is the case. The whole point of these problem-solving techniques is to try and get our minds to work more efficiently towards an answer. But it's okay if we don't find anything in particular!

If we can't find anything useful, maybe we have at least a couple of observations or ideas. At this point, it's fine to continue on reading the analysis!

Using some of the problem solving techniques above, we can hopefully generate some insights.

The most important thing needed to derive a good DP solution is to recognize that the string $$p$$ must match exactly at a given occurrence. That is, let's suppose we've decided that we must have an occurrence of the string $$p$$ starting at $$s[i]$$. Then, since we must match $$p$$ exactly, we would consecutively match the characters $$s[i], s[i+1], s[i+2]$$, etc., while they match with $$p$$. If some letter doesn't match with the corresponding letter of $$p$$, then it must be deleted (otherwise, we can't possibly match here!). And then we continue until we've matched all characters in $$p$$. Notice that this tells us exactly which characters need to be deleted to achieve this occurrence of $$p$$. Other characters may be deleted, but those deletions would occur elsewhere.

(Note: this is similar to the "greedy" idea we described in Idea Section 1)

Based on the above arguments, we realize that we might be able to focus on the occurrences of $$p$$, in the final string $$s$$. In particular, we try to find out which indices $$i$$ the string $$p$$ can start on; and given that information, this tells us where some of the deletions must occur.
In this problem, it seems like the "deletions" are strongly correlated with the set of indices where we place $$p$$ in $$s$$. Looking back to our "State Optimization" concept (and all the related problem solving techniques), we realize that we can get a better state representation by exploiting this connection.

We now have a better idea for a DP.

We originally had $$dp[i][j][x]$$ as our dp-state, meaning the maximum number of occurrences we could achieve in $$s[i..n]$$ using $$p[j..m]$$ with $$x$$ deletions. Based on the above arguments, we realize that the $$j$$ parameter does not really matter. We can assume we always start on $$j=1$$ (i.e.: we start on a full occurrence of $$p$$). Then, if we decide that we will place an occurrence of $$p$$ starting at $$s[i]...$$ then we simply assume that we will match all of $$p$$, taking the characters that match, and deleting the ones that don't (see the most recent Key Observation). Otherwise, we can skip character $$s[i]$$ altogether, or we can delete character $$s[i]$$.

This yields the following:

Let $$dp[i][x]$$ denote the maximum number of occurrences we can get among $$s[i..n]$$ with exactly $$x$$ deletions.

Let $$deletions(i)$$ denote the minimum number of deletions we would need if we started a match of $$p$$ at character $$s[i]...$$ and "greedily" matched all consecutive characters, deleting the ones that don't match, until we have fully matched $$p$$.

We will also call this the greedy algorithm for finding a match starting at character $$s[i]$$.
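Here is a sketch of how $$deletions(i)$$ could be computed for every $$i$$ by running the greedy matching naively (Python, 0-based; I use infinity to mark positions where the greedy match runs off the end of $$s$$, a convention of my own):

```python
def compute_deletions(s, p):
    """deletions[i] = number of deletions the greedy match needs when
    starting at s[i] (0-based); infinity if p cannot be fully matched."""
    n, m = len(s), len(p)
    INF = float("inf")
    deletions = [INF] * n
    for i in range(n):
        dels, j, k = 0, i, 0        # j walks s, k walks p
        while k < m and j < n:
            if s[j] == p[k]:
                k += 1              # s[j] matches the next character of p
            else:
                dels += 1           # mismatch: s[j] must be deleted
            j += 1
        if k == m:                  # all of p was matched
            deletions[i] = dels
    return deletions
```

For Sample Test Case #2 (s = "axbaxxb", p = "ab"), starting at the first character we match 'a', delete 'x', and match 'b', so `deletions[0] == 1`.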

With these definitions, I encourage the reader to try and come up with a recurrence relation for $$dp[i][x]$$ before reading the next formula.
.................................................................................
$$dp[i][x] = \max\left\{ \ dp[i+1][x], \ \ dp[i+1][x-1], \ \ 1 + dp[{i+m+deletions(i)}][{x-deletions(i)}] \ \right\}$$

(Note: This lemma looks slightly complicated. However, it may actually be easier to understand by deriving the recurrence relation independently. I encourage the reader to do so.)

The first part of the "max" function is $$dp[i+1][x]$$; this corresponds to simply "skipping" the character $$s[i]$$ and moving on to the next one.

The second part of the function is $$dp[i+1][x-1]$$, corresponding to "deleting" the character $$s[i]$$ and moving onto the next one (this takes 1 deletion, so we have $$x-1$$ left).

The most complicated part of the recurrence is the last part: $$dp[i+m+deletions(i)][x-deletions(i)]$$. What does this mean? Well, this corresponds to starting a full occurrence of $$p$$, beginning with character $$s[i]$$.

Based on our arguments above, we know that, if we chose to "greedily" match characters of $$s[i]$$, we would still have to delete some characters (the ones that don't match with the corresponding letter in $$p$$). We defined $$deletions(i)$$ as this "minimum number of characters we need to delete" until we get a match, if we start at $$s[i]$$. So, if we did this greedy method, we would need exactly $$deletions(i)$$ deletions (with $$x-deletions(i)$$ left-over for later, obviously), and we would move from character $$s[i]$$ to $$s[i + m + deletions(i)]$$ (since we match all $$m$$ characters of $$p$$, plus delete a few of them; $$m + deletions(i)$$ is therefore how many places we move forward in total).

So this gives us $$1$$ occurrence, plus whatever we can get later. This is exactly: $$1 + dp[i+m+deletions(i)][x-deletions(i)]$$.

Now, we know that these are all viable options. We haven't yet proved that these are the only options for an optimal solution. We can see (intuitively) that $$s[i]$$ must be skipped/deleted, or it must start a match in $$p$$. In the last case, we have shown a "greedy" algorithm for finding a match. But are there other (non-greedy) approaches to finding the best match of $$p$$ starting at $$s[i]$$? The answer is, maybe, but they will be no better (i.e.: they will never give you more final occurrences of $$p$$) than the greedy approach. We will prove this in the next lemma.

All-in-all, assuming the "greedy" approach for the third part always yields an optimal solution, then this recurrence relation holds. This is the maximum number of occurrences of $$p$$ we can get in $$s[i..n]$$ using exactly $$x$$ deletions.
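Taking the greedy matching as optimal for now (the lemma that follows justifies this), the whole recurrence can be sketched end-to-end. This is my own Python sketch, 0-based, with negative infinity marking unreachable states (using exactly $$x$$ deletions is impossible when the suffix is shorter than $$x$$):

```python
def solve(s, p):
    """Return [ans(0), ans(1), ..., ans(n)], where dp[i][x] is the maximum
    number of occurrences in the suffix s[i:] using exactly x deletions."""
    n, m = len(s), len(p)
    INF = float("inf")
    # Precompute deletions(i) by running the greedy match from each i.
    deletions = [INF] * n
    for i in range(n):
        dels, j, k = 0, i, 0
        while k < m and j < n:
            if s[j] == p[k]:
                k += 1
            else:
                dels += 1
            j += 1
        if k == m:
            deletions[i] = dels
    # dp[i][x]; -inf marks states that cannot use exactly x deletions.
    dp = [[-INF] * (n + 1) for _ in range(n + 1)]
    dp[n][0] = 0                    # empty suffix, zero deletions left
    for i in range(n - 1, -1, -1):
        for x in range(n - i + 1):  # at most n - i deletions fit in s[i:]
            best = dp[i + 1][x]     # skip s[i]
            if x >= 1:
                best = max(best, dp[i + 1][x - 1])      # delete s[i]
            d = deletions[i]
            if d != INF and x >= d:  # start a full greedy match of p at s[i]
                best = max(best, 1 + dp[i + m + d][x - d])
            dp[i][x] = best
    return [dp[0][x] for x in range(n + 1)]
```

On the samples, `solve("aaaaa", "aa")` gives `[2, 2, 1, 1, 0, 0]` and `solve("axbaxxb", "ab")` gives `[0, 1, 1, 2, 1, 1, 0, 0]`, matching the expected answers.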

Next it suffices to show that this greedy matching for computing $$deletions(i)$$ will always yield an optimal solution (i.e.: this will always lead to the maximum number of occurrences, once we've decided that the first occurrence must happen at $$i$$). Here is the lemma.

Consider the maximum number of occurrences of $$p$$ we can get in sub-string $$s[i..n]$$ using exactly $$x$$ deletions, assuming that we must start our first occurrence of $$p$$ on character $$s[i]$$.

Then our first match, at $$s[i]$$, can be found greedily, and this will be optimal.

Suppose we have any optimal solution in $$s[i..n]$$ using $$x$$ deletions, with the first occurrence of $$p$$ starting at $$s[i]$$. And suppose it does not use the greedy algorithm to find this first match of $$p$$. Let's look at the first (left-most) character that differs between the greedy algorithm and this solution. In particular, the greedy algorithm decided to delete this character whereas the optimal solution decided to take it (match it with $$p$$), or vice versa. We consider these as two cases.

1) suppose the greedy algorithm deletes the character, but the optimal solution takes the character. Since the greedy algorithm decided to delete this character, then this means that this character does not match the corresponding character of $$p$$ (by definition of the greedy algorithm). But then the optimal solution decided to take this character, which means it does match the corresponding character of $$p$$. But this is a contradiction! They cannot "match" and "not match" at the same time! So this can't happen.

2) suppose the greedy algorithm takes the character, but the optimal solution deletes it. Since the greedy algorithm takes it, it must match the corresponding character of $$p$$. But the optimal solution deletes it, so it must find an equal character later on, deleting everything in between (since it eventually must match this character of $$p$$ if it ever hopes to fully match $$p$$). So, starting from this optimal solution, we could instead delete this later character, and match the first occurrence of this character (the one used by the greedy algorithm). This would yield another optimal solution with the same number of deletions (we added one deletion and removed another); and it would be "closer" to the greedy algorithm (i.e.: it now makes the same choice as the greedy algorithm for this particular character).

Based on the above arguments, we can actually repeatedly find the "first difference" between the greedy algorithm and the optimal solution (for example by induction, etc.) and "fix" it, to get something that is more similar to the greedy algorithm. We do this until we get a solution that is identical to the greedy solution, and uses the same number of deletions as the original optimal solution, and ends earlier or on the same character as the optimal solution.

So, in an optimal solution in $$s[i..n]$$ using $$x$$ deletions, with the first occurrence of $$p$$ starting at $$s[i]$$, we can always replace the first occurrence of $$p$$ with the one that would be found by the greedy algorithm, without changing the number of deletions or the total number of occurrences of $$p$$. Thus, our greedy algorithm can always lead to an optimal solution. Altogether we have proven the correctness of the dp formulation.

What is the complexity of the algorithm? The $$dp$$ array now has two parameters $$i$$ and $$x$$, with $$1 \leq i \leq n$$ and $$0 \leq x \leq n$$. So there are $$\Theta(n^2)$$ states. At first glance, the computation of $$deletions(i)$$ also takes $$\Theta(n+m)$$ work, for a total of $$\Theta(n^3 + n^2m)$$ time-complexity. But we notice that we can "pre-compute" or "memoize" the values of $$deletions(i)$$. Each of these takes $$\Theta(n+m)$$ time to compute, so this leads to $$\Theta(n^2 + nm)$$ complexity for precomputation, and $$\Theta(1)$$ look-up time afterwards. So, when computing $$dp[i][x]$$ we only need to do a constant number of operations. We have $$dp[i+1][x]$$, $$dp[i+1][x-1]$$, and $$1 + dp[i+m+deletions(i)][x-deletions(i)]$$, which each take $$\Theta(1)$$ time if $$deletions(i)$$ is pre-computed. Hence, the overall algorithm takes $$\Theta(n^2 + nm)$$ time-complexity. Pre-computation is (in some ways) another form of "DP State Optimization". The answer for a fixed $$x$$ will be in $$dp[1][x]$$.

(If you are interested in learning the thought-process needed to solve this problem, take a look at the Idea Sections above. The first one outlines ideas leading to a DP approach. The second one shows how to use DP "State Optimization" to turn it into a complete solution. Instead, if you only want to know the final answer, please continue on to read the Solution Summary here.)

All-in-all, the solution to this problem was to use Dynamic Programming. To derive an efficient DP solution we needed to use some insights about the problem.

We let $$dp[i][x]$$ denote the maximum number of occurrences of $$p$$ we can achieve among the sub-string $$s[i..n]$$ using exactly $$x$$ deletions. The choices for a given state are fairly simple. We either: a) skip character $$s[i]$$; b) delete character $$s[i]$$; or c) start a new match (occurrence of $$p$$) at character $$s[i]$$.

Now, the last choice, to "start a new match at character $$s[i]$$", doesn't seem to be well-defined. If we start the occurrence of $$p$$ at $$s[i]$$, there can be many characters before the end of the occurrence that will either be deleted or matched. The key insight to simplify this sub-problem is this: "If we choose to start an occurrence of $$p$$ at $$s[i]$$, we can greedily match the rest of the characters of $$p$$."

The "greedy" matching process works like this: we match $$p[1]$$ with $$s[i]$$, $$p[2]$$ with $$s[i+1]$$, and so on, until such a point where the characters do not match (i.e.: $$p[j] \neq s[i+j-1]$$ for some $$j \geq 2$$). At this point, we must delete this character of $$s$$, and continue to match the remainder of $$p$$ with the remainder of $$s$$. This "greedy" process will eventually find a match of $$p$$, or we will hit the end of string $$s$$. If we do indeed find a match of $$p$$, we will have deleted some fixed number of characters of $$s$$. This number of characters we denote by $$deletions(i)$$ (which means: "starting the matching process at character $$s[i]$$, how many deletions did the greedy algorithm need before it found a complete match for $$p$$").

This "greedy" algorithm for matching starting at a character $$s[i]$$ turns out to be "optimal" for this $$i$$ and this $$x$$. We can actually prove that, if we decide to start a matching of $$p$$ at character $$s[i]$$, then there is a solution that uses the greedy algorithm at this particular occurrence of $$p$$ (starting at $$s[i]$$) that uses $$x$$ deletions in total, and achieves the maximum number of occurrences of $$p$$. (This is proved in Idea #2 above.)

Altogether, this "greedy" algorithm shows that there is one well-defined choice for a particular character $$s[i]$$. That is, if we have $$dp[i][x]$$ (as defined above), then $$dp[i][x]$$ is the maximum of:

1. $$dp[i+1][x]$$ - if we skip character $$s[i]$$
2. $$dp[i+1][x-1]$$ - if we delete character $$s[i]$$ (and use up one of our deletions)
3. $$1 + dp[i+deletions(i)+m][x-deletions(i)]$$ - if we start an occurrence of $$p$$ here, and use the greedy algorithm to match it ($$deletions(i) + m$$ is the total number of characters either matched or deleted for this occurrence)

Lastly, we can pre-compute $$deletions(i)$$ for each $$i$$ by manually applying the greedy algorithm defined above. For a fixed $$i$$ we start matching $$p$$ at character $$s[i]$$, and delete any characters of $$s$$ that don't match! We count the number of deletions in total and save this as $$deletions(i)$$. We then start over ("reinsert" the characters), and compute $$deletions(i+1)$$.

The pre-computation of $$deletions(i)$$ takes $$\Theta(n^2 + nm)$$ time. Once $$deletions(i)$$ is computed, we can compute $$dp[i][x]$$ bottom-up. There are $$\Theta(n^2)$$ states, and we can compute the transitions for the dp in $$\Theta(1)$$ time. So the overall complexity is $$\Theta(nm + n^2)$$ time. Accepted!

1. Let $$dp[i][j][x]$$ be the maximum number of non-overlapping occurrences we can get with the strings $$s[i..n]$$ and $$p[j..m]$$ with exactly $$x$$ deletions. Then this can be computed by a recurrence relation.

This was our first try at a DP, in Idea Section 1. We derived this by thinking about strings as ordered objects, with sub-strings composing sub-problems. We also thought about the "Longest Common Subsequence" problem, as a related problem (with a related DP). However, this has too many states, so it would lead to MLE and TLE.

2. There is never a reason to "partially match" $$p$$ with some substring of $$s$$. If we are going to start a match at some character $$s[i]$$, we might as well match all of $$p$$.

This was the "key insight" that eventually leads to a greedy algorithm for matching. This, in turn, allows us to "optimize" our DP and eliminate one of the states.

3. Suppose some solution has an occurrence of $$p$$ starting with character $$s[i]$$. Then the following "Greedy Algorithm" is "optimal": Match $$s[i]$$ with $$p[1]$$, $$s[i+1]$$ with $$p[2]$$, and so on. If some character in $$s$$ doesn't match, just delete it, and continue. Repeat this until we find a full match of $$p$$.

This greedy algorithm tells us the minimum number of deletions we would need to find a match of $$p$$ starting at $$s[i]$$. This can actually be used in the final dp.

4. Let $$deletions(i)$$ denote the number of deletions that the greedy algorithm would take if starting at character $$s[i]$$. Furthermore, let $$dp[i][x]$$ denote the maximum number of occurrences of $$p$$ we can get among the string $$s[i..n]$$ with $$x$$ deletions. Then $$dp[i][x] = \max\left\{ \ dp[i+1][x], \ \ dp[i+1][x-1], \ \ 1 + dp[{i+m+deletions(i)}][{x-deletions(i)}] \ \right\}$$.

This is the final derivation of the DP solution. It has $$O(n^2)$$ states and $$O(1)$$ computations per state (assuming the next key observation).

5. $$deletions(i)$$ can be pre-computed in $$\Theta(n^2 + nm)$$ time.

This pre-computation can be achieved by just applying the greedy algorithm naively, once for each $$s[i]$$ (independently).

1. Recognizing that this is a DP problem comes easy... only if you have experience with DP problems. Keep in mind that ordered data (arrays, strings, directed acyclic graphs, trees, etc.) often have easily-definable sub-problems. There are also similar problems (e.g.: Longest Common Subsequence) that can be solved by DP. It also takes some experience to recognize when you can use State Optimization to optimize your DP approach. See Competitive Programming 2 by Steven and Felix Halim (Chapter 8.4) for a nice discussion on advanced DP techniques. (Maybe I will write a tutorial on this soon!)

2. Looking at examples helps to understand the problem. Sample Test Case #2 was very helpful. It's also good to write out "tables" and simulate the DP when you come up with one. Don't spend too much time on this, but this is useful when trying to find patterns (especially when trying to apply DP State Optimization).

3. We can ask: "For a fixed $$x$$, what does the solution look like?" One specific simplification is: "For $$x=0$$, what does the solution look like?" This helps to generate ideas needed to solve the general problem.

4. There was some "regularity" (structure) in the problem. In particular, compare this problem to the Longest Common Subsequence Problem. The only difference between the two problems is that this current problem does NOT allow partial matches of the string $$p$$. This adds more constraints / conditions to the solution, which may make it easier (or harder) to solve.

5. With Dynamic Programming, it's good to write down the formula on paper, so that coding becomes easy. It also helps to write down an "inefficient" DP solution, since you can sometimes find a pattern just by looking at the formula. This can make it easy to apply "State Optimization".

6. "Transforming" the problem can often be useful when trying to apply DP State Optimization. Sometimes we want our DP to solve an auxiliary problem, and then use this related problem as a sub-routine to solve our current problem. Sometimes we "transform" the state or change what our dp array computes.

7. Thinking about the optimal solution (for a fixed $$x$$) and what makes it optimal can sometimes be helpful. This can usually lead to interesting patterns or constraints that can be exploited.
8. Read a blog post! It's great looking at tutorials after thinking about a problem for a long time. This helps you to see the insights that you missed, and it will help you to draw connections for later. Also, even if you did solve the problem, this introduces some other ideas and ways to solve it! (PS: Thanks for reading)
• Advanced and Intermediate DP problems often require "State Optimization". It's a good idea to try some of these.
• We can often optimize things by pre-computation.
• Dynamic Programming / Intermediate DP / DP with State Optimization
• Strings / String DP / Naive String Matching
• Greedy / Observations / Exploit Structure and Conditions