Description
In this episode we talk about different paradigms and approaches to algorithm design: divide and conquer, greedy algorithms, and dynamic programming, which remembers the sub-problems it has seen and solved before so that it never has to do the same work twice.
Show Notes
This episode of the Base.cs podcast is based on Vaidehi's blog post, "Less Repetition, More Dynamic Programming," from her basecs blog series.
Transcript
[00:00:02] SY: Welcome to the Base.cs Podcast where we explore the basics of computer science concepts. I’m your host Saron, Founder of CodeNewbie.
[00:00:09] VJ: And I’m Vaidehi Joshi, Author and Developer.
[00:00:12] SY: And she is the brilliant mind behind the Base.cs Blog Series. Today, we’re talking about…
[00:00:16] VJ: Dynamic programming.
[00:00:18] SY: This season of the Base.cs Podcast is brought to you by Educative. Educative has a ton of online coding courses, including how to become a machine-learning engineer and a practical guide to Kubernetes. The comprehensive and interactive text-based courses give you in demand tech skills. And since it’s text-based, there’s no scrubbing back and forth video to find the content you need. So learn better and faster with Educative and get a 20% site-wide discount by going to educative.io/basecs.
[00:00:56] SY: I thought we’ve been programming this whole time and it feels…
[00:01:01] VJ: Nope. Sorry.
[00:01:06] SY: Surprise. And so I’m wondering what the dynamic part is, because I feel pretty dynamic. I don’t know about you, but I feel pretty dynamic.
[00:01:14] VJ: And energized.
[00:01:14] SY: And so I’m very curious. Yeah, man. I’m very curious about what this idea of dynamic programming means. What does it mean? What are we talking about?
[00:01:23] VJ: So dynamic programming is kind of like a design approach to algorithms. It’s a way of approaching problem solving really. And you’re right. We have been programming this whole time. We’ve covered a lot of things like especially recently, we’ve covered a lot of algorithms.
[00:01:43] SY: Yes.
[00:01:43] VJ: We’ve talked about how they work, but we haven’t really stepped back and done like a big picture view of like, “Here’s like the design approach to how this algorithm works.”
[00:01:52] SY: That’s interesting. I don’t even think about algorithms in terms of design.
[00:01:56] VJ: Pair it with some CSS next time and then you’ll be like, “So design-y.”
[00:02:00] SY: There we go.
[00:02:03] VJ: Well, yeah. It is an interesting thing to learn about, but I think you can’t really learn about it until you’ve seen enough different types of algorithms because then you can start to see patterns emerge.
[00:02:14] SY: And how they operate.
[00:02:15] VJ: Yeah. Yeah. And you can start to see how things are similar and different in the way that they are implemented and the way that the algorithms run. So to answer your initial question, dynamic programming is one of those design approaches. It’s a form of algorithm design. Another term for dynamic programming is dynamic optimization, and those names are interchangeable. What they both mean, what dynamic programming really is, is an approach to solving complex problems by breaking them down into their smaller parts. And not only do we break them down into smaller parts, we also store the results of those smaller parts. Those smaller parts are effectively sub-problems. So we break the large problem down into sub-problems and then we store the results of those sub-problems.
[00:03:15] SY: What is the part of that that is dynamic?
[00:03:20] VJ: Without looking it up, if I had to guess why it’s called dynamic programming, I would guess that it’s called that because when you approach the problem you are breaking it into smaller parts and you’re recording, you know, “Oh, this sub-problem, I can solve it like this, and the value of solving it is this thing.” So you’re basically breaking it into parts and you’re changing how you build up the approach to the problem. That’s what I’d guess, but nobody quote me on that because I’m not Wikipedia.
[00:03:49] SY: I am not Wikipedia. Okay. Cool. Okay. So you mentioned that this is a design of an algorithm, which means that there must be other designs of algorithms. What are some of the other designs?
[00:04:00] VJ: There are three approaches, and we’ve actually seen the other two, the two that are not dynamic programming.
[00:04:07] SY: Okay.
[00:04:08] VJ: If we start to learn about them, we’ll see that there are some algorithms we’ve covered that fit very easily into these different approaches. So the first approach that we learned about, I don’t even know how long ago, but we’ve been using it ever since.
[00:04:21] SY: Many episodes ago. Yeah.
[00:04:23] VJ: Many episodes ago. We were using something called the divide-and-conquer approach.
[00:04:28] SY: Oh, yes. That sounds really familiar.
[00:04:31] VJ: So the divide-and-conquer approach was something we learned back when we were introduced to the merge sort algorithm. I know it was a while ago, but if we think back to when we learned about it, the merge sort algorithm basically took a large collection and divided it up into little pieces, into the smallest versions of that set, and then it reconstituted it: it built the collection back up and merged the pieces back together, and that’s how it did its sorting. So the divide-and-conquer approach is basically the idea of dividing a problem into simpler versions of itself, figuring out the solution for those simpler versions, and then applying that same solution to the larger problem.
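To make that divide-and-conquer shape concrete, here’s a minimal merge sort sketch in Python. It’s written just for illustration and isn’t code from the blog post:

```python
def merge_sort(items):
    # Divide: split the collection into its smallest pieces,
    # then conquer by merging the sorted halves back together.
    if len(items) <= 1:
        return items                      # smallest version of the problem
    mid = len(items) // 2
    left = merge_sort(items[:mid])        # solve each half recursively
    right = merge_sort(items[mid:])
    return merge(left, right)             # build the sorted result back up

def merge(left, right):
    # Merge two already-sorted lists into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])               # whatever is left over is already sorted
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))     # [1, 2, 3, 5, 8, 9]
```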
[00:05:20] SY: Okay. Cool. So we have dynamic programming. We have divide and conquer. Is there another one?
[00:05:25] VJ: Yes, there’s one more, and we just learned about it a few episodes ago, or maybe last episode: Dijkstra’s algorithm.
[00:05:32] SY: Cool.
[00:05:32] VJ: We’ve been talking about that quite a bit. It’s a new kind of approach to algorithm design, called the greedy algorithm.
[00:05:43] SY: Oh, that one sounds bad.
[00:05:47] VJ: I guess the way I think about it is like it’s a little bit impatient, and it’s like, “Just give it to me now.”
[00:05:52] SY: Give it to me now. Sounds like me. Cool.
[00:05:56] VJ: But I guess the formal definition for a greedy algorithm is one where you choose the best possible option to solve the problem in the moment.
[00:06:07] SY: Okay.
[00:06:08] VJ: And that’s often referred to as the greedy choice. You’ll see that term when you’re learning about greedy algorithms. And the way that greedy algorithms work is that they make the most efficient choice, the best possible choice, at each stage while the problem is being solved. So while you’re solving the problem, you’re sort of optimistic and you’re like, “I’m going to just choose the best one that I know at the moment.” And eventually, if you’ve written your greedy algorithm correctly, you’ll arrive at the globally optimal choice, even though you’ve chosen the greedy choice each time you made a decision as you ran your algorithm.
[00:06:45] SY: Well, if you’re making the optimal choice at each step, wouldn’t you always end up with the globally optimal choice?
[00:06:52] VJ: Well, you’re not making… did I say the most optimal? Sorry. You’re making the greedy choice, right, at each step. You’re just picking the best thing in that moment for you. So actually a great example is Dijkstra’s algorithm: when we were looking at which neighbor node to visit, we were like, “Okay, which one has the lowest weight? Which one costs the least?” And we were like, “Let’s just go visit that.” That makes sense. That’s the best possible option. We want the shortest path, the least-cost path rather.
[00:07:20] SY: Yes.
[00:07:20] VJ: Now that’s the best choice in that moment. It’s the greedy choice. It’s the best option at the moment. Eventually, with Dijkstra’s algorithm, we do end up with all the shortest paths, but it’s possible that in other situations, if you just pick the best possible choice, you’re not choosing the most optimal one, because it could be the best possible choice in the moment, but if you look at the problem as a whole, there might be something better. The important thing here is that you’re not trying out all the paths. It’s not like breadth-first search or some other graph traversal where you’re like, “I’m going to go check every path.” In Dijkstra’s algorithm, we are very specifically only choosing the least-cost edge to travel, right?
[00:08:03] SY: Okay.
[00:08:03] VJ: So we were never fully exhausting all of our options. We were just like, “Give me the easiest one.”
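Here’s a rough Python sketch of that greedy choice in Dijkstra’s algorithm; the example graph and its weights are made up for illustration:

```python
import heapq

def dijkstra(graph, start):
    # At each step, greedily visit the cheapest reachable node:
    # the "greedy choice" described above.
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    queue = [(0, start)]                     # (cost so far, node)
    while queue:
        cost, node = heapq.heappop(queue)    # cheapest option first
        if cost > distances[node]:
            continue                         # stale entry; a better path was found
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < distances[neighbor]:
                distances[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return distances

# A hypothetical weighted graph as an adjacency map.
graph = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
print(dijkstra(graph, "a"))                  # {'a': 0, 'b': 1, 'c': 3}
```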
[00:08:09] SY: Okay. So that’s kind of like if I’m hungry and I really want a donut, and I’m like, “Should I have a donut?” And I have the donut and then I’m like, “Ooh, now I’m thirsty. I need some milk. Okay, I’m going to go get some milk.” And then afterwards I’m like, “You know what? I think I need like a cake.” And then I have a cake. And then I have a tummy ache at the end. Each decision felt optimal and I was indeed being greedy, but overall, I ended up with a tummy ache.
[00:08:32] VJ: Yeah. That’s a situation when a greedy algorithm…
[00:08:35] SY: Did not end up well.
[00:08:36] VJ: Yeah. Oh, it doesn’t end up well. And basically, like what you’re doing is you’re not going into the fridge or the pantry and you’re not seeing all the possible options. You just went with the thing that was like sitting three feet away from you because you’re like, “Oh, it’s right there. I’m just going to eat it. I want it.”
[00:08:48] SY: Yeah.
[00:08:49] VJ: And so that’s like the optimal choice in the moment, but maybe not the globally optimal choice for you as a whole human being.
[00:08:56] SY: I can definitely relate to that greedy algorithm, for sure.
[00:09:01] VJ: So the thing to note though is that greedy algorithms do work on the problems where at every step there is a choice that is optimal for the problem up to that step. So that’s why Dijkstra works.
[00:09:13] SY: Okay.
[00:09:15] VJ: But you can try to implement some solutions for a problem where you choose the greedy choice and it’s actually not good, like the example you just gave.
[00:09:22] SY: Okay, makes sense.
[00:09:23] VJ: But that’s how they work.
[00:09:24] SY: Okay. So we have divide and conquer. We have the greedy algorithm and now we have this new design, dynamic programming. So how does this compare to, let’s say, greedy algorithm?
[00:09:36] VJ: So dynamic programming is similar to greedy algorithms in that both of the approaches rely on an optimal substructure. And what I mean by that is they both have to make choices, ideally the optimal choice, at each stage when you run the two algorithms, but they do it very differently, and that’s what sets them apart. With the dynamic programming paradigm, you find the optimal solution for every sub-problem, notably every sub-problem, and then you choose the best option from all the sub-problems. But greedy algorithms are greedy. They don’t search through all the sub-problems. They just search through one. And that by definition means they’re going to be less exhaustive.
[00:10:23] SY: Okay. Is it fair to say that it kind of sits right in between the other two algorithm designs? Because with merge sort, the divide and conquer, it is also breaking down a large problem into little bite-sized pieces. And with the greedy algorithm, it’s all about picking the most optimal choice in front of it. And this is still breaking things down, but then picking the optimal across all of the options.
[00:10:46] VJ: Yeah. Actually, that’s a great way to think about it. You’re right. Dynamic programming is similar to divide and conquer because it divides everything into sub-problems, but it’s not as dramatic as divide and conquer because you’re being a little bit smarter as you do it. You’re picking the optimal choice. And the way that dynamic programming picks the optimal choice is actually kind of cool, because divide and conquer doesn’t make choices at all, and greedy algorithms don’t do it like that either. They just do it kind of the silly way, not the intelligent way where you exhaust all your options.
[00:11:16] SY: Okay. So how does dynamic programming actually work? How is it able to make these optimal solutions or these optimal selections across all the possible sub-problems?
[00:11:28] VJ: That’s a great question, and I think the way that I want to answer it is by working through a little problem. It’s a silly problem, but I think it’ll prove the point of how dynamic programming does what it does. So let’s say that I have a little math problem. Nothing really complicated, no Dijkstra, no nonsense like that. We’re just going to try to calculate 5 plus 5 plus 5 plus 5. If we were doing that kind of math problem, how would you approach it?
[00:12:02] SY: Well, I would, I mean, I just add it up, right? So 5 plus 5, 10 plus 5, 15 plus 5, 20.
[00:12:08] VJ: Great. So you ended up at 20. And then what if I said, “What is 5 plus 5 plus 5 plus 5 plus 5?” So I added another 5. So instead of four, there’s five. And imagine that you just did the calculation that 5 plus 5 plus 5 plus 5 equals 20. You already know that.
[00:12:24] SY: Yeah, I would just add five to that. So it’ll be 25.
[00:12:26] VJ: Exactly. So what you just did is you didn’t recalculate everything, right? You didn’t start from the beginning and say, “Okay, what’s five plus five? Ten. What’s five more?” You didn’t do any of that. You just said, “Okay, I already know that four 5s is 20, and now you’re giving me another 5, so I’m going to use my knowledge from the previous problem, 20, and add another 5. We have 25.” So basically what you did is you worked with a sub-problem and you built upon it. You knew that four 5s gives you 20, and you had a problem that built upon it, and part of it had already been solved. You had a sub-problem. And the term for this is an overlapping sub-problem: you see the same sub-problem again and you can reuse the value because you already computed it. You already know what it is.
[00:13:17] SY: Right. No point in doing it all over again.
[00:13:19] VJ: Yeah, exactly. We’re trying to be efficient here. So that’s an example of dynamic programming: you broke the problem down into small steps, and as you went through and solved it, you remembered the value of each sub-problem you already solved. And when the problem got larger and maybe more complicated, you didn’t have to re-compute anything. You used the knowledge that you had just found out.
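In code, reusing the stored sub-problem might look as small as this Python sketch (the variable names are just illustrative):

```python
# The sub-problem we already solved: four 5s.
four_fives = 5 + 5 + 5 + 5      # 20

# The larger problem builds on the stored result instead of re-adding everything.
five_fives = four_fives + 5     # 25
print(five_fives)               # 25
```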
[00:13:43] SY: That makes a lot of sense. Okay. So we talked about the 5 plus 5 plus 5 plus 5 plus 5 example. Can we do another example of how it works on something a little bit more computer science-y?
[00:13:55] VJ: Great. We can do that. I was hoping you’d just be happy with the 5 plus 5 plus 5, but I guess that’s really hard to say.
[00:14:01] SY: I need more.
[00:14:02] VJ: Yeah. All right. Here is a quintessential computer science example. It’s actually a kind of a math example again, but like people seem to use it a lot in CS problems. Fibonacci.
[00:14:13] SY: Oh!
[00:14:14] VJ: We talked about this in a previous episode when we were learning about the golden ratio. To refresh, the Fibonacci sequence is one that starts with 0 and 1, and then to derive any number, you sum the two previous numbers. So the first two numbers are 0 and 1, the next one is 0 plus 1, which is 1, and then the next one is 2. A lot of the time you can solve this problem recursively, where you basically say, “Okay, the thing you’re doing is always the same. You’re just summing the two previous numbers. I can just recursively solve this.” But if you solve it recursively, a lot of the time what you end up doing is repeating yourself, because you are effectively calculating the same thing again and again and again. Does that make sense?
[00:15:01] SY: Sort of. I think so. So if we were to put an abstract formula to it, I have my F of N, right? My F of N is equal to my F of N minus 1, so the number before it, plus my F of N minus 2, which is the number before that. So that’s the formula I’m using to calculate each value. But to calculate the value of the one before it, my F of N minus 1, I have to calculate its F of N minus 1 and its F of N minus 2. And then for me to calculate that one, I have to calculate its F of N minus 1 and its F of N minus 2. So for every F of N that I’m trying to get, I have to keep going back down the chain, such that I’m basically solving each one all over again.
[00:15:47] VJ: Right. That is a good point that you bring up. If I said, “Give me the 10th number in the Fibonacci sequence,” you would basically have to calculate the ninth number and the eighth number, and then do that again for nine, where you’d find the eighth number and the seventh number. And so for the first or second number of Fibonacci, you’d be doing that calculation over and over if you’re finding the tenth number, right? You’d be doing it so many times, which is a little silly, because once you know it, why do you have to do it again? But that’s the problem when you use recursion to solve Fibonacci. That can be a problem. And that’s where dynamic programming can do better, because if we’re already calculating the first number, why do we need to calculate it nine more times to find the tenth? We don’t need to.
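Here’s a small Python sketch of that naive recursive approach, with a call counter added purely to show the repeated work (the counter is an illustrative addition, not something from the episode):

```python
call_count = 0

def fib(n):
    # Naive recursion: the same sub-problems get recomputed over and over.
    global call_count
    call_count += 1
    if n < 2:                    # fib(0) == 0, fib(1) == 1
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))                   # 55
print(call_count)                # 177 calls just to get the 10th number
```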
[00:16:37] SY: Yeah. Okay. That makes sense. You mentioned this idea of like saving the sub-problem and having these solutions and not repeating ourselves on having to recalculate all the time. How would we do that?
[00:16:51] VJ: So this is a fun little thing that actually you probably know, and we’ve sort of referenced it in this series. It’s memoization.
[00:17:01] SY: I got memoization.
[00:17:02] VJ: Yeah. It’s just like a memo pad. But yeah, when I say we can save the values, all I mean is we can just remember what they were by memoizing them, by keeping them saved somewhere, sort of like taking notes. And the cool thing is, if you use memoization when you solve a problem and you’ve broken it down into its sub-problems, if it’s something you haven’t solved before, you can memoize it. You can say, “Oh, I’m going to remember this. This is new for me.” But if you’re solving a problem, breaking it down into sub-problems, and you realize, “Oh, I’ve already done this work before,” you can check your memo pad, your memoized values, and you can say, “Oh, I already know what the first number of Fibonacci is. I don’t need to solve it again. I already know what the ninth number is. I don’t need to solve for it again.” You’re basically saving all that work you’d have to do in recursion, and you eliminate so much of the recursive Fibonacci tree you would have had to build and keep track of, simply because you took the time to remember the solutions to each of the sub-problems that you already saw.
[00:18:07] SY: Okay. So basically if I wanted to solve for, in my Fibonacci sequence, if I wanted to solve for F of 10 and I’ve already solved for the two before, so F of nine plus F of eight, and I’ve already solved for F of nine and F of eight, I can just go to my memo pad and pull out those numbers and not have to recalculate what F of nine and F of eight are.
[00:18:27] VJ: Exactly. And this is exactly like what we did earlier with the 5 plus 5 example. Instead of doing 5 plus 5, which is pretty simple because you can visually look at it and be like, “I know the answer,” let’s say it was Fibonacci and I said, “Find the 50th number.” It wouldn’t be so easy, right?
[00:18:45] SY: Yeah.
[00:18:46] VJ: It might take a while. It might be like, “Oh, we’ll come back next episode.” It’s complicated. But if you’ve already done that work previously and you have memoized it, then you can be like, “Easy. I’ll just tell you what it is.”
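A memoized version of that Fibonacci sketch might look like this in Python; the memo dictionary plays the role of the memo pad described above:

```python
def fib_memo(n, memo=None):
    # Memoized recursion: each sub-problem is solved exactly once.
    if memo is None:
        memo = {}
    if n in memo:                # already on the memo pad? reuse it
        return memo[n]
    if n < 2:
        return n
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(10))              # 55
print(fib_memo(50))              # 12586269025, computed almost instantly
```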
[00:18:58] SY: Okay. So how does that perform? How much time does that save us? How much more efficient does that make us?
[00:19:06] VJ: That is a great question. But that’s a story for another day.
[00:19:10] SY: Oh, okay.
[00:19:12] VJ: Specifically, we’ll get into that next episode. I know we covered a lot in this one, but there’s a lot more to learn about memoization, and there are a couple of things about dynamic programming that we can still get into. So next episode, we’ll finish up this topic and we’ll learn more about memoization too.
[00:19:29] SY: Awesome. And that’s the end of today’s show. If you liked what you heard, please leave us a review and make sure to check out Vaidehi’s blog post. Link to that is in your show notes. I also want to give a huge shout-out to Educative, the text-based online coding course platform. My producer and I got a chance to give it a try. I took “become a front-end engineer”. And Levi, what did you take?
[00:19:51] LS: I took “Learn Python from scratch”.
[00:19:54] SY: And what do you think?
[00:19:55] LS: I thought it was really great. So I’ve been learning Python a little bit and this sort of like…
[00:20:00] SY: Because I’m making you.
[00:20:01] LS: Because you’re making me. True. You’re not making me learn Python specifically.
[00:20:05] SY: That’s true.
[00:20:05] LS: But it is what I chose. So I was a little bit familiar, but I found that these courses are laid out in a really intuitive way. The sections lead seamlessly one into the other; the Python one goes from data types and variables to conditional statements, functions, and loops. And each section has a quiz to make sure that you’re not just blasting through with the information going one way and not the other.
[00:20:35] SY: I love the quizzes.
[00:20:36] LS: Yeah. It really called me out on my BS.
[00:20:38] SY: Yes.
[00:20:40] LS: I was like, “Yeah, yeah, I get it. I get it. I get it.” And then I took the quiz and they’re like, “You don’t get it.” And I was like, “Aha!”
[00:20:46] SY: You got me.
[00:20:47] LS: You got me. And then throughout all of these different sorts of sections, you can code within the service itself. So you don’t need like an external coding thing. I should know what that’s called. Do you know what that’s called?
[00:21:02] SY: I’m not going to tell you. I’m not going to give you an IDE.
[00:21:04] LS: No. Yes. That’s what it says, an IDE. Did you know I’m a producer for a coding podcast?
[00:21:11] SY: A technical podcast.
[00:21:12] LS: Yeah.
[00:21:13] SY: Two actually, two technical podcasts.
[00:21:14] LS: That’s true.
[00:21:16] SY: So if you are interested in learning how to code, make sure to check out Educative and you can actually get 20% off by going to educative.io/basecs. That’s B-A-S-E-C-S. Check it out. This episode was edited and mixed by Levi Sharpe.
[00:21:33] VJ: Bye everyone.
[00:21:34] SY: Thanks for listening. See you next week.
Thank you for supporting the show!