Greedy Algorithm

PVSN Raju
Mar 24, 2021


Introduction

Let us start with a simple analogy that gives a clear picture of the greedy technique. In a game of chess, we think about the future consequences of a move before making it. In a game of volleyball, on the other hand, we have to act according to the immediate situation: we make the decision that looks best right now, which often gives a good outcome, but in some cases it does not. The greedy approach is suited to exactly this kind of immediate decision-making. Among the common algorithm design techniques, the greedy approach is one of the simplest. In a greedy approach, each decision is taken on the basis of the information that is currently available, without worrying about how that decision might affect the future.

Greedy strategy

A greedy algorithm works in stages. At each stage, it makes the decision that best suits the current situation without thinking about the future; in other words, some local best is chosen. Many optimization problems can be solved quite naturally using a greedy algorithm. For some problems a greedy algorithm does not find the exact optimum, but it can still provide a solution that is close to optimal.

How do you decide which choice is optimal?

Suppose you have a function that needs to be optimized (maximized or minimized) at a given point. A greedy algorithm makes a choice at every step so as to push the function toward its optimum. The greedy algorithm has only one shot at building the solution, so it never goes back and reverses a decision once it has been made.
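
To make this concrete, here is a minimal sketch in Python (the coin denominations and the amount are assumed example values, not from the article): at every step it commits to the largest coin that still fits and never revisits that choice.

```python
def greedy_coin_change(amount, denominations):
    """Greedy coin change: always take the largest coin that still fits."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:       # commit to this coin; the choice is never undone
            coins.append(coin)
            amount -= coin
    return coins                    # can be suboptimal for arbitrary denominations

# Assumed example: denominations {1, 5, 10, 25} and amount 63
print(greedy_coin_change(63, [1, 5, 10, 25]))   # [25, 25, 10, 1, 1, 1]
```

For standard coin systems this one-shot strategy happens to be optimal, but for arbitrary denominations the inability to reverse a choice can leave a better combination undiscovered, which is exactly the trade-off described above.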

Elements of greedy algorithms

Generally, there are two basic properties of optimal greedy algorithms:

1) Greedy choice property

2) Optimal substructure

1) Greedy choice property

This property says that a globally optimal solution can be obtained by making locally optimal choices. The decision made by a greedy algorithm may depend on the choices made so far, but never on future choices. It makes one greedy choice after another, each time shrinking the given problem to a smaller one.

2) Optimal substructure

A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to its sub-problems. This means we can solve the sub-problems first and then build on those solutions to solve the larger problem.

What are the Components of Greedy Algorithms?

Greedy algorithms have the following components (a generic sketch in code follows the list):

· Candidate set: the set of candidates from which a solution is created.

· Selection function: chooses the best candidate to be added to the solution.

· Feasibility function: checks whether a candidate can be used to contribute to the solution.

· Objective function: assigns a value to a solution or a partial solution.

· Solution function: indicates whether a complete solution has been reached.
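
Putting these components together, here is a minimal, generic sketch of a greedy loop. All function and parameter names below are illustrative assumptions, not a standard API.

```python
def greedy(candidates, select, is_feasible, is_complete):
    """Generic greedy loop built from the components listed above.

    candidates  - the candidate set
    select      - selection function: picks the best remaining candidate
    is_feasible - feasibility function: may this candidate extend the solution?
    is_complete - solution function: has a complete solution been reached?
    """
    solution = []
    remaining = list(candidates)
    while remaining and not is_complete(solution):
        best = select(remaining)            # greedy choice: locally best candidate
        remaining.remove(best)
        if is_feasible(solution, best):     # keep it only if the partial solution stays feasible
            solution.append(best)
    return solution                         # the objective function would score this result
```

A specific problem then only has to supply the selection, feasibility, and completion callbacks.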

What are the steps required for building a Greedy Algorithm?

  1. Feasible: here we check whether a choice satisfies all the constraints of the problem, so that it can contribute to at least one solution of the current problem.
  2. Locally optimal choice: the choice made should be the best one among the options available at the present step.
  3. Unalterable: once a decision is made at any given step, it cannot be changed later.

Greedy algorithms have some advantages and disadvantages:

  1. It is usually quite easy to come up with a greedy algorithm for a problem, and often you can even devise several different greedy algorithms for the same problem. That simplicity is the main advantage of the greedy approach.

2. Analysing the running time of a greedy algorithm is, for the most part, much simpler than for other methods such as Divide and Conquer. With Divide and Conquer it is not always obvious whether the procedure is fast or slow, because at each level of recursion the sub-problem size gets smaller while the number of sub-problems increases.

3. The difficult part is correctness: for greedy algorithms you have to work much harder to understand why they are correct. Even with the right algorithm, it can be hard to prove why it is right. Proving a greedy algorithm correct is more of an art than a science, and it involves a great deal of creativity.

Areas of Application of Greedy algorithm

The greedy approach is used to solve various problems, such as:

· Finding the shortest distance between two vertices using Dijkstra’s algorithm.

· Finding a minimum spanning tree of a graph using Prim’s or Kruskal’s algorithm.

· The coin change problem.

· The fractional knapsack problem.

· Disjoint sets: UNION by size and UNION by height (or rank).

· Job scheduling.

· Greedy techniques can also serve as approximation algorithms for complex problems.
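
As a concrete illustration of one item from this list, here is a sketch of the fractional knapsack approach: sort items by value-to-weight ratio and take as much as fits of the best ratio first. The item values, weights, and capacity below are assumed example data.

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; fractions of an item may be taken."""
    # Greedy choice: highest value per unit of weight first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)            # whole item, or whatever still fits
        total_value += value * (take / weight)
        capacity -= take
    return total_value

# Assumed example: (value, weight) items with knapsack capacity 50
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0
```

Because fractions are allowed, the locally best choice (the densest item) can never block a better overall packing, which is why the greedy strategy is exact for this variant of the knapsack problem.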

Some examples of greedy algorithms are:

  1. Huffman Code
  2. Minimum Spanning Tree
  3. Job Sequencing
  4. Fractional Knapsack Problem
  5. Activity Selection Problem
  6. Machine scheduling

These are some of the classic problems that are solved using the greedy approach.
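
For instance, the activity selection problem from this list can be solved by repeatedly picking the activity that finishes earliest among those compatible with what has already been chosen. The intervals below are assumed example data.

```python
def activity_selection(activities):
    """activities: list of (start, finish) pairs; returns a largest set of non-overlapping ones."""
    selected = []
    last_finish = float("-inf")
    # Greedy choice: always take the compatible activity that finishes earliest.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:        # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected

# Assumed example intervals
print(activity_selection([(1, 4), (3, 5), (0, 6), (5, 7), (8, 11), (2, 13)]))
# [(1, 4), (5, 7), (8, 11)]
```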

Now let us take a problem and examine a candidate algorithm for it using the greedy way of thinking.

Problem 1: Given an array F of size n, where F[i] indicates the length of the i-th file, we want to merge all these files into one single file. Check whether the following algorithm gives the best (optimal) solution for this problem or not.

Algorithm: Merge the files contiguously. That means select the first two files and merge them. Then take the output of the previous merge and merge it with the third file, and keep going…

Note: Given two files A and B with sizes m and n, the complexity of merging is O(m + n).

Solution: This algorithm will not produce the optimal solution. As a counterexample, let us consider the following array of file sizes.

F = {10,5,100,50,20,15}

As per the above algorithm, we first merge the first two files (of sizes 10 and 5), and as a result we get the following list of files. In the list below, 15 is the size of the merged file, which is also the cost of that merge.

{15, 100, 50, 20, 15}

Similarly, merging 15 with the next file of size 100 produces {115, 50, 20, 15}. In the subsequent steps the list becomes {165, 20, 15}, then {185, 15}, and finally {200}.

The total cost of merging = Cost of all merging operations = 15 + 115 + 165 + 185 + 200 = 680.

To see whether the above result is optimal, consider merging the files in sorted order: {5, 10, 15, 20, 50, 100}.

For this example, following the same approach, the total cost of merging = 15 + 30 + 50 + 100 + 200 = 395.

So the given algorithm does not give the best (optimal) solution.
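
A short sketch can reproduce both totals above and also show the fully greedy strategy of always merging the two smallest remaining files, which is the standard (Huffman-style) approach to this problem. The function names below are illustrative.

```python
import heapq

def contiguous_merge_cost(sizes):
    """Cost of the algorithm in the problem: merge the files left to right."""
    total, current = 0, sizes[0]
    for size in sizes[1:]:
        current += size             # merging two files costs the sum of their sizes
        total += current
    return total

def two_smallest_merge_cost(sizes):
    """Greedy strategy: always merge the two smallest remaining files."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        merged = heapq.heappop(heap) + heapq.heappop(heap)
        total += merged
        heapq.heappush(heap, merged)
    return total

F = [10, 5, 100, 50, 20, 15]
print(contiguous_merge_cost(F))          # 680, as computed above
print(contiguous_merge_cost(sorted(F)))  # 395, the sorted order above
print(two_smallest_merge_cost(F))        # 395 for this data
```

For this particular input the sorted contiguous order happens to match the two-smallest strategy, but the heap-based greedy version is the one that stays optimal for arbitrary inputs.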
