Design and analysis techniques are important for becoming a good programmer

“Man is a curious being by nature. To quench his thirst for knowledge he makes discoveries, discoveries that make finding answers easier. Such is the importance of algorithms.”

- By Kalindi

Design and analysis of algorithm techniques are of supreme importance. Just as a sense of orientation makes everyday life easier, these techniques make technical work easier. Algorithms are the building blocks of every solvable problem.

Writing programs and writing better programs are two different things. When we write programs, we merely find a possible solution. But when we write programs using efficient algorithms, we reduce the computational power required and make the program better suited to real-life problems in industry.

Situational Example:

Ram has an English exam tomorrow, and he cannot find the meaning of a particular word. He refers to an online dictionary that is unsorted, i.e., the words are distributed unevenly and not arranged in alphabetical order. With this naive approach he has to turn every page to find the word he is searching for, which drains his energy and wastes his precious time.

His friend Shyam comes along and sorts the dictionary (using, say, bubble sort), after which Ram can look the word up quickly in the ordered list, leaving him time to prepare for his exam.

This is the boon of design and analysis of algorithms, which has made everyone's life easier. We may not realise it every day, but it is present everywhere.

Let us discuss the problem statement.

Given an array of integers, our motive is to find the contiguous subarray of numbers having the largest sum.

Note: The subarray is contiguous, i.e., elements cannot be skipped in between and must be picked in a continuous fashion without any breaks. If all the elements are positive, the subarray is the entire array itself. However, the problem gets interesting when some negative elements appear in between.

Let us go through the problem below and analyse how a small modification I made to the algorithm saved 990K iterations, and why that matters.

Consider an array:

Input: [-2, 1, -3, 4, -1, 2, 1, -5, 4]

Output: 6

Explanation: The largest sum is obtained for the subarray [4, -1, 2, 1], which gives {4 + (-1) + 2 + 1} = 6.

Let us now dive into the brute force approach, which is the most commonly used, and then implement an optimized version of it.

Pseudocode of the naive brute force:

void allSubarrays(int arr[], int n) {
    pair<int, int> ind;
    int max_sum = INT_MIN, cur_sum = 0;
    for(int i = 0; i < n; i++) {
        for(int j = i; j < n; j++) {
            cur_sum = 0;
            // sum of elements of subarray(i, j)
            for(int k = i; k <= j; k++) {
                cur_sum += arr[k];
            }
            if(cur_sum > max_sum) {
                max_sum = cur_sum;
                ind = make_pair(i, j);
            }
        }
    }
    cout << "Max sum : " << max_sum << endl;
}

Here, the innermost loop computes the sum of the elements from index 'i' to index 'j'. A pair object stores the starting and ending indices of the maximum-sum subarray.

One can easily note that this particular inner loop is redundant as the sum of the subarray from index 'i' to 'j' can be calculated cumulatively in O(1) instead of creating a loop to calculate it independently in every iteration.

The brute force algorithm has O(n^3) time complexity. A small modification, precomputing the sums of successive elements, reduces the time complexity to O(n^2) at the cost of O(n) additional space.

So based on my research and a thorough understanding of the problem statement, I found a way to compute the sum of elements from index 'i' to index 'j' in O(1).

This approach makes use of a cumulative sum array, most commonly known as a prefix sum array. Let us take a simple example to understand how a prefix sum array works.

Input: [1, 2, 0, 3, 2, -1]
prefixSum array: [1, 3, 3, 6, 8, 7]

To find the sum of array values from index 0 to 3:

The naive algorithm runs a for loop to sum the array values: 1 + 2 + 0 + 3 = 6.

This computation required 4 iterations.

Let us perform the same step using the prefixSum array:

Answer = prefixSum[3] - prefixSum[0] + arr[0] = 6 - 1 + 1 = 6

This is an O(1) computation, requiring a single step.

Optimized implementation:

#include <iostream>
#include <vector>
#include <climits>
using namespace std;

// Builds the prefix sum array: csum[i] = arr[0] + ... + arr[i].
vector<int> cumulativeSum(int arr[], int n) {
    vector<int> csum;
    csum.push_back(arr[0]);
    for(int i = 1; i < n; i++) {
        csum.push_back(csum[i - 1] + arr[i]);
    }
    return csum;
}

// O(n^2) cumulative sum approach
void allSubarrays(int arr[], int n) {
    pair<int, int> ind;
    vector<int> csum = cumulativeSum(arr, n);
    int max_sum = INT_MIN, cur_sum = 0;
    for(int i = 0; i < n; i++) {
        for(int j = i; j < n; j++) {
            // sum of elements of subarray(i, j) in O(1)
            cur_sum = csum[j] - csum[i] + arr[i];
            if(cur_sum > max_sum) {
                max_sum = cur_sum;
                ind = make_pair(i, j);
            }
        }
    }
    cout << "Max sum : " << max_sum << endl;
    cout << "Subarray : ";
    for(int i = ind.first; i <= ind.second; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
}

With cur_sum = csum[j] - csum[i] + arr[i]; we compute the sum in constant time, i.e., O(1). Hence, we essentially removed the innermost for-loop, making use of the prefix sum array to calculate the sum of the required subarray in O(1) time. This significantly reduces the time complexity of the problem from O(n^3) to O(n^2).


Let's say n = 100. Then the O(n^3) algorithm takes on the order of 100^3 = 10,00,000 (10 lakh) iterations to find the optimal solution, while the O(n^2) algorithm needs just 100^2 = 10,000 iterations for the same task.

Hence, my modification to this algorithm saved 9,90,000 iterations compared to the brute force approach, which is a significant improvement in the algorithm's time complexity.

I was able to find an efficient solution because I realised the importance of algorithms and of designing them well. While studying algorithms we always make an effort to find the best solution, and that is only possible with practice. In industry it is very important to find the best solution and to get things done in much less time. This world is a rat race, and every industry wants to be better than the others, so it is essential for a programmer to understand the concepts of design and analysis of algorithms, so that computational power is reduced and an efficient solution is found.

In the algorithm I modified above, one small change saved 990K iterations, which means a great deal of computational power, and therefore energy, was saved. In this era everyone needs better and more efficient solutions, and if we cannot provide them as programmers, we are ultimately not doing our job. The study of design and analysis of algorithms is thus essential to becoming a good programmer, because good programmers ultimately mean fewer iterations, less computational power, less of every stakeholder's time, increased efficiency, and a contribution to saving mother earth by saving energy.
