It is easy to write a sequential algorithm that sums up a 100-element vector:

    Sum = a₁ + a₂ + a₃ + . . . + a₁₀₀

It would look something like

    Set i to 1
    Set Sum to 0
    While i < 101 do the following
        Sum = Sum + aᵢ
        i = i + 1
    End of the loop
    Write out the value of Sum
    Stop

It is pretty obvious that this algorithm will take about 100 units of time, where a unit of time is equivalent to the time needed to execute one iteration of the loop. However, it is less easy to see how we might exploit the existence of multiple processors to speed up the solution to this problem. Assume that instead of having only a single processor, you have 100. Design a parallel algorithm that utilizes these additional resources to speed up the solution to the previous computation. Exactly how much faster would your parallel summation algorithm execute than the sequential one? Did you need all 100 processors? Could you have used more than 100?
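For concreteness, the sequential pseudocode above can be rendered as a short Python sketch. The function name and the sample vector are illustrative choices, not part of the exercise; the pseudocode's 1-based subscripts a₁ .. a₁₀₀ map to 0-based indices here.

```python
def sequential_sum(a):
    """Sum the elements of `a` one at a time, mirroring the loop above."""
    total = 0                 # "Set Sum to 0"
    i = 0                     # "Set i to 1" (shifted to 0-based indexing)
    while i < len(a):         # "While i < 101 do the following"
        total = total + a[i]  # "Sum = Sum + a_i"
        i = i + 1             # "i = i + 1"
    return total              # "Write out the value of Sum"

# Summing the vector (1, 2, ..., 100) takes 100 loop iterations.
print(sequential_sum(list(range(1, 101))))  # -> 5050
```

Each pass through the `while` loop corresponds to one of the 100 "units of time" discussed next.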