Write a function that keeps track of the elapsed time as it executes the sort procedure on a randomly chosen vector. Use that function to write a program that produces a table of the observed running times for a predefined set of sizes, as shown in the following sample run:
The best way to measure elapsed system time for programs of this sort is to use the ANSI clock function, which is exported by the ctime interface. The clock function takes no arguments and returns the amount of time the processing unit of the computer has used in the execution of the current program. Both the unit of measurement and the type used to store the result of clock (the type clock_t) vary from machine to machine, but you can always convert the system-dependent clock units into seconds using the following expression:
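   double(ticks) / CLOCKS_PER_SEC

Here CLOCKS_PER_SEC is a constant defined by ctime that gives the number of clock units per second, and ticks stands for a value of type clock_t returned by clock. In practice, you call clock once before and once after the computation you want to time and convert the difference. The fragment below is a minimal sketch of that pattern; the variable names start, finish, and elapsed are illustrative:

   clock_t start = clock();             /* record the tick count before the work */
   ... perform the computation being timed ...
   clock_t finish = clock();            /* record the tick count after the work  */
   double elapsed = double(finish - start) / CLOCKS_PER_SEC;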
Unfortunately, timing a program that runs quickly requires some subtlety, because there is no guarantee that the system clock unit is precise enough to measure the elapsed time. For example, if you used this strategy to time the process of sorting 10 integers, the odds are good that the value of elapsed at the end of the fragment would be 0. The reason is that the processing unit on most machines can execute many instructions in the space of a single clock tick, almost certainly enough to complete the entire sorting process for a vector of 10 elements. Because the system's internal clock may not tick in the interim, the values recorded for start and finish are likely to be the same. The best way to get around this problem is to repeat the calculation many times between the two calls to the clock function. For example, if you want to determine how long it takes to sort 10 numbers, you can perform the sort-10-numbers experiment 1000 times in a row and then divide the total elapsed time by 1000. This strategy gives you a much more accurate timing measurement, as illustrated in the sketch that follows.
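The program below is one way to put these pieces together. It is a minimal sketch rather than a definitive solution: the helper names randomVector and timeSort are invented for this example, std::sort stands in for whatever sort procedure you are measuring, and the sizes in main are simply a plausible predefined set. Note that the per-trial vector copy is included in the measured time, a small overhead relative to the sort itself.

   #include <algorithm>
   #include <cstdlib>
   #include <ctime>
   #include <iomanip>
   #include <iostream>
   #include <vector>
   using namespace std;

   /* Fills an n-element vector with pseudorandom integers. */
   vector<int> randomVector(int n) {
       vector<int> vec(n);
       for (int i = 0; i < n; i++) {
           vec[i] = rand();
       }
       return vec;
   }

   /*
    * Returns the average number of seconds needed to sort an
    * n-element vector.  The experiment is repeated trials times
    * between the two calls to clock, and the total elapsed time
    * is divided by the number of trials, as described above.
    */
   double timeSort(int n, int trials) {
       vector<int> master = randomVector(n);
       clock_t start = clock();
       for (int t = 0; t < trials; t++) {
           vector<int> vec = master;      /* copy so each trial sorts unsorted data */
           sort(vec.begin(), vec.end());  /* stand-in for the sort being measured   */
       }
       clock_t finish = clock();
       return double(finish - start) / CLOCKS_PER_SEC / trials;
   }

   int main() {
       srand(int(time(nullptr)));
       cout << setw(8) << "n" << setw(16) << "time (sec)" << endl;
       for (int n : {10, 100, 1000, 10000}) {
           cout << setw(8) << n << setw(16) << timeSort(n, 1000) << endl;
       }
       return 0;
   }

For the smaller sizes, the 1000 repetitions are what make the per-sort figure meaningful; for very large vectors, you could reduce the trial count to keep the total running time manageable.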