Big-O notation is simply a relationship between the size of the input (the number of elements, in your case) and the complexity (time complexity in your case, though it can also be space complexity).
You're correct about the bubble sort. Because it loops n times inside another loop of n times, the time complexity is O(n²).
Quicksort is slightly different. It does a number of passes that depends on n but, in each pass, it manages to put all the values lower than the pivot on the "left" and all values higher than the pivot on the "right". Both halves are still unsorted, but you know that all the left elements are less than any of the right elements (let's call this the pivot rule).
This basically chops the workload in half at each level of recursion, which gives an average depth of O(log n). Similar to binary search or balanced trees, any algorithm that divides the workload by a constant factor at each step takes O(log n) steps.
Combining the two - O(n) work to partition each level, across O(log n) levels - gives you O(n log n).
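To make the pivot rule concrete, here's a minimal quicksort sketch in Python (the question doesn't name a language, so this is just an illustration). Each call partitions the values around a pivot, then recurses on the two independent halves:

```python
def quicksort(arr):
    """Sort a list by partitioning around a pivot, then recursing on each half."""
    if len(arr) <= 1:
        return arr  # zero or one element is already sorted
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]    # everything less than the pivot
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]   # everything greater than the pivot
    # The pivot rule: all of `left` < all of `right`, so sorting each
    # half independently and concatenating gives a fully sorted list.
    return quicksort(left) + middle + quicksort(right)
```

This version builds new lists for clarity; textbook quicksort partitions in place, but the O(n log n) average behaviour is the same either way.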
This Wikipedia page actually shows a nice little graphic on the top right that shows quicksort in action. Since a picture is worth a thousand words (and an animation is worth a thousand pictures), you should watch it for a while to understand.
You'll see that it first divides the workspace in two then swaps elements between the two halves until the pivot rule is met.
Because the workload is divided into two totally separate, independent areas, quicksort is ripe for parallel processing with no resource contention. Provided you have enough processors, as soon as you've partitioned the data into two areas, you can give each area to a separate processor for further partitioning. This is not possible with bubble sort, since that sort doesn't give you two independent areas.
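As a rough sketch of that idea (again just an illustration, using Python's standard `concurrent.futures` module): partition once, then hand each independent half to its own worker process. Here `sorted` stands in for the recursive sorting of each half:

```python
from concurrent.futures import ProcessPoolExecutor

def parallel_quicksort(arr):
    """Partition once, then sort each independent half in a separate process."""
    if len(arr) <= 1:
        return list(arr)
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    # The two halves share no data, so the workers never contend for it.
    with ProcessPoolExecutor(max_workers=2) as pool:
        future_left = pool.submit(sorted, left)    # stand-in for recursing in parallel
        future_right = pool.submit(sorted, right)
        return future_left.result() + middle + future_right.result()
```

In practice the process start-up cost only pays off for large inputs, and a real implementation would stop spawning workers below some threshold, but the key point stands: the partition step is what creates work that can be farmed out with no contention.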