The semester is over: yesterday was the final exam. It always feels like a big achievement when another semester passes successfully.
Anyway, this past semester I taught linear programming (here is another post on this). Having covered primal/dual simplex and interior point methods, I wanted to spend some time on computational comparisons of these methods. While many textbooks cover such comparisons, they rarely discuss parallelism in any depth. This is where an OR blog came to the rescue!
Michael Trick had a recent post on parallelism, calling on the OR community to push for further improvements in affordable parallel computing. The real fun begins in the comments (scroll down to see them). There you can learn about the Tesla Personal Supercomputer (a PC with up to 960 parallel processing cores) and how it relates to optimization routines, about CUDA (the parallel computing architecture behind Tesla), and (especially in comments #2-5) about parallel computing in linear programming.
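To give a flavor of why parallelism matters here, consider that interior point methods spend most of their time on linear algebra (forming and solving the normal equations each iteration), and that work splits naturally across rows of the constraint matrix. The sketch below is my own toy illustration, not something from Trick's post or its comments: a matrix-vector product computed in chunks across a worker pool, using only the Python standard library. Real solvers do this in native BLAS, multicore, or GPU (CUDA) kernels rather than in Python.

```python
# Toy sketch: parallelize A @ x by splitting A's rows across workers.
# A ThreadPoolExecutor is used here for portability; because of Python's
# GIL this gives no real speedup for pure-Python arithmetic -- production
# solvers parallelize inside native BLAS or GPU kernels instead.
from concurrent.futures import ThreadPoolExecutor

def rows_times_vector(rows, x):
    """Dot each row in `rows` with the vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in rows]

def parallel_matvec(A, x, workers=4):
    """Compute A @ x by dividing A's rows among `workers` workers."""
    chunk = max(1, len(A) // workers)
    pieces = [A[i:i + chunk] for i in range(0, len(A), chunk)]
    result = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for part in pool.map(rows_times_vector, pieces, [x] * len(pieces)):
            result.extend(part)
    return result

if __name__ == "__main__":
    A = [[1, 2], [3, 4], [5, 6]]
    x = [10, 1]
    print(parallel_matvec(A, x))  # [12, 34, 56]
```

The same row-splitting idea is what makes a 960-core card like the Tesla attractive for the dense linear algebra inside barrier methods, while the sequential pivoting logic of simplex is much harder to parallelize.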
Thanks M. Trick, M. Saltzman and all who have commented!