In this chapter, we introduce the parallel capabilities of IPython, which, through a set of techniques, can drastically reduce execution time. By way of a non-computational analogy: if one painter spends T units of time painting a house, N painters can reduce the total time to T/N units. As will be shown, the computational units can be scaled in two ways: multicore or distributed computing. IPython hides the difference between them from the programmer; the same commands work in both settings. The direct and load-balanced interfaces for dispatching tasks to the computing units will then be introduced. Finally, an example with a database of millions of entries will demonstrate the advantages of parallelism.
For a more detailed description, see http://ipyparallel.readthedocs.io/en/stable/intro.html. Accessed July 2016.
More information on ipcluster profiles can be found at http://ipython.readthedocs.io/en/stable/.
Changing this behavior is beyond the scope of this chapter. More details can be found at http://ipyparallel.readthedocs.io/en/stable/task.html#schedulers. Accessed November 2015.
Chapter 11: Parallel Computing. Springer International Publishing.