## Using the BSP cost model to optimise parallel neural network training. Future Generation Computer Systems (1998)

Citations: 7 (0 self)

### BibTeX

```bibtex
@MISC{Rogers98usingthe,
  author = {R. O. Rogers and D. B. Skillicorn},
  title  = {Using the BSP cost model to optimise parallel neural network training. Future Generation Computer Systems},
  year   = {1998}
}
```


### Abstract

We derive cost formulae for three different parallelisation techniques for training supervised networks. These formulae are parameterised by properties of the target computer architecture. It is therefore possible to decide the best match between parallel computer and training technique. One technique, exemplar parallelism, is far superior for almost all parallel computer architectures. The formulae also take into account optimal batch learning as the overall training approach.
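The abstract's central idea can be sketched as a selection procedure: each parallelisation technique has a cost formula parameterised by machine properties, and the best technique for a given machine is the one with minimum predicted cost. The two cost functions below are illustrative placeholders, not the paper's actual formulae; `work` and `comm` are hypothetical workload parameters.

```python
# Sketch of architecture-driven technique selection. The cost formulas are
# PLACEHOLDERS chosen only to illustrate the argmin idea, not the paper's
# derived formulae: g is the communication permeability and l the barrier
# synchronisation latency of the target machine, p the processor count.

def cost_exemplar(p: int, g: float, l: float,
                  work: float = 1e6, comm: float = 1e3) -> float:
    """Hypothetical cost: work divides over p, light traffic, one barrier."""
    return work / p + comm * g + l

def cost_block(p: int, g: float, l: float,
               work: float = 1e6, comm: float = 1e4) -> float:
    """Hypothetical cost: same work split, heavier traffic, p barriers."""
    return work / p + comm * g + p * l

def best_technique(p: int, g: float, l: float) -> str:
    """Pick the technique whose cost formula predicts the lowest cost."""
    costs = {"exemplar": cost_exemplar(p, g, l),
             "block": cost_block(p, g, l)}
    return min(costs, key=costs.get)
```

With these placeholder costs, exemplar parallelism wins across machine parameters, which happens to mirror the paper's conclusion; the real decision rests on the derived formulae in the text.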

### Citations

3629 | Neural Networks: A Comprehensive Foundation (2nd ed.)
- Haykin
- 1999
Citation Context ...or each big superstep. The communication cost for small supersteps is therefore (M - y)(x - 1). The complete BSP cost for block parallelism is: C_BP = ((N - 1) + 2(LAW - 1)) x/p + (2M + (M - y)(x - 1))g + xl (4). Clearly, block parallelism can be made more efficient by reducing the number of small supersteps. If each processor is assigned an entire layer, the small supersteps can be eliminated. This is known as...

33 | Theory, practice, and a tool for BSP performance prediction
- Hill, Crumpton, et al.
- 1996
Citation Context ...ual architectures. 2 Bulk Synchronous Parallelism Bulk Synchronous Parallelism (BSP) is a parallel computation model that facilitates the development and analysis of general-purpose parallel software [5, 9]. The BSP abstract machine is very simple: it is a set of processor-memory pairs, linked by some interconnection network. This abstraction can be easily implemented by any MIMD machine. BSP programs con...

15 | A proposal for a BSP Worldwide standard. BSP Worldwide, http://www.bsp-worldwide.org
- Goudreau, Hill, et al.
- 1996
Citation Context ...separates data transfer from synchronisation and makes it impossible to write programs that deadlock. The current best implementation of the BSP model is BSPLib, a library callable from C and Fortran [3]. The cost of a BSP superstep can be straightforwardly computed from the program text and two architecture-specific parameters. The first of these parameters, g, measures the permeability of the parallel...
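The superstep costing described in this excerpt follows the standard BSP formula: a superstep in which each processor performs at most w operations and sends or receives at most h words costs w + h·g + l, and a program's cost is the sum over its supersteps. A minimal sketch, with g and l taken as the two architecture-specific parameters the text names:

```python
# Standard BSP superstep cost: w local operations, h words communicated,
# g = communication permeability (cost per word, in flop units),
# l = barrier synchronisation latency (in flop units).

def superstep_cost(w: int, h: int, g: float, l: float) -> float:
    """Cost of one BSP superstep."""
    return w + h * g + l

def program_cost(supersteps, g: float, l: float) -> float:
    """Total cost of a BSP program: sum over its (w, h) supersteps."""
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

# Example: three supersteps on a machine with g = 4 flops/word, l = 100 flops.
total = program_cost([(1000, 50), (500, 200), (2000, 10)], g=4.0, l=100.0)
```

Because the cost depends only on per-superstep (w, h) counts readable from the program text, this is what makes the paper's technique-versus-architecture comparison possible.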

7 | Flexible data parallel training of neural networks using MIMD computers
- Besch, Pohl
- 1995
Citation Context ...tectures. Formulae also take into account optimal batch learning as the overall training approach. 1 Introduction Neural network learning is expensive, and hence a natural application for parallelism [1, 2, 8]. Almost all of these papers demonstrate that speedup can be achieved, although it is often disappointing when compared to the resources used. In this paper we answer a much more interesting question: g...

5 | Simulating artificial neural networks on parallel architectures
- Serbedzija
- 1996
Citation Context ...tectures. Formulae also take into account optimal batch learning as the overall training approach. 1 Introduction Neural network learning is expensive, and hence a natural application for parallelism [1, 2, 8]. Almost all of these papers demonstrate that speedup can be achieved, although it is often disappointing when compared to the resources used. In this paper we answer a much more interesting question: g...

3 | Questions and answers about BSP. Scientific Programming
- Skillicorn, Hill, et al.
- 1996
Citation Context ...ual architectures. 2 Bulk Synchronous Parallelism Bulk Synchronous Parallelism (BSP) is a parallel computation model that facilitates the development and analysis of general-purpose parallel software [5, 9]. The BSP abstract machine is very simple: it is a set of processor-memory pairs, linked by some interconnection network. This abstraction can be easily implemented by any MIMD machine. BSP programs con...

2 | On the distributed implementation of the back-propagation
- Anguita, Canal, et al.
- 1996
Citation Context ...tectures. Formulae also take into account optimal batch learning as the overall training approach. 1 Introduction Neural network learning is expensive, and hence a natural application for parallelism [1, 2, 8]. Almost all of these papers demonstrate that speedup can be achieved, although it is often disappointing when compared to the resources used. In this paper we answer a much more interesting question: g...

2 | A framework for parallel data mining using neural networks
- Rogers
- 1997
Citation Context ...opy multi-layer perceptron. Despite the use of a specific network, the results apply to a wide range of supervised neural network algorithms. Detailed derivation of these cost formulae can be found in [6]. We assume that the neural network consists of L layers with M neurons per layer, with each layer fully connected to the neurons in the preceding and succeeding layers. The total number of neurons is...
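For the topology this excerpt describes (L layers of M neurons each, adjacent layers fully connected), the derived sizes are standard: L·M neurons in total and (L − 1)·M² weights. A small helper, where the example values of L and M are illustrative rather than from the paper:

```python
# Sizes of an L-layer multi-layer perceptron with M neurons per layer,
# each layer fully connected to the next (as assumed in the excerpt).

def network_size(L: int, M: int) -> tuple:
    """Return (total neurons, total weights) for the L x M topology."""
    neurons = L * M              # M neurons in each of the L layers
    weights = (L - 1) * M * M    # M*M weights between each adjacent pair
    return neurons, weights

# Example (hypothetical values): a 4-layer network, 100 neurons wide.
neurons, weights = network_size(L=4, M=100)
```

The quadratic growth of the weight count in M is what makes the per-exemplar training work, and hence the cost formulae, dominated by the weight terms.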

2 |
Batch size and training times in supervised and unsupervised networks
- Rogers, Skillicorn
- 1997
(Show Context)
Citation Context ...hs required for convergence increases linearly with the batch size. For batch sizes smaller than B0, the number of epochs required for convergence is constant as B decreases. Details can be found in [7]. 6 Using Parallelism and Batch Learning For sequential computation, batch learning provides substantial performance improvement. This is not so clear for a parallel implementation because the multipl...
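The batch-size behaviour this excerpt describes (epochs to convergence roughly constant below a threshold B0, growing linearly above it) can be sketched as a piecewise model. B0 and the base epoch count E0 below are hypothetical inputs, not values from the paper:

```python
# Piecewise model of convergence cost versus batch size B, following the
# excerpt: constant below the threshold B0, linear in B above it.
# B0 and E0 are HYPOTHETICAL parameters for illustration.

def epochs_to_converge(B: int, B0: int, E0: int) -> float:
    """Epochs needed for convergence as a function of batch size B."""
    return float(E0) if B <= B0 else E0 * B / B0

# With B0 = 64 and E0 = 10: doubling B beyond B0 doubles the epochs,
# so batch sizes above B0 buy no sequential speedup.
small = epochs_to_converge(32, B0=64, E0=10)
large = epochs_to_converge(128, B0=64, E0=10)
```

This is why B0 is the optimal batch size for sequential training, and why the paper must re-examine the trade-off once the per-epoch cost itself is reduced by parallelism.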