In Statistical Machine Translation, alignments frequently jump forward or backward as translation moves from a source position to a target position. We propose several position alignment models for estimating these jump probabilities. Our initial jump probability model is a coarse model with no dependencies. The maximum likelihood estimation of the jump probabilities is performed after word alignment, using the maximum approximation over the output of the IBM Model 4 word alignment model. We systematically add intuitive parameters to obtain more accurate models and evaluate these models by their perplexity on a test set. We also report results with smoothing via linear interpolation to account for data sparseness. The reduction in perplexity indicates how well the additional parameters model the jump probabilities. The best model can then be applied in various MT components, such as the decoder or the iterative word-alignment model training itself.
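The estimation pipeline described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the alignment representation (a list `a` where `a[j]` is the source position aligned to target word `j`), the function names, the interpolation weight, and the choice of a uniform backoff distribution are ours, not the paper's.

```python
from collections import Counter
import math

def jump_probs(alignments):
    """ML estimates of jump probabilities counted from Viterbi-style
    alignments (hypothetical representation: a[j] = source position
    aligned to target word j)."""
    counts = Counter()
    for a in alignments:
        for prev, cur in zip(a, a[1:]):
            counts[cur - prev] += 1  # signed jump width
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()}

def smoothed(p_ml, max_jump, lam=0.9):
    """Linear interpolation of the ML model with a uniform
    distribution over jumps in [-max_jump, max_jump]."""
    uniform = 1.0 / (2 * max_jump + 1)
    return {d: lam * p_ml.get(d, 0.0) + (1 - lam) * uniform
            for d in range(-max_jump, max_jump + 1)}

def perplexity(p, alignments):
    """Perplexity of a jump model on held-out alignments;
    unseen jumps get a tiny floor probability."""
    log_sum, n = 0.0, 0
    for a in alignments:
        for prev, cur in zip(a, a[1:]):
            log_sum += math.log2(p.get(cur - prev, 1e-12))
            n += 1
    return 2 ** (-log_sum / n)
```

Under this sketch, adding dependencies to the model would amount to conditioning the counts on extra context (e.g. word class), and the perplexity comparison proceeds exactly as above.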