Managed languages improve programmer productivity with type safety and garbage collection, which eliminate memory errors such as dangling pointers, double frees, and buffer overflows. However, programs may still leak memory if programmers forget to eliminate the last reference to an object that will not be used again. Leaks slow programs by increasing collector workload and frequency. Growing leaks crash programs. Instead of crashing, leak pruning extends program availability by predicting and reclaiming leaked objects at run time. Whereas garbage collection over-approximates live objects using reachability, leak pruning predicts dead objects and reclaims them based on how stale they are and the size of stale data structures. Leak pruning preserves semantics because it waits for heap exhaustion before reclaiming objects and then poisons references to objects it reclaims. If the program later tries to access these objects, the virtual machine (VM) throws an internal error. We implement leak pruning in a Java VM, show its overhead is low, and evaluate it on 10 leaking programs. Leak pruning does not help two programs, executes four substantial programs 1.6-35X longer, and executes four programs, including two leaks in Eclipse, for at least 24 hours. In the worst case, leak pruning defers fatal errors. In the best case, programs with unbounded memory requirements execute indefinitely and correctly in bounded memory with consistent throughput.
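The mechanism the abstract describes — track how stale each object is, wait for heap exhaustion, reclaim the stalest candidates, and poison references so that later accesses fail with a VM error — can be sketched outside a VM. The following is an illustrative model only, not the paper's implementation: the class names (`TrackedRef`, `pruneStalest`), logical timestamps, and the use of `IllegalStateException` to stand in for the VM's internal error are all assumptions made for this sketch.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of leak pruning's policy (not the paper's VM code).
 * Each reference records a logical last-access time. On simulated heap
 * exhaustion, the pruner reclaims the stalest object and poisons the
 * reference; later accesses throw, mirroring the VM's internal error.
 */
final class LeakPruningSketch {

    static final class TrackedRef<T> {
        private T target;        // null once poisoned (reclaimed)
        private long lastAccess; // logical timestamp of last use

        TrackedRef(T target, long now) {
            this.target = target;
            this.lastAccess = now;
        }

        /** Access the object; fails if it was pruned (poisoned reference). */
        T get(long now) {
            if (target == null) {
                // A real VM would raise an internal error here.
                throw new IllegalStateException("access to pruned object");
            }
            lastAccess = now;
            return target;
        }

        long staleness(long now) { return now - lastAccess; }

        void poison() { target = null; } // drop the reference so GC can reclaim
        boolean poisoned() { return target == null; }
    }

    /** On heap exhaustion, reclaim the stalest still-live tracked object. */
    static <T> TrackedRef<T> pruneStalest(List<TrackedRef<T>> refs, long now) {
        TrackedRef<T> victim = null;
        for (TrackedRef<T> r : refs) {
            if (!r.poisoned()
                    && (victim == null || r.staleness(now) > victim.staleness(now))) {
                victim = r;
            }
        }
        if (victim != null) victim.poison();
        return victim;
    }

    public static void main(String[] args) {
        List<TrackedRef<String>> heap = new ArrayList<>();
        heap.add(new TrackedRef<>("hot", 0));
        heap.add(new TrackedRef<>("leaked", 0));
        heap.get(0).get(50);     // "hot" is used again; "leaked" goes stale

        pruneStalest(heap, 100); // simulated heap exhaustion at t = 100
        System.out.println(heap.get(1).poisoned()); // the stale object was pruned

        try {
            heap.get(1).get(101); // program later touches the pruned object
        } catch (IllegalStateException e) {
            System.out.println("error: " + e.getMessage());
        }
        System.out.println(heap.get(0).get(102)); // live object still accessible
    }
}
```

Note that this sketch collapses the paper's richer policy (sizes of stale data structures, prediction across whole data structures rather than single objects) into a single per-object staleness comparison; it is meant only to make the poison-on-reclaim semantics concrete.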