Numerical computing is a fundamental tool for scientific and engineering applications – but can we really trust the results? Typical numerical computations accumulate error from multiple sources: input uncertainty, sampling (discretization), and roundoff. In a floating-point computation, failing to account for sampling and roundoff error can produce results that appear exact but are actually wrong. Trusting such results could lead to confusion, or to catastrophe!
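As a concrete illustration (a standard example, not taken from any particular application), consider evaluating 1 - cos(x) for tiny x in double precision: catastrophic cancellation yields a result that looks exactly zero, while the true value is nonzero.

```python
import math

x = 1e-9

# Naive form: cos(1e-9) rounds to exactly 1.0 in double precision,
# so the subtraction returns 0.0 - an apparently exact, wrong answer.
naive = 1.0 - math.cos(x)

# Algebraically identical form that avoids the cancellation:
# 1 - cos(x) == 2 * sin(x/2)^2, which keeps the significant digits.
stable = 2.0 * math.sin(x / 2.0) ** 2

print(naive)   # 0.0
print(stable)  # about 5e-19, the correct magnitude
```

Nothing in the naive result signals that every significant digit has been lost; the programmer must know to look for it.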
Unum computing promises to eliminate sampling and roundoff error by operating on variable-precision intervals called uboxes, which are guaranteed to bound the correct result. Unums also promise lower cost than existing floating-point number formats: on average they require fewer bits to store each number, potentially saving both memory and bandwidth.
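The underlying idea of guaranteed enclosure can be sketched with ordinary interval arithmetic: each value is a closed interval whose endpoints are rounded outward after every operation, so the true result always lies inside. This is a toy illustration of the interval principle only, not a unum or ubox implementation; the class and method names are invented for the example.

```python
import math


class Interval:
    """Toy closed interval [lo, hi] with outward rounding after each op.

    Illustrative sketch of guaranteed enclosure; not a real unum library.
    """

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def _widen(lo, hi):
        # Outward rounding: nudge each endpoint one ulp outward so the
        # mathematically exact result is guaranteed to stay enclosed.
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, other):
        return Interval._widen(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Endpoint products cover all sign combinations.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval._widen(min(p), max(p))

    def __contains__(self, x):
        return self.lo <= x <= self.hi


# 0.1 has no exact binary representation, but a one-ulp interval around
# the nearest double is guaranteed to contain the true value 1/10.
tenth = Interval(math.nextafter(0.1, 0.0), math.nextafter(0.1, 1.0))
total = tenth + tenth + tenth
print(total.lo, total.hi)  # a narrow interval guaranteed to contain 0.3
```

The price of the guarantee is visible here too: the interval widens with every operation, which is one source of the cost concerns discussed below.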
Despite these great promises, unum computing has yet to be proven on significant scientific applications. This may be due in part to a lack of programming-model and library support, and in part to gaps in current understanding of suitable numerical methods and their associated costs. Unum computing has also been criticized as impractical, or even dangerous: for incurring far greater costs than floating-point on certain computations, and for discounting the need for traditional error analysis.
In this project, you will critically evaluate unum methods through practical applications in physical simulation. You will contribute to the current understanding of unum computing, both its benefits and its limitations, and so help drive an exciting new area of numerical research.