We cannot store a decimal number with infinite precision, but there may be a way to represent one, just as we represent infinite lists in Haskell.
The first idea that came to me was to represent a decimal number as something similar to codata, so that for any given natural number k, we can compute the number precise to k digits.
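Here is a minimal sketch of what I mean (CReal and approx are names I made up for illustration): a number is a function that, given k, returns an integer n with x ≈ n / 10^k, i.e. the number precise to k digits.

```haskell
-- A real number as a function from a requested precision k to a
-- scaled-integer approximation: approx x k is an n with x ~ n / 10^k.
newtype CReal = CReal { approx :: Int -> Integer }

a :: CReal                              -- 0.333...
a = CReal (\k -> (10 ^ k) `div` 3)      -- 3, 33, 333, ...

b :: CReal                              -- 0.666...
b = CReal (\k -> (2 * 10 ^ k) `div` 3)  -- 6, 66, 666, ...

-- ghci> approx a 5  ==>  33333, i.e. 0.33333
```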
But there is an obvious problem. Consider a = 0.333... and b = 0.666...: if we add them, we get ans = 0.999... (a sequence of digits), but we can never tell whether a + b == 1 in this case.
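To make the problem concrete, here is the naive addition on the sketch above (addNaive is again a made-up name):

```haskell
-- Add the k-digit approximations directly, with no guard digits.
addNaive :: CReal -> CReal -> CReal
addNaive x y = CReal (\k -> approx x k + approx y k)

-- ghci> approx (addNaive a b) 5  ==>  99999, i.e. 0.99999
```

Every finite k shows 99...9, yet the true sum is exactly 1, and no finite number of digit inspections can settle a + b == 1.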
What I want is to define decimal numbers in such a way that they support the +, -, *, /, > and == operations, and no matter which of +, -, *, / we apply to them, we get new decimal numbers that we can compute precise to k digits for any natural number k.
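For +, -, *, / this at least seems plausible if "precise to k digits" is read as "within 10^-k of the true value" rather than "the exact first k digits": each operation asks its operands for a few guard digits. A sketch for + on the CReal above, assuming approx x k is always within one unit of x * 10^k (addC is a made-up name):

```haskell
-- Addition with two guard digits: the operand errors (at most one unit
-- each at precision k+2) plus the final rounding stay within about one
-- unit in the k-th digit of the result.
addC :: CReal -> CReal -> CReal
addC x y = CReal (\k -> (approx x (k + 2) + approx y (k + 2)) `div` 100)

-- ghci> approx (addC a b) 5  ==>  99999, which is 1.00000 to within
-- one unit in the last digit.
```

With this reading, > can also be made to terminate whenever the two numbers actually differ, by refining k until the approximations separate; it is exact == that still looks problematic, as the a + b example shows.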
I'm wondering: is there an idea that could make this work?