First we start with non-candidates just to put on record the obvious reasons why they don't solve the problem.
Arbitrary Precision Floating Point
As I understand it, again these systems fail simply because they don't represent all the computable reals: any fixed precision, however large, leaves out numbers like 1/3 or the square root of 2. A consequence of this is that they have poor closure properties: even exact arithmetic on representable numbers can produce unrepresentable results.
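A concrete illustration of the closure failure, using Python's decimal module as a stand-in for any arbitrary-precision floating-point system (the example is mine, not from the source):

```python
# No finite precision, however large, can represent 1/3 exactly,
# so even a simple field operation fails to close over the
# representable numbers.
from decimal import Decimal, getcontext

getcontext().prec = 50           # 50 significant digits

third = Decimal(1) / Decimal(3)  # rounded: 0.333...3 (50 threes)
print(third * 3)                 # 0.999...9, not 1
print(third * 3 == Decimal(1))   # False: closure fails
```

Raising the precision only moves the failure further out; no finite precision makes the representable numbers closed under division.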
Computable N-ary expansions for some N>1
Computable Continued Fractions
They not only represent all the real numbers, they do so uniquely, if you are using regular continued fractions (with the usual convention that the final term of a finite expansion exceeds 1). Furthermore, the rationals are exactly the numbers represented by finite continued fractions.

Unfortunately, the combination of these two features is enough to guarantee that addition is not effective over this representation: a general algorithm for adding arbitrary regular continued fractions will loop forever without producing any more terms if you apply it to two irrational numbers whose sum is rational (to emit a term it must pin down an integer part exactly, and no finite number of input terms settles that when the sum sits on the boundary).

Possibly you might get an admissible representation by using irregular continued fractions, but then you lose the uniqueness.

You can still use continued fractions as part of an admissible representation (e.g. as a way of representing the rationals in a convergent sequence), or you can use them and put up with occasional non-termination. But as they stand, they are inadmissible.
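The finite-expansion claim for rationals is just the Euclidean algorithm in disguise; a minimal sketch (the function name is mine, not from the source):

```python
from fractions import Fraction

def regular_cf(x: Fraction) -> list[int]:
    """Regular continued fraction of a rational: repeatedly split off
    the integer part and invert the remainder. This terminates because
    it is the Euclidean algorithm on numerator and denominator."""
    terms = []
    while True:
        a = x.numerator // x.denominator  # floor of x
        terms.append(a)
        frac = x - a
        if frac == 0:
            return terms
        x = 1 / frac

print(regular_cf(Fraction(355, 113)))  # [3, 7, 16], the classic pi approximation
```

Note the wrinkle behind the uniqueness convention: [3, 7, 16] and [3, 7, 15, 1] denote the same rational, so finite expansions are only unique once you insist the final term exceeds 1.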
Minsky defines a computable real as one whose decimal (or n-ary) expansion can be "generated sequentially by a Turing machine". This definition suggests representing a computable real as an algorithm that generates its positional expansion. Unfortunately, under this representation the operation of addition is not itself computable.

Consider two algorithms, the first generating the decimal expansion "0.1111111..." and the second "0.8888...". An algorithm for the expansion of their sum cannot proceed by evaluating the operand algorithms: however many digits it reads, the partial sums remain consistent with both an answer just below 1 (first digit 0) and an answer of 1 or just above (first digit 1), so it never has enough information to emit the first digit of the result. It would therefore have to inspect the operand algorithms and somehow deduce the result. In doing so it would effectively be deciding whether two algorithms compute the same number, and from such a decision procedure a solution to the halting problem could be constructed.
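The digit-reading half of this argument can be checked mechanically. A sketch (the helper and its name are mine, not from any source): after n digits of each operand, the sum is only known to lie in an interval, and for 0.111... + 0.888... that interval always straddles 1.

```python
from fractions import Fraction

def sum_interval(digits_a, digits_b):
    """Interval containing a+b, given only n leading decimal digits of
    each operand (each operand may still grow by anything below 10**-n)."""
    n = len(digits_a)
    lo = Fraction(int("".join(map(str, digits_a)) or "0"), 10**n) \
       + Fraction(int("".join(map(str, digits_b)) or "0"), 10**n)
    hi = lo + 2 * Fraction(1, 10**n)
    return lo, hi

for n in (1, 5, 20):
    lo, hi = sum_interval([1] * n, [8] * n)
    print(n, lo < 1 < hi)  # True for every n: the first digit is never determined
```

However far you read, the interval contains values both below and above 1, so an adder driven only by the operands' digits can never commit to "0.9..." or "1.0...".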
For an explanation of continued fractions and an extended account of their merits see: Henry Baker's Continued Fractions Page
Now I have a problem with continued fractions: basically, they don't work. Addition is not effective over reals represented as continued fractions. Vuillemin acknowledges this in his paper, and then goes right ahead and uses them anyway. So he must have some trick up his sleeve, and I haven't yet looked closely enough to figure out what it is. (For example, he could use a continued fraction plus an error bound, or a pair of continued fractions; not that I think either of these is a good idea, but they probably could be made to work.)