In my own practice, this implicit conversion from a float to an integer in functions such as (+) and (-) mostly causes mistakes and is useful only in rare cases.
Perhaps it would be better to suppress the conversion (throw a type error) and force the programmer to convert explicitly?
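A rough sketch of the kind of mistake I mean, in Python (Python is only a stand-in here, and I am assuming the implicit conversion simply drops the fractional part; the real operator may round instead):

    def plus(*args):
        # stand-in for a (+) that implicitly converts float arguments to integers
        return sum(int(a) for a in args)

    print(plus(1.99, 2.99))   # -> 3 instead of 4.98, the cents silently disappear
    print(plus(10, 0.5))      # -> 10, the half is gone as well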
Then there would always be implicit conversion to float only, as in Perl/Python etc., and the (int ...) cast would be used only when writing library interfaces. I believe throwing errors would confuse people more, since all the other functions also convert between int and float as needed.
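For what it's worth, a tiny Python sketch of that model (Python already works this way; again it only stands in for the dialect discussed here):

    total = 2 + 0.5     # mixed arithmetic promotes the int operand to float -> 2.5
    index = int(total)  # explicit cast only where an integer is really required -> 2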
Thanks for the trick - now my financial calculations will lose cents less frequently :-)
I was simply thinking about the absence of such autoconversion for strings, and got the idea that integers relate to floats in the same way they relate to strings...
Just an idea... :-)