Write a function iavg() that accepts as arguments an array v[] of the widest
integer type supported by the machine, together with the array's size n, and
computes the integer average of the array's values with a loss of precision
of less than 1 over the entire range of possible parameter values.
In C or C++ the function prototype would be:
intmax_t iavg(const intmax_t v[], uintmax_t n);
The solution should use only integer arithmetic, without relying on any
explicit assumptions about internal number representation or the type of
machine arithmetic. Library functions may not be used.
Additional question:
How can the precision margin be narrowed to less than 0.5?
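One possible answer to the follow-up (again my own sketch, not a reference solution): keep the same quotient/remainder scheme, but round the final result to the nearest integer instead of truncating. The error then stays strictly below 0.5 everywhere except at exact half-ties, where any integer result is necessarily off by exactly 0.5. The function name iavg_round and the assumption 0 < n <= INTMAX_MAX are mine.

```c
#include <stdint.h>

/* Sketch: quotient/remainder averaging as before, but with a final
 * round-to-nearest step.  After the loop the exact average is
 * avg + rem/n with |rem| < n; rounding that fraction keeps the error
 * at or below 1/2 (exactly 1/2 only at unavoidable half-ties).
 * Assumes 0 < n <= INTMAX_MAX. */
intmax_t iavg_round(const intmax_t v[], uintmax_t n)
{
    intmax_t avg = 0, rem = 0;
    const intmax_t sn = (intmax_t)n;

    for (uintmax_t i = 0; i < n; i++) {
        avg += v[i] / sn;
        rem += v[i] % sn;            /* C99: remainder has the sign of v[i] */
        if (rem >= sn)  { rem -= sn; avg++; }
        if (rem <= -sn) { rem += sn; avg--; }
    }
    /* Round the leftover fraction rem/n to nearest.  The comparisons
     * avoid forming 2*rem, which could overflow:
     *   rem >= sn - rem   <=>  2*rem >= n   (fraction >= +1/2)
     *   rem <= -sn - rem  <=>  2*rem <= -n  (fraction <= -1/2) */
    if (rem > 0 && rem >= sn - rem)       avg++;
    else if (rem < 0 && rem <= -sn - rem) avg--;
    return avg;
}
```

Ties (fraction exactly +-1/2, possible only for even n) round away from zero here; the choice of tie direction does not affect the 0.5 bound.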