...

Using floor() and ceil() (in C++ those are std::floor and std::ceil from <cmath>) on the converted value divided by two, I can keep the original precision, just split across two values instead of one.
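A minimal C++ sketch of that split, assuming two 12-bit DACs whose outputs are summed with equal weight externally (the function name and bit widths are my hypothetical example, not anything from the actual project):

```cpp
#include <cstdint>
#include <utility>

// Split a 13-bit sample (range 0..8190, since 4095 + 4095 is the max sum)
// into two 12-bit DAC codes whose analog sum restores the extra bit:
// floor(v/2) + ceil(v/2) == v for any v.
std::pair<uint16_t, uint16_t> split_sample(uint16_t v13) {
    uint16_t lo = v13 / 2;        // floor(v / 2)
    uint16_t hi = (v13 + 1) / 2;  // ceil(v / 2)
    return {lo, hi};
}
```

With integer math the floor/ceil pair reduces to plain division and a +1, so no <cmath> call is actually needed at runtime.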

Yes, that is a way to get an extra bit. Theoretically.

The reason I'm going for this rather crude implementation of an audio output is to see how far the onboard capabilities can be pushed.

Hmm.. Considering that the DACs seem to be of the resistor-string type, their errors are mostly constant per input code (ignoring temperature and similar effects for now), so the calibrate-and-compensate route would be something for "how far .. can be pushed". The calibration is certainly not easy to do, but there really aren't many methods that give any effective help in practice, let alone easy ones.

Just thinking aloud here: The SoC has decent enough PWM features and a good enough built-in clock.

1. Create a heavily filtered PWM output (basically a very slow but well-defined DAC; note, be careful with the choice of filter capacitor(s)).
2. Use an external opamp to get the amplified difference between that PWM output and the corresponding normal DAC's output.
3. Sweep through all values, slowly, one by one, averaging lots of samples to reduce noise. Do the sweeps a few times. Since the PWM version should be much more linear (smaller deviations from a straight line), that opamp output will be a half-decent estimate of the DAC's error from that straight line.
4. Feed the amplified difference to the SoC's ADC. Those are also crappy as ADCs, but they only need to get about 4-6 MSBs right.
5. The collected ADC results need some math: find a best-fit straight line through them, then apply that as a gain and offset adjustment to the PWM input-to-count control (for a better match with the DAC).
6. Repeat the measurements with that better fit. The new results give the errors for the individual steps. Apply those as compensation to the input data before feeding it to the DAC.

Then repeat all that for the other DAC.

Then repeat all that at multiple chip temperatures (I hope the SoC has a temperature sensor). If the results are good enough at all relevant temperatures, one can stop here and be happy. Otherwise, linearly interpolate the compensation values between the measured temperature points.
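The interpolation between temperature points might look something like this (my hypothetical sketch; it assumes one compensation table per measured temperature, with the temperatures sorted ascending):

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Linearly interpolate the compensation for one DAC code between the two
// nearest measured chip temperatures; clamp outside the measured range.
// temps[k] is the k-th measured temperature, tables[k][code] the
// compensation value measured at that temperature.
double comp_at(double t, std::size_t code,
               const std::vector<double>& temps,
               const std::vector<std::vector<double>>& tables) {
    if (t <= temps.front()) return tables.front()[code];  // clamp below
    if (t >= temps.back())  return tables.back()[code];   // clamp above
    // First measured temperature strictly above t.
    auto it = std::upper_bound(temps.begin(), temps.end(), t);
    const std::size_t hi = it - temps.begin();
    const std::size_t lo = hi - 1;
    const double w = (t - temps[lo]) / (temps[hi] - temps[lo]);
    return (1.0 - w) * tables[lo][code] + w * tables[hi][code];
}
```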

Then recheck in a week or two. Maybe also in a few months, just in case the resistors keep drifting over time. If they do... I'd give up. Someone else might continue, checking whether the drift is slow and/or predictable enough...

This may need a bit of memory for the compensation lookup tables. It may be possible to store only some of the entries, giving a coarser correction table.

Once done, the difference amp and the path back to the ADC can be removed. Or left in place for possible recalibration once the DAC resistors and output buffer drift over time in all possible ways. Or kept just for the looks.

I purposefully left several details out; left as an exercise for the reader. No guarantees of any actual improvement. This does not reduce noise, and good bits below the noise floor are irrelevant in the intended use case (audio), though they could be of some use in other cases. In some of those other cases, using the PWM method directly might be the better choice.

(Edit: there is also a way to get some improvement using just the DAC itself, through external components, back to the ADC, but it would need more external components than the above and the math is a bit trickier...)