mainly just curious as to how the speed and bit width disparity between the Teensy and a chip like this actually plays out
How it actually plays out depends on how you write the software. Broadly, there are three ways your code can handle an I/O operation:
1: Wait for the I/O operation to complete, blocking interrupts while waiting.
2: Wait for the I/O operation to complete, allowing interrupts to run (the default).
3: Start the I/O operation and later detect when it has completed or ended in error.
With SPI and I2C, case #2 is by far the most common approach. While waiting, your program can't do any of the other work it normally does. But interrupts still run, so at least the interrupt-based parts of any other libraries and code you've written keep working while you wait.
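For example, this is roughly what case #2 looks like in practice (a minimal sketch, assuming a generic SPI device with chip select on pin 10; the 0x3B register address is made up). Each SPI.transfer() call waits for its byte to finish shifting, but interrupt-driven code keeps running during those waits:

```
#include <SPI.h>

const int chipSelectPin = 10;   // example chip select pin

void setup() {
  pinMode(chipSelectPin, OUTPUT);
  digitalWrite(chipSelectPin, HIGH);
  SPI.begin();
  Serial.begin(9600);
}

void loop() {
  SPI.beginTransaction(SPISettings(4000000, MSBFIRST, SPI_MODE0));
  digitalWrite(chipSelectPin, LOW);
  SPI.transfer(0x3B);                  // blocks until this byte is shifted out...
  uint8_t value = SPI.transfer(0x00);  // ...and this one too, but interrupts still fire
  digitalWrite(chipSelectPin, HIGH);
  SPI.endTransaction();

  Serial.println(value);   // your program's normal work resumes only after the waits
  delay(100);
}
```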
So if your program uses any of the 7 serial ports at a relatively high baud rate, the interrupts allow the serial driver code to move the incoming data from the serial port FIFOs into the larger buffer in memory. But your program won't be doing its normal work of checking whether new data has arrived in that buffer. If you don't wait too long, the buffer will simply have a little more data available for you to read when you're done waiting. But if you do wait too long, where "too long" depends on how quickly the incoming data can fill up that buffer, then you could lose incoming bytes.
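Here's a rough sketch of that buffering idea (Serial1 and 115200 baud are just example choices, and the delay() stands in for a blocking SPI or I2C wait). The serial receive interrupt keeps filling Serial1's buffer during the wait, and loop() just has to read the accumulated bytes out before that buffer fills up:

```
void setup() {
  Serial.begin(9600);      // USB serial, used here only for printing
  Serial1.begin(115200);   // hardware serial receiving data from some other device
}

void loop() {
  // stand-in for a blocking SPI/I2C wait; the Serial1 receive interrupt
  // keeps running and buffering incoming bytes during this time
  delay(20);

  // drain whatever arrived while we were "waiting"
  while (Serial1.available() > 0) {
    char c = Serial1.read();
    Serial.write(c);       // just echo to USB for this example
  }
}
```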
Almost everything using interrupts has a similar consequence: the interrupt code handles the very urgent matters, but your program must still do the other work to make things operate. For another example, if you're using the Audio library, the interrupts allow the audio system to keep working without glitching or disrupting your sound. But if the sound is a song or tune where your program changes the pitch and amplitude of synthesis oscillators or effects, waiting for an I/O operation to complete may mean your program makes the next change to the audio settings slightly later than it otherwise would.

While those sorts of tiny timing variations aren't audible to humans, again the net effect depends greatly on how you write your code. If you do the simplest thing, like a fixed delay between the audio setting changes, those tiny extra delays can cause a cumulative increase in the total time taken. Even that may not matter in most cases, but if your project depends on keeping pace with some other system (or maybe musicians or machines playing other parts in harmony), then small accumulating delays might matter. Even there, good use of millis() or elapsedMillis can compensate for such an issue, as in the sketch below... so how timing issues really play out all depends on the needs of your project and how you craft the code.
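As one illustration of that compensation idea (the 50 ms step and the updateSynthSettings() name are made up for the example), elapsedMillis measures against actual elapsed time, so a pass through loop() that arrives a little late because of a blocking I/O wait gets corrected on the next pass, instead of letting the error accumulate the way a fixed delay() would:

```
elapsedMillis sinceLastStep;            // elapsedMillis is built into the Teensy core
const unsigned int stepInterval = 50;   // ms between audio setting changes (example value)

void setup() {
}

void loop() {
  // ... blocking SPI/I2C or other work may delay us slightly here ...

  if (sinceLastStep >= stepInterval) {
    sinceLastStep -= stepInterval;      // stay locked to absolute time, no cumulative drift
    updateSynthSettings();              // hypothetical: adjust oscillator pitch/amplitude here
  }
}

void updateSynthSettings() {
  // placeholder for the actual audio parameter changes
}
```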