My guess is it is a code organization timing issue.
That is, if your code is set up something like:
Code:
SPI1.beginTransaction(...)
SPI1.transfer(???) // start the reading of a register
...
value = SPI1.transfer(???)
SPI1.endTransaction()
SPI2.beginTransaction(...)
SPI2.transfer...
...
Then your code is set up to serialize access to the two SPI buses. That is, your code is in lock step: it starts talking on SPI1, waits for each byte or word to transfer (both send and receive), and finishes before it starts doing anything on the SPI2 bus. Most SPI code is set up like this, and so is most library code.
But this is not a hardware constraint. Using the default SPI code you can do things like:
Code:
SPI1.beginTransaction(...)
SPI2.beginTransaction(...)
SPI1.transfer(x);
SPI2.transfer(x);
Again the code is still lockstepping access to the SPI buses, but now it alternates between them every other byte...
Again, this is not a hardware restriction, but more of an ease-of-use choice in the libraries. The main SPI bus (SPI0) on the T3.5 has read and write queues that are 4 items deep, whereas the SPI1 and SPI2 buses have a queue depth of only one item.
Again, I don't believe there is any library code that currently separates out access to the read queue versus the write queue. If you look at the SPI library, for example at SPI1.transfer(mybyte), you see:
Code:
inline static uint8_t transfer(uint8_t data) {
    SPI1_SR = SPI_SR_TCF;              // clear the Transfer Complete Flag (write-1-to-clear)
    SPI1_PUSHR = data;                 // push the byte onto the TX queue
    while (!(SPI1_SR & SPI_SR_TCF)) ;  // wait for the transfer to finish
    return SPI1_POPR;                  // pop the received byte
}
So it clears the TCF (Transfer Complete Flag), then pushes your byte onto the queue, waits for the transfer to complete, and then pops the returned value... So, for example, you could create a function that outputs on both queues...
But again, I am only guessing, as no real details were given.