Big chunks of the Teensy 4 reference manual are secret?!? Unlike Teensy 3.


westernsemico

Hi, apologies if "Technical Support & Questions" is the wrong forum to post this in. The answer I'm seeking is technical even if the question deals with business/legal issues.

I was dismayed to find out that a big chunk of the IMXRT1060 documentation has been moved out of the publicly available reference manual and inserted into a secret "Security Reference Manual". Past experience has shown that documents with this status are simply not available to mere mortals under any circumstances, and the web form business about "type your FAE's name here" is effectively a runaround to avoid being blunt about this fact.

This was not the case for Teensy 3.x's CPUs -- as far as I can tell the public manuals for those devices describe all the software-accessible stuff.

This really troubles me. I've gravitated back towards the microcontroller world as a result of the endless proliferation of secret "trusted processors" like the IME/PSP that seem to have become unavoidable in SoC/desktop/server computing. It was a rude awakening to find that secret coprocessors have made it into the microcontroller world too.

Paul, as a high-volume distributor you have access to this "Secret Reference Manual" (I read through this fascinating thread). Obviously you can't disclose any NXP-confidential information, and I'm not asking you to, but I was hoping you could shed light on two questions:

1. Can you explain NXP's reason for keeping the software API for all this security stuff secret? I can't imagine why they would want to do this. If I were considering designing this thing into some high-volume product that would scream to me "security through obscurity". The fact that so few people are able to review the APIs and do penetration testing against these features is a negative, not a positive. Any idea what NXP was thinking? For the DCP there is *maybe* some kind of export control issue, but that doesn't explain the secrecy in any of the other modules outside of the DCP. The only self-consistent explanations I can come up with sound pretty tinfoil-hatty, so I won't include them here. If they're worried about competitors with enough cash to pay for a maskset and make a clone (i.e. several million $) that kind of money is most certainly enough to get the SRM from one of the dozens of developers who received it: you can buy zero-day exploits for those prices and use them to hack the developers who have the SRM, or just bribe one (not Paul!) for a lot less money.

2. Can we be sure that, so long as we don't need HAB or use SNVS, DCP, the TRNG, OCOTP_CTRL, or BEE, and we blow the "complete JTAG disable" fuse, nothing in the secret Security Reference Manual is relevant to the security of products we design around Teensy4? If so, how can we be sure? If not, isn't that a problem?

I'd also be interested in reading any speculation you might have on why this SRM/CSU/IME/PSP/TrustZone stuff seems to be increasingly crammed down our throats, whether we want it or not. I know you don't have too much clout with NXP, but I would sure appreciate it if their products started coming with a "disable anything that isn't publicly documented" eFuse and a publicly documented commitment that blowing that eFuse will do that. Then the people who need these super-secret features can have them, and the rest of us can be sure that we aren't affected by stuff we're not allowed to know about -- or at least as sure of that as we are about any other assumption whose failure would count as a silicon bug committed by NXP.

Hope this post didn't come off as too much of a rant.
 
This kinda does come off as quite a rant. But yeah, I get your viewpoint. Any sort of secret documentation feels pretty bad. On a personal level, I really don't like it either.

Obviously I can't speak for NXP, but I can speculate a bit. My gut feeling is most of the reasoning boils down to the way important decisions are made within giant corporations. It's easy to get overly cynical about this stuff. I'm going to try to resist that temptation, and I hope you will too.

The reality of any company or organization ultimately comes down to imperfect humans who have to make decisions that balance difficult trade-offs. In a perfect world, every decision would be made with deep knowledge of all technical details, perfect clarity about all the future ramifications, and only the purest motivation to achieve the best possible outcome. Clearly that's not how the real world works. The reality within large corporations involves individuals who take on enormous responsibility, where the results impact the financial well-being of the company and the likely trajectory of their careers. Internally, corporations rarely resemble a well-oiled machine with every part functioning in perfect unison. Typically there are many different business units with conflicting goals, and sometimes even petty "office politics" squabbles. Most large corporations also have some sort of uniform employee performance evaluation process, which ideally motivates everyone to strive for their best, but in reality also tends to put everyone in a perpetual state of competition with each other.

It all boils down to risk aversion. Within a large corporation, it's very easy to make a decision in a direction that is perceived as lowering risk, and quite difficult to risk one's reputation and career on a decision that seems risky. This is just sheer speculation on my part, but my guess is the ultimate rationale behind the security reference manual secrecy comes down to perception of risk.

Again, it's easy to be cynical about corporations. But the reality is almost all humans only embrace risk when their stakes are perceived as low. Regardless of the technical issues, I believe it's important to keep in mind this is ultimately a human decision based more on perceptions than a purely technical matter.
 
Now, about the tech side....

First, let me assure you that if you were to sign an NDA with NXP and obtain the security manual, I'm confident you'd find the whole thing pretty underwhelming. I did. The vast majority of the pages are just copypasta of the same material in the public reference manual. Like so much of NXP's documentation, it's heavy on "what" (in rather disjointed form) and extremely light on explaining "how" and "why".

The 2 PDF documents which come with the CST utility and AN4581 provide much more info about HAB & the IMXRT security model than the security manual.

The other unpleasant reality of the security manual is that most of it ends up being a moot point: the parts which aren't copypasta mostly describe hardware registers manipulated by NXP's ROM at startup.

But sadly, there are a couple parts of the security manual which I believe NXP really should have put into the public reference manual. The Data Co-Processor (DCP) and True Random Number Generator (TRNG) are the main ones. Not having that info public is particularly silly, since their own SDK has essentially all that info buried inside its publicly available source code.

We already have support for using the TRNG in the Entropy library. Someday I hope to support using the DCP in the CryptoAccel library (which is also greatly in need of a more friendly Arduino-style API).

And as you can see on that security thread, I am (pretty slowly) working towards making the security features available. But to be honest, the slowness is all on me, not NXP. There are only so many coding hours in every day...
 
Having talked about rationale and tech, I'd like to address the philosophical points and directly answer your questions.

First, I want to politely disagree with the "stuff seems to be increasingly crammed down our throats, whether we want it or not" sentiment. These security features are no more compulsory than the UARTs, CAN ports, special timers or any other peripherals. If you don't need a serial port, then you just don't use the UART. If you're not connecting CAN bus, don't use that peripheral. Hardly any project uses *all* the timers! The situation is the same with all the security hardware. None of it is turned on by default. If you don't need it, you just leave it unused.

I know this stuff is frustrating, especially the NDA aspect, but please try to keep the security hardware in perspective. It really is just more peripherals on the chip.


Now, about specific questions.

I know you don't have too much clout with NXP, but I would sure appreciate it if their products started coming with a "disable anything that isn't publicly documented" eFuse and a publicly documented commitment that blowing that eFuse will do that.

I can assure you my pull with NXP is extremely limited, though I do have some contact with NXP people who appear to be decision makers. I have repeatedly begged them to include 4 DACs, or at the very least 2, since even before we released Teensy 4.0. As you can see from the upcoming IMXRT1170 specs, we're getting only 1 DAC. :(

Honestly, I seriously doubt I will ever have any influence on their design decisions. But if I do get that opportunity, DACs are still going to be at the very top of my wish list.

If the opportunity arises, I would ask for the TRNG and DCP documentation to be moved to the public reference manual. But ultimately what really matters are nice libraries to allow actual use of the hardware. I already put that work into the Entropy library, and eventually I'll do CryptoAccel.


The fact that so few people are able to review the APIs and do penetration testing against these features is a negative, not a positive. Any idea what NXP was thinking?

I believe your question revolves around practices for software or network security, whereas the reference manual is about hardware. NXP can't alter the hardware, at least not without changing the masks used to fabricate the silicon, which is extremely expensive and takes a long time, and can't address the many millions of already-fabricated chips soldered to PCBs and already flowing through supply chains.

But admittedly, part of the security model involves how their ROM initializes certain hardware. If you search enough, you'll find security researchers have found bugs in the ROM code NXP shipped in older IMX chips. So your point about 3rd party testing definitely has some validity.


....how can we be sure?

This is a large and ongoing question that pertains to all proprietary silicon, no matter how much documentation is published. Even if NXP publishes that security manual, your question would still apply if you don't necessarily trust that they have fully documented every circuit.

And to be realistic, we already know there are undocumented things. Virtually all chips have them. Often they were features that didn't work, so they're just removed from the documentation and if any bits in documented registers control them, often those bits are just documented as "reserved".

For one concrete example, some time ago we had a bug where soft reboot wasn't working. Watchdog timers, writing to the ARM AIRCR register, and every other documented way of soft rebooting all would crash. Eventually it turned out I had made a mistake in initializing the IOMUXC_GPR_GPR16 register. If you look at NXP's documentation, section 11.4.17 on page 366 of the public reference manual (Rev C, 12/2019), you'll see bit 21 is documented as 1. You might believe that means the reset value is 1, since that's the convention used throughout the rest of the manual. But it turns out that if you mistakenly set that bit to zero, soft reboot breaks. How that register really works, and why it matters for soft but not hard reboot, is still a mystery to me.

My point is that this question, "how can we be sure?", is a very fundamental issue which can never be 100% answered for proprietary silicon regardless of what documentation is published, and may not even be fully answerable for supposedly open-source silicon. This topic has been discussed at length in recent years by people who (at least I believe) are some of the very best in the world, and the consensus seems to be that it's a very hard problem.

Pragmatically, hardware comes down to trust. Personally, I believe NXP is trustworthy, even if their corporate policies aren't always pleasant. I don't like the secrecy for some documentation. It does feel pretty bad and I don't want to argue against that.

These 3 messages turned out really long, but hopefully they answer or at least address your questions, and maybe will be findable for others who have similar concerns.
 
Wow, Paul, thanks for taking the time to write such a detailed reply;
I appreciate it.

This kinda does come off as quite a rant.

Yeah, after sleeping on it I re-read what I wrote and it was kinda
cringey. Sorry about that.

This is just sheer speculation on my part, but my guess is the
ultimate rationale behind the security reference manual secrecy comes
down to perception of risk.

I suppose. But NXP are run by very smart people; I have to imagine
that whoever made this decision has a boss who understands that while
security-through-obscurity may work sometimes, it does so at the
expense of truly spectacular failures whenever it doesn't.

Another possibility occurred to me after writing my original post: all
the CSU/HAB stuff might have been put in there at the behest of a
single very large customer in the automotive sector. NXP has decided
that the documentation/support burden makes the cost-benefit of any
other customer using it negative, so they'd sort of prefer if we just
didn't use it.

Now, about the tech side....
The Data Co-Processor (DCP) and True Random Number Generator (TRNG)
are the main ones. Not having that info public is particularly silly,

I bet the DCP is an ITAR headache for them. The 1170 is supposed to
have gigabit Ethernet, so unless their AES128 core is really awful
(i.e. worse than 1 bit/Hz) it should be able to do line-rate
encryption. Gigabit line-rate encryption is sort of a red line where
export controls get a lot more annoying.

Now, about the tech side....
since their own SDK has essentially all that info buried inside its
publicly available source code.

Fascinating... :)

The situation is the same with all the security hardware. None of it
is turned on by default. If you don't need it, you just leave it
unused... It really is just more peripherals on the chip.

If that's true it resolves my concern. I really think a statement to
that effect ought to come from NXP, but hearing it from you is the
next best thing.

"Central Security Unit" sure sounded a lot more ominous than "optional
peripheral".

Even if NXP publishes that security manual, your question would still
apply if you don't necessarily trust that they have fully documented
every circuit.

My point is this question, "how can we be sure?", is a very
fundamental issue which can never be 100% answered by proprietary
silicon regardless of what documentation is published

Let me be very clear on this point. Yes, your silicon vendor can
always lie to your face and hide backdoors in your chips. So long as
doing so and getting found out means their reputation goes up in
flames, I'm satisfied. That's the most assurance we're ever going to
get, so we have to be satisfied with it. It's sort of like nuclear
weapons: we can't defend against them, so we settle for Mutually
Assured Destruction.

In the specific case of these NDA-documented features, all it takes is
a public commitment that "if you don't use X it can't be used against
you, and if this turns out to be wrong it's our flaw". More
succinctly, like you said: "just a peripheral".

For comparison, Intel quite obviously can't say this about the
Management Engine. I believe that ARM could make this kind of
statement about the TrustZone extensions (assuming your device lets
you provide your own EL3 loader like Rockchip does).

Pragmatically, hardware comes down to trust. Personally, I believe
NXP is trustworthy,

I believe their statements to be trustworthy, which is why I think it
is reasonable for us to expect a "just a peripheral" statement from
them.
 
In the late 1990s I went to hear a talk by Bob Pease, who is kind of a personal hero of mine. I probably never would have become reasonably proficient with analog circuitry if it were not for the many excellent app notes and tech articles Bob wrote. Of course his talk was actually an all-day seminar by National Semiconductor, where you got to hear ~15 really interesting minutes from Bob, followed by an hour and a half of ho-hum presentations on National's upcoming chips from uniformly boring marketing folks, with 15 minute breaks.

I'm sure Bob's no-nonsense and slightly irreverent style made National's corporate folks cringe. One of the things he said that day really stuck with me. According to Bob, all analog chips are designed around a contractual sales agreement with one huge customer. The IC engineers pour in tons of work to optimize the chip for exactly that customer's requirements. Then some time later, the poor marketing folks have to write a work of fiction (the datasheet) claiming the chip was designed for general-purpose use by everyone. Bob said, live on stage, that every datasheet is first and foremost a marketing document. Its only purpose is to convince you to buy the chip.

My point is you're probably never going to hear any official statement from a company like NXP that their security features are really just a set of peripherals. The days of analog legends like Bob have passed. Everything is a sales pitch. Figure 7-1 in the public reference manual on page 176 is a prime example: it really just communicates a bunch of features using arrows that make little or no sense. The words directly underneath that diagram are a perfect example of the sort of thing Bob was talking about!

All platforms built using this chip share a general need for security, though the specific
security requirements vary greatly from platform to platform. For example, portable
consumer devices need to protect a different type and cost of assets than automotive or
industrial platforms. Each market must be protected against different kinds of attacks.
The platform designers need an appropriate set of counter measures to meet the security
needs of their specific platform.

Except for HAB and JTAG, it really is just a collection of peripherals. HAB is their software in ROM which checks a digital signature you append to your code, and can initialize those various security peripherals for you. JTAG is pretty much everything you expect from JTAG, with an optional mode to require a password. Everything else is just a peripheral that never does anything unless you enable it. Even HAB doesn't actually do anything other than log (in RAM) the results of the signature check unless you burn a fuse to enable it.

Despite the CSU's ominous-sounding name, it really is just what they say on page 183. But there too, it's a sales pitch, with terms like "enabling the peripherals to be separated into distinct security domains". The reality is a fairly simple peripheral, and it's up to you to write the operating system kernel or RTOS which implements distinct security domains, using that peripheral to do the low-level work of blocking non-kernel accesses to specific areas.

On Teensy, we always leave the ARM core in supervisor mode because the programming model is "bare metal". But if you *really* wanted to write some sort of operating system, where a kernel runs in supervisor mode and normal programs run in user mode and call a kernel API to access peripherals, you certainly could. And you could use the MPU built into the ARM core (which is publicly documented on ARM's website and in Joseph Yiu's "Definitive Guide..." books) to keep user-level code from accessing peripherals. But ARM's MPU offers only 16 configurable regions based on address ranges, whereas the CSU is designed for finer control over which peripherals and buses non-kernel code may access. The CSU also lets you lock settings (documented, or at least described, on page 183), whereas ARM's MPU is always reprogrammable by kernel-level code.

This security stuff really is just optional peripherals, and if you get the security manual, I can assure you the finer details are pretty underwhelming. Then again, if you're designing a tamper-resistant product, some of those peripherals, like the "Zeroizable Secret Key" and "Power Glitch Detectors" mentioned in figure 7-1 on page 176, could be really useful, and would probably let you build such a product more securely than trying to implement that sort of thing only in software. And sadly, to get access to the register-level detail for those features, you do need an NDA with NXP.
 
LOL,

Bob Pease, that's a name I haven't heard for a long time.

Always enjoyed his stories in Electronic Design.

He had a gift for making it all understandable, for sure.

Regards,
 
"I was dismayed to find out that a big chunk of the IMXRT1060 documentation has been moved out of the publicly available reference manual and inserted into a secret "Security Reference Manual". Past experience has shown that documents with this status are simply not available to mere mortals under any circumstances, and the web form business about "type your FAE's name here" is effectively a runaround to avoid being blunt about this fact."

NXP may not have a choice. In many cases the use of encryption software and hardware is controlled by the US government export rules.

NXP has an obligation to keep track of whom they divulge the information to.

Kevin
 