Hello List,
In today's Mumble session, we began to deal with a new topic: since one of our aims is to make semiconductor manufacturing transparent, our products may end up in critical systems (infrastructure, government, important NGOs, ...). Therefore the security, trustworthiness and traceability of our devices are important. The topic grew out of a thread in our Matrix room, which I copy below to give some context:
leviathan: "Should however not be such a problem [SiFive receiving huge cash from Intel, Intel ME allegedly used by NSA as a backdoor and proven as severe security vulnerability] with our project because the process as well as the node are public on GitHub ... Free silicon is basically impossible to bug"
eegerferenc: Unfortunately, it is quite possible. The chip design on GitHub is trusted, but it still needs to be realized by a foundry. Possible vectors: replacing the GDS while it is being submitted, hacking into the foundry's server that stores it, hacking into the mask-writing machine, replacing the mask in transit between the mask supplier and the foundry, replacing the chip in transit while it is being delivered to the customer (the NSA is known to do this), physically infiltrating the mask supplier or the fab and replacing the chip/mask, coercing the fab operator or one of its employees to cooperate and keep his mouth shut, ... the list is endless. Publishing the tech and HDL on the net even makes it easier to produce tampered clones. Not to mention that up to now, there is no procedure for verifying the authenticity of a deep-submicron LSI at the gate level in a non-destructive way. Maybe a solution is to split security-relevant processing across multiple dies and fabricate them in countries that are known never to cooperate.
leviathan: I was thinking of having an internal diagnosis core in our chips which checks for possible attack vectors, in combination with Clifford Wolf's formal verification tool and OpenCV for verifying that the wires are all there, where they're supposed to be. This way, one would only have to open up some samples from a shipment from a fab
eegerferenc: "having an internal diagnosis core in our chips": sounds a bit ME-ish. Also, it transforms the problem rather than eliminating it: the security core can itself be targeted, and verifying it needs... recursion. "Clifford Wolf's formal verification tool": I cannot comment on it as I have not yet seen it. Can you please provide some details? "OpenCV for verifying that the wires are all there": it *sounds* okay for the 1um node. However, when we begin to make chips really capable of carrying out sensitive communication and processing, e.g. smartphone or smartwatch or tablet SoCs, we'll need to go below 200nm to meet the speed and power requirements. At that level, CV verification requires at least an electron microscope (not a typical household item, even for a hacker...). In addition, there are already methods in the wild that use unorthodox doping schemes to optically conceal the function of a circuit as an anti-reverse-engineering measure (https://eprint.iacr.org/2014/508.pdf), and these can be exploited to insert a trojan and even to trick the trojan-detection mechanism (https://sharps.org/wp-content/uploads/BECKER-CHES.pdf). Since detecting these requires destructive tests that can be done only with very advanced (and expensive) equipment, we arrive at the paradox "the first step to trusting no-one is to trust someone". Also, "one would only have to open up some samples from a shipment from a fab" has two problems. First, "shipment" suggests a situation where the chip first goes B2B to a distributor, where the verification is done, then B2C to the end-user, during which it is subject to replacement (and of course, the user needs to trust the distributor). Second, "open up some samples" for destructive analysis cannot be a 100% screening. In organizational environments (government, critical infrastructure company, NGO), it is normal to have a no-security internal network behind a strong firewall (or even with no outgoing link at all).
The customer purchases, for example, 100 units; we ship 105, 5 are decapped, and then 100 are delivered. And if only one of the 105 units was tampered with prior to the distributor's sampling and it slipped through, game over. Sorry for being the devil's advocate, but I think if one wants to be secure, one shall not stop asking stupid what-if questions. The worst enemy of security is believing one is secure.
Proceedings of the Mumble session:
- The most critical point of the supply chain is the delivery of the LSI from our foundry to the end-user, because of the risk of replacement in transit.
- Suggestion: put some sort of "seal" on our packaging (label, etc.) --> discarded; even the most sophisticated holograms can be counterfeited easily by a foreign intelligence agency
- Suggestion: deliver the product directly ourselves, or have the customer come for it --> OK for "high-security clients" (gov't, critical infrastructure, NGOs), but not scalable for privacy-demanding end-users --> 100% assurance (direct delivery) for high-sec clients, "reasonable but not less" assurance for everyone else --> reasonably unclonable identification of our chips is necessary, so customers can see whether the physical chip they receive is indeed what we shipped and not a lookalike copy (use everything we can and have, but don't reinvent the wheel). Rationale: a less-than-100%-but-still-very-hard security mechanism may be worth subverting if it gives access to a whole government or giant company, but not if it gives access to a single individual only. The purpose is not to make faking the chips we send to private customers impossible, because that is impossible; it is to make it hard and expensive enough to be impractical for the "target every single person one-by-one" use case.
- Suggestion: establish a shared secret with the client during the order, then program it into a non-volatile register on the chip in such a way that reading it out once destroys it, like "opening" an electronic "seal" --> discarded; an attacker can read it out and then program the obtained value into a fake copy
- Suggestion: measure some hardly controllable electrical parameter as a physically unclonable function (PUF) --> discarded; since a functioning chip needs its electrical parameters within a narrow range, this yields only limited entropy
- Suggestion: have an on-chip crypto engine capable of RSA key generation and signing, along with a TRNG and a probing-resistant non-volatile register. During fab end-of-line (EOL) test, the engine uses the TRNG to derive a keypair and stores it in the register. After EOL but before delivery, we read out the public key and send it to the customer via an authenticated (but not necessarily secret) channel. Upon delivery, the customer presents a random challenge to the crypto engine, which returns it signed by the private key; the customer then verifies the signature. --> The register must be constructed to be reasonably resistant against de-processing and microprobing attacks (e.g. using vertical flash, a booby-trapped wire mesh, differential logic, etc., but not, for example, phosphorus coating :-) ). Also, the crypto engine must be designed very carefully not to leak the private key via side channels (power draw, emissions, etc.).
- Future plan (help wanted): involve some (preferably several) people with deep knowledge of HW security to co-operate on the subject and also to have it reviewed.
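As a side note, the discarded "electronic seal" idea is easy to model, and the model makes the attack obvious: whoever reads the register first, legitimate or not, obtains the secret and can program it into a clone. A toy sketch (the class and names are illustrative only, not a real register design):

```python
class SealRegister:
    """Toy model of a read-once non-volatile register: the first read
    returns the stored secret and destroys it; later reads return None."""

    def __init__(self, secret: bytes):
        self._secret = secret

    def read(self):
        value, self._secret = self._secret, None  # destructive read-out
        return value


# Intended flow: the customer reads the seal and compares it to the
# shared secret agreed upon during the order.
chip = SealRegister(b"shared-secret-from-order")
assert chip.read() == b"shared-secret-from-order"  # seal "opened"
assert chip.read() is None                         # cannot be read twice

# The attack: an interceptor reads the register first, then programs the
# obtained value into a counterfeit chip -- the customer cannot tell.
intercepted = SealRegister(b"shared-secret-from-order")
stolen = intercepted.read()
fake = SealRegister(stolen)                        # cloned "seal"
assert fake.read() == b"shared-secret-from-order"  # passes verification
```

The read-once property protects against a *second* read, but the adversary we worry about is the *first* reader in transit, so the scheme fails exactly where it is needed.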
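The challenge-response scheme from the crypto-engine suggestion can be sketched end-to-end in a few lines. This is a toy model only: it uses textbook RSA with small fixed primes instead of a TRNG-derived key and has no padding scheme, so it illustrates the protocol flow, not the actual crypto engine.

```python
import hashlib
import secrets

# --- On-chip, at fab EOL: derive a keypair ---------------------------
# Toy parameters: two small well-known primes stand in for TRNG output.
p, q = 104729, 1299709
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept in the
                                   # probing-resistant register

def chip_sign(challenge: bytes) -> int:
    """Chip side: hash the challenge and sign the digest (textbook RSA)."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def customer_verify(challenge: bytes, signature: int) -> bool:
    """Customer side: check the signature against the public key (n, e)
    received earlier over the authenticated channel."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

# --- Upon delivery: the customer presents a fresh random challenge ---
challenge = secrets.token_bytes(32)
assert customer_verify(challenge, chip_sign(challenge))     # genuine chip
assert not customer_verify(b"stale", chip_sign(challenge))  # replay fails
```

The freshness of the challenge is what defeats replay: a counterfeit chip that merely recorded an old signature cannot answer a new challenge without the private key, which is why the register's probing resistance carries the whole scheme.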
Regards, Ferenc