Hello List!
Just a small reminder for our next Mumble session this Sunday, April 14, at 9 p.m. Hong Kong Time (13:00 UTC).
Please join us as usual on our Mumble server murmur.libresilicon.com, port 64738; the channel is IC.
On the agenda are (at least) these topics:
- test wafer status
- popcorn
- other stuff around
Happy to hear from you! Hagen
Hello List,
In today's Mumble session, we began to deal with a new topic: as one of our aims is to make semiconductor manufacturing transparent, our products may end up in critical systems (infrastructure, government, important NGOs, ...). Therefore the security, trustworthiness and traceability of our devices is important. The topic grew out of a thread in our Matrix room, which I copy below to give some context:
leviathan: "Should however not be such a problem [SiFive receiving huge cash from Intel, Intel ME allegedly used by NSA as a backdoor and proven as severe security vulnerability] with our project because the process as well as the node are public on GitHub ... Free silicon is basically impossible to bug"
eegerferenc: Unfortunately, it is quite possible. The chip design on Github is trusted, but then it needs to be realized by a foundry anyway. Possible vectors: replacing the GDS while it is being submitted, hacking into the foundry's server that stores it, hacking into the mask-writing machine, replacing the mask in transit between the mask supplier and the foundry, replacing the chip in transit while it is being delivered to the customer (the NSA is known to do this), physically infiltrating the mask supplier or the fab and replacing the chip/mask, coercing the fab operator or one of its employees to cooperate and keep his mouth shut, ... the list is endless. Publishing the tech and HDL on the net even makes it easier to make tampered clones. Not to mention that, up to now, there is no procedure for verifying the authenticity of a deep-submicron LSI at the gate level in a non-destructive way. Maybe a solution is to split security-relevant processing across multiple dies and fabricate them in countries that are known to never cooperate.
leviathan: I was thinking of having an internal diagnosis core in our chips which checks for possible attack vectors, in combination with Clifford Wolf's formal verification tools and OpenCV for verifying that the wires are all there, where they're supposed to be. This way, one would only have to open up some samples from a shipment from a fab.
eegerferenc: "having an internal diagnosis core in our chips": sounds a bit ME-ish. Also, it transforms the problem, not eliminates it: the security core also can be targeted, verifying it needs... recursion. "Clifford Wolfs formal verification tool": I cannot comment on it as I did not yet seen it. Can you please provide some details? "OpenCV for verifying that the wires are all there": it *sounds* okay for the 1um node. However, when we begin to make chips really capable of carrying out sensitive communication and processing, e.g. smartphone or smartwatch or tablet SoCs, then we'll need to go down below 200nm to meet the speed and power requirements. At that level, CV verification requires at least an electron microscope (not a typical household item even for a hacker...). In addition, there are already in the wild some methods that use unorthodox doping schemes to optically conceal the function of the circuit used as anti-reverse-engineering measure (https://eprint.iacr.org/2014/508.pdf), that can be exploited to insert a trojan and even to trick the trojan-detection mechanism ( https://sharps.org/wp-content/uploads/BECKER-CHES.pdf). Since detection of these requires destructive tests that can be done only using very advanced (and expensive) equipment, we arrive at the paradox "the first step to trust no-one is to trust someone". Also, "one would only have to open up some samples from a shipment from a fab" has two problems: at first, the "shipment" suggests a situation when the chip first goes B2B to a distributor, where the verification is done, then B2C to the end-user, during that it is subject to replacement (and of course, the user needs to trust the distributor). Second, "open up some samples" for destructive analysis cannot be a 100% screening. In organizational environments (government, critical infrastructure company, NGO), it is normal to have a no-security internal network with a strong firewall (or even no outgoing link at all). The customer purchases for example 100 units, we ship 105, 5 are decapped and then 100 is delivered. And if only one of the 105 units was tampered prior the distributor's sampling and it slipped trough, game over. Sorry for being the devil's advocate, but I think if one wants to be secure, then one shall not stop asking stupid what-if questions. The worst enemy of security is when one believes one is secure.
Proceedings of the Mumble session:
- The most critical point of the supply chain is the delivery of the LS IC from our foundry to the end-user because of the risk of a replacement-in-transit.
- It is suggested to have some sort of "seal" on our packaging (label, etc.) --> discarded; even the most sophisticated holograms can easily be counterfeited by a foreign intelligence agency
- Deliver the product directly by ourselves, or have the customer come for it --> OK for "high-security clients" (gov't, crit. infrastructure, NGO), but not scalable for privacy-demanding end-users --> 100% assurance (direct delivery) for high-sec, "reasonable but not less" assurance for everyone else --> a reasonably unclonable identification of our chips is necessary, to allow customers to see whether the physical chip they get is indeed what we shipped and not a lookalike copy (use everything we can and have, but don't reinvent the wheel). Rationale: a less-than-100%-but-still-very-hard security mechanism may be worth subverting if it gives access to a whole government or giga-company, but not if it gives access to a single individual only. The purpose is not to make the faking of chips sent to private customers impossible, because that is impossible; just to make it hard and expensive enough to be impractical for the "target every single person one-by-one" use case.
- Suggestion: establish a shared secret with the client during the order, then have it programmed into a non-volatile register on the chip in such a way that reading it out once destroys it, like "opening" an electronic "seal" --> an attacker can read it and then program the obtained value into a fake copy
- Suggestion: measure some hardly controllable electrical parameter as a physically unclonable function --> discarded; since a functioning chip needs its electrical parameters within a narrow range, this gives only limited entropy
- Suggestion: have on-chip a crypto engine capable of RSA key generation and signing, along with a TRNG and a probing-resistant non-volatile register. At fab end-of-line (EOL), the engine uses the TRNG to derive a keypair and stores it in the register. After EOL but before delivery, we read out the pubkey and send it to the customer via an authenticated (but not necessarily secret) channel. Upon delivery, the customer presents a random challenge to the crypto engine, which returns it signed by the private key, and the customer then verifies the signature (a minimal sketch of this flow follows the list). --> The register must be constructed in a way that is reasonably resistant against de-processing and microprobing attacks (e.g., using vertical flash, a booby-trapped wire mesh, differential logic, etc., but not, for example, phosphorus coating :-) ). Also, the crypto engine must be designed very carefully not to leak the private key via side channels (power draw, emissions, etc.).
- Future plan (help wanted): involve some (preferably several) people with deep knowledge of HW security, to cooperate on the subject and to have it reviewed.
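To make the challenge-response suggestion above concrete, here is a minimal sketch in Python, with the `cryptography` library standing in for the on-chip engine. The role split and the names are illustrative assumptions, not a real chip interface.

```python
# Minimal sketch of the challenge-response "electronic seal" flow from the
# suggestion above. The cryptography library stands in for the on-chip
# engine; names and role boundaries are illustrative assumptions.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# At fab end-of-line: the engine derives a keypair from its TRNG and keeps
# the private half in the probing-resistant non-volatile register.
chip_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Before delivery: only the public key is read out and sent to the customer
# over an authenticated (but not necessarily secret) channel.
customer_pubkey = chip_private_key.public_key()

# On delivery: the customer presents a fresh random challenge ...
challenge = os.urandom(32)

# ... the chip returns it signed by the private key that never left the die ...
signature = chip_private_key.sign(challenge, PSS, hashes.SHA256())

# ... and the customer verifies against the pubkey received earlier.
# verify() raises InvalidSignature for a lookalike copy, which cannot know
# the private key even after observing old challenge/response pairs.
customer_pubkey.verify(signature, challenge, PSS, hashes.SHA256())
print("seal verified: this chip holds the expected private key")
```

The security of the scheme reduces to the private key being born on-die and never leaving it, which is exactly why the register and the engine must resist microprobing and side-channel extraction as noted above.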
Regards, Ferenc
On Sun, Apr 14, 2019 at 9:17 PM Éger Ferenc eegerferenc@gmail.com wrote:
Hello List,
In today's Mumble session, we began to deal with a new topic: as one of our aims is to make semiconductor manufacturing transparent, our products may end up in critical systems (infrastructure, government, important NGOs, ...). [...]
Sorry that I didn't join this Mumble session. I was brainstorming how to successfully apply for an infosec researcher position, to be able to do some disruptive innovation for which 1. your bespoke, large-feature-size IC process may be crucial, and 2. a lot of the security protocols and procedures you describe may be unnecessary.
We apologize for the inconvenience
tatzelbrumm
Hi, so I missed a very interesting discussion. You should announce in advance that you will talk about the conspiracy theories associated with Intel's ME and USB debug feature. A serious HW trojan will always involve additional HW: side channels generated by process variations are interesting for leaking, but one would need a receiving device to get the data out of a side channel. Intel ME has a NIC and a switch in the PHY to leak out data. So it could also leak out non-Ethernet data, provided the stream is Manchester (or similarly) coded so that it makes it over the magnetics of the NIC the ME lives in (early versions) or has control over.
So an attacker will need a transmit (leak out) channel and a receive (what to leak, over a limited bandwidth) channel.
How to implement both by only changing diffusion and other process parameters? I guess this is paranoia, and one can look at optical inspection and pattern comparison first.
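To illustrate the coding constraint mentioned above: the NIC's isolation transformer (the "magnetics") blocks DC, so a covert stream would have to use a DC-free line code such as Manchester. A toy Python sketch, using one common bit-to-level convention (illustrative only):

```python
# Toy sketch of Manchester coding, the kind of DC-free line code a covert
# stream would need to survive the NIC's isolation transformer. Convention
# here: 0 -> (high, low), 1 -> (low, high); purely illustrative.
def manchester_encode(bits):
    """Each bit becomes two half-bit levels with a guaranteed mid-bit
    transition, so the signal carries no DC component."""
    half_bits = {0: (1, 0), 1: (0, 1)}
    out = []
    for b in bits:
        out.extend(half_bits[b])
    return out

def manchester_decode(levels):
    """Invert the mapping; an invalid pair (no transition) raises KeyError."""
    pairs = {(1, 0): 0, (0, 1): 1}
    return [pairs[(levels[i], levels[i + 1])] for i in range(0, len(levels), 2)]

payload = [1, 0, 1, 1, 0, 0, 1, 0]
assert manchester_decode(manchester_encode(payload)) == payload
```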
Cheers
Ludwig
On Mon, Apr 15, 2019 at 7:14 AM ludwig jaffe ludwig.jaffe@gmail.com wrote:
Intel ME has a NIC and a switch in the PHY to leak out data. So it could also leak out non-Ethernet data, provided the stream is Manchester (or similarly) coded so that it makes it over the magnetics of the NIC the ME lives in (early versions) or has control over.
it's not outside the realm of possibility at all.
i have... empirical evidence which tends to suggest that some sort of low-bandwidth power / data signalling can result in activation of embedded spying backdoor co-processors within intel processors (whether it be entirely hardware-based or whether it's part of the spying firmware i have insufficient information to determine).
being based on power analysis by way of pretty much anything, such activation may occur through a huge variety of channels: remote network access, WIFI data traffic streams, *INTERNAL* (non-networked) scenarios where just opening a file would cause data to be sequentially loaded from disk, causing certain patterns of power usage to occur that are monitored by the spying backdoor co-processor...
it's an extremely ingenious method, as it doesn't rely on actual physical compromise AND does NOT require execution of any specific application, or in fact any application *at all*
even just being near enough to broadcast bogus WIFI packets would be sufficient to trigger IRQs on the data bus of the machine to be compromised (even if the packets were never actually processed, and even if the machine were not even running an OS at all).
and if the machine is not sufficiently well EMI-shielded, it may even be possible to create the required "spikes" down the power supply, or via directed radio bursts.
l.
Hi,
I think that we cannot do much about the security of the delivery process. One thing we can work on that should help security, though, is what the software world calls "reproducible builds".
From my point of view, the process from RTL to GDS2 should be fully reproducible: given a specified version of the Verilog design files, a specified version of the LibreSilicon PDK and a specified version of the toolchain, a deterministic GDS2 file should pop out of the process. So two different people should be able to take the same source files, compile them, send them to two different fabs, get many chips produced, then take a few samples apart, and those chips should look similar.
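As a minimal sketch of what the check would look like in practice (the file paths and two-party setup are illustrative assumptions): both parties build from the same pinned sources and compare digests of the resulting GDS2 streams.

```python
# Minimal sketch of the reproducibility check: two independent parties build
# from the same pinned sources/PDK/toolchain and compare GDS2 digests.
# File paths are illustrative assumptions.
import hashlib

def gds_digest(path, chunk_size=1 << 20):
    """SHA-256 over the raw GDS2 stream; any nondeterminism in the flow
    (timestamps, random seeds, thread scheduling) shows up as a mismatch."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if gds_digest("party_a/chip.gds") == gds_digest("party_b/chip.gds"):
    print("bit-identical builds: the layout is reproducible")
else:
    print("divergence: check toolchain versions, seeds and embedded timestamps")
```

One caveat worth noting: the GDS2 format embeds creation/modification timestamps in its library and structure headers, so a deterministic flow also has to pin or normalize those before comparing.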
I think reproducible builds from RTL to GDS2 should be achievable, although it might take some rethinking of the usual processes, e.g. DRC.
Several months ago, I thought about reproducible builds for qflow, specifically Graywolf, and I ran into the problem that Graywolf had poor single-threaded performance for huge projects, so parallelisation was the obvious solution, but I did not have a good idea how simulated annealing could be done in a reproducible way in parallel.
So my conclusion back then was that if we cannot find a way to do it reproducibly in parallel, we would need at least a reproducible way that is single-threaded, so that the user could choose between fast but non-reproducible and slow but reproducible.
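For what it's worth, one conceivable way to get determinism in parallel (an illustration of the general idea only, not of how Graywolf works) is to fix the partitioning, give each worker a seed derived from its region rather than from time, and merge in a fixed order:

```python
# Sketch: reproducible parallel "annealing" via fixed partitioning, per-region
# deterministic seeds, and a fixed merge order. The seeded shuffle is a
# stand-in for real annealing moves; this illustrates the determinism idea
# only, not Graywolf's actual algorithm.
import random
from concurrent.futures import ProcessPoolExecutor

def anneal_region(args):
    region_id, cells = args
    rng = random.Random(12345 + region_id)  # seed from region id, never from time
    placed = list(cells)
    rng.shuffle(placed)                     # placeholder for annealing moves
    return region_id, placed

def reproducible_place(regions):
    """regions: {region_id: [cell names]} with a fixed, input-defined partition."""
    work = sorted(regions.items())          # fixed task order
    with ProcessPoolExecutor() as pool:
        results = sorted(pool.map(anneal_region, work))  # fixed merge order
    return dict(results)

if __name__ == "__main__":
    regions = {0: ["a", "b", "c"], 1: ["d", "e"]}
    # identical output regardless of how the OS schedules the workers:
    assert reproducible_place(regions) == reproducible_place(regions)
```

The price is quality: confining moves to fixed regions limits what the annealer can explore, so the fast path may place worse than the slow one.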
I think that a libre PDK and reproducible builds are the best we can deliver for auditable and trustworthy chips.
Best regards, Philipp Gühring
Hello Philipp,
Several months ago, I thought about reproducible builds for qflow, specifically Graywolf, and I ran into the problem that Graywolf had poor single-threaded performance for huge projects, so parallelisation was the obvious solution, but I did not have a good idea how simulated annealing could be done in a reproducible way in parallel.
The way to go is not to use simulated annealing (e.g., Graywolf) but to use analytic placement (e.g., RePlAce from abk-openroad (UCSD) on GitHub).
---Tim
R. Timothy Edwards (Tim)  |  email: tim@opencircuitdesign.com
Open Circuit Design       |  web:   http://opencircuitdesign.com
19601 Jerusalem Road      |  phone: (240) 489-3255
Poolesville, MD 20837     |  cell:  (408) 828-8212
On Mon, Apr 15, 2019 at 11:50 AM Philipp Gühring pg@futureware.at wrote:
So my conclusion back then was that if we cannot find a way to do it reproducibly in parallel, we would need at least a reproducible way that is single-threaded, so that the user could choose between fast but non-reproducible and slow but reproducible.
the design of alliance / coriolis2 lends itself to this approach, by being mostly programmable. configuration files are actually python code snippets, written with a syntax that is so basic that it might as well be a .txt file.
by working recursively to create progressively larger cells - some of which may be programmatically generated (yes, using python to specify placement as well as routing), some of which may call on the assistance of the auto-router, some of which may have involved manual placement - it becomes possible to achieve the reproducibility that you describe *and* also to avoid heavy compromise situations, because, overall, there are so many sub-tasks that even if some of them need to be single-processor tasks, the entire design does not.
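a purely hypothetical sketch of that recursive style (none of these classes or functions exist in coriolis2; the point is only that placement becomes ordinary, deterministic code):

```python
# hypothetical sketch of recursive, scripted placement. none of these classes
# exist in coriolis2; this only illustrates the structure of such a script.
class Cell:
    def __init__(self, name):
        self.name = name
        self.instances = []          # (child_cell, x, y) placements

    def place(self, child, x, y):
        """scripted placement: same inputs always give the same layout."""
        self.instances.append((child, x, y))
        return self

def build_bitslice(i):
    """leaf level: fully scripted, trivially reproducible."""
    return Cell(f"bitslice_{i}")

def build_register(width):
    """mid level: composes leaves on a fixed pitch, still deterministic."""
    reg = Cell(f"reg{width}")
    for i in range(width):
        reg.place(build_bitslice(i), x=0, y=i * 10)
    return reg

# top level: only some cells would ever need the auto-router, and each such
# task is small, so a single-threaded reproducible pass per cell stays cheap.
top = Cell("top").place(build_register(8), x=0, y=0)
```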
l.
On Sun, Apr 14, 2019 at 8:17 PM Éger Ferenc eegerferenc@gmail.com wrote:
eegerferenc: Unfortunately, it is quite possible. The chip design on Github is trusted,
by whom? do not make the assumption that just because it is published, it *WILL* be trusted. that is NOT your decision to make.
the correct language to use is, "the chip design on github is published and made public SO THAT OTHER PEOPLE MAY AUDIT IT".
you are NOT the auditor, you are NOT directly responsible for the end-users' decision-making process, and trustworthiness is NOT an automatic implication of it being "dropped onto github".
aside from which, github is managed and run by microsoft... a third-party company, where compromise of many types is a real possibility at any time.
the best that can be done is to provide a web-of-trust set of signatures on the source, i.e. to follow debian package management and use debian distribution infrastructure.
in full.
by doing so, there will be an audit trail that is independent of network compromises, infrastructure attacks, and much more besides.
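as a minimal sketch of the basic building block involved (the file names here are hypothetical; gpg --verify is the standard invocation): checking a detached signature on a release against the local web-of-trust keyring.

```python
# minimal sketch: verify a detached GPG signature on a source release against
# the local (web-of-trust) keyring. file names are hypothetical.
import subprocess

def verify_release(tarball, detached_sig):
    """return True iff detached_sig is a valid signature over tarball by a
    key already present (and trusted) in the local keyring."""
    result = subprocess.run(
        ["gpg", "--verify", detached_sig, tarball],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if verify_release("libresilicon-pdk-1.0.tar.gz",
                  "libresilicon-pdk-1.0.tar.gz.asc"):
    print("signature valid: tree matches what the signer released")
else:
    print("verification FAILED: do not build from this tree")
```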
but then it needs to be realized by a foundry anyway.
it is important to include "consequences of detection" as part of the risk assessment.
question: what are the consequences for a foundry if they were discovered to be involved in the production of compromised wafers?
what do you think would happen to them?
start with their reputation.
what would happen to the reputation of a foundry where it became public knowledge that they'd manufactured something *other* than *exactly* what the customer asked for?
how many customers do you think they would have in the future, after such became public knowledge?
how much money would they stand to lose if that occurred?
do you think they would be in business for very long?
what would the consequences be, say, for the Taiwanese economy, if TSMC were discovered to be manufacturing compromised wafers?
so by utilising this logic, we may reasonably conclude that the smaller the foundry, the riskier it is to use them.
why?
the larger the foundry, the more damaging the consequences of a compromise, and therefore the higher the chances that they will have better security measures in place.
unfortunately, though, the secrecy involved in foundries means that there is no *guarantee* that they will actually have *any* security measures in place.
negotiating access to the foundry in order to double-check their security measures will be extremely delicate.
it's extremely complex, basically.
India, Russia, China and the U.S. all solve this by having their own foundries.
India has a 180nm fab (they're working on an upgrade). they are presently using it to design and manufacture India's world-first 64-bit RISC-V SoC, which will be used in things like their Fast Breeder Nuclear Reactor Programme.
until that is completed they will stick with the 45+-year-old Motorola 68000.
l.
Hello All,
" what are the consequences for a foundry if they were discovered to be involved in the production of compromised wafers?" I think it depends on how, where and when it is discovered. If it happens by us before shipment, we can handle it. If it happens at the customer right after the shipment, that may or may not become public, and if it goes public, it is a good indication that we have effective detection measures and they work. But if some white-hat hacker discovers it independently in the end-product some years and a bazillion of already delivered consumer devices later, that can easily result in a PR disaster. "Why don't we have any detection measures? If we had, why the incident was known from the beginning? Hide the facts? Who pays you? etc." Actually, the larger the foundry is, the less secure is it, as it has more pressure to use "corporate confidentiality" to suppress news that may damage its reputation (and also a more attractive target due to the widespread use of its products).
Regarding the scan chain: hiding a malicious function is actually easy. In a design with a scan chain, flip-flops have additional mux logic: in normal mode, it routes the data input to the FF input and the FF output to the output; in scan mode, the output of the previous FF is routed to the input of the next FF instead, turning the whole IC into one long shift register. Since the mux logic needs the scan-mode entry signal, it is easy to craft "fake" flip-flops by altering the mux in such a way that in scan mode it behaves as expected, but in normal mode it outputs a constant level. This even retains the fault-coverage property, so there is no sudden change in yield or increase in slipped-through faulty parts that would trigger an investigation.
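A toy behavioral model of that trick (a Python illustration, not a gate-level design): the "fake" flip-flop shifts honestly in scan mode, so test patterns and fault coverage look normal, but pins its output to a constant in functional mode.

```python
# Toy behavioral model of the scan-chain trick described above; an
# illustration, not a gate-level design.
class ScanFF:
    """Honest scan flip-flop: mux selects scan_in in scan mode, D otherwise."""
    def __init__(self):
        self.q = 0

    def clock(self, d, scan_in, scan_mode):
        self.q = scan_in if scan_mode else d
        return self.q

class FakeScanFF(ScanFF):
    """Tampered cell: identical under scan, stuck at a constant in mission mode."""
    def clock(self, d, scan_in, scan_mode):
        self.q = scan_in if scan_mode else 1   # ignores D in normal operation
        return self.q

honest, fake = ScanFF(), FakeScanFF()
# Scan-mode shifting is indistinguishable, so test patterns all pass:
for bit in (1, 0, 1, 1):
    assert honest.clock(0, bit, scan_mode=True) == fake.clock(0, bit, scan_mode=True)
# Mission-mode behavior diverges: the fake cell ignores its data input.
assert honest.clock(0, 0, scan_mode=False) == 0
assert fake.clock(0, 0, scan_mode=False) == 1
```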
Regards, Ferenc
Howdy all,
Regarding "security", or better, unauthorized modification of the design ...
All chips I have worked on in the past use a JTAG scan chain for (manufacturing) testing. The customer can run a similar scan on the final board or on the chips from the fab.
It would be quite difficult (but not impossible) to hide malicious logic from the scan chain. The best case (for the hacker) would be that he hides his logic but doesn't know whether it contains manufacturing defects or not.
Some people have obscure security blocks that are based on noise generated by the circuit, and they can detect the smallest deviation. I believe they monitor the power lines. I think these guys do something like that: www.eshard.com (no idea if it's patented etc.)
rudi
===============================================================
Rudolf Usselmann, ASICS World Services, LTD, www.asics.ws
Your IP Partner: SAS 12G, SATA-3, USB-3, SD/MMC/SDIO, FEC, etc.
The agony of poor quality remains long after the joy of low cost has been forgotten
Hello Everyone,
Some remarks to the discussions:
Regarding the implicit assumption of a relation between publicity and trustworthiness: the remark is true. We did not proclaim anything to be trusted; we only make the circumstances suitable for that. The rest is not (and shall not be) at our discretion.
The highest-level goal of the security concept is to assure that the IC the customer receives indeed performs the function described by the high-level HDL code. This can be subdivided into three major challenges:
- The layout corresponds to the HDL. This is the area where reproducible builds, back-annotation and other mainly SW-defined measures apply.
- The chip at the end of our line corresponds to the submitted layout. This mainly depends on the location, organization and operation of the fab, and is beyond the scope of this phase of the project (this is the point where an "open-doors" audit policy and a network of "geographically/geopolitically near" fabs apply, for example).
- The chip the user receives is the same one that came off our line. This was the topic of the Mumble session.
Regarding the side channel for HW backdoors: they are not always needed, as the goal of the backdoor may not be direct remote access only. In addition, a HW trojan may not exfiltrate anything at all; it may just "pave the way" for inserting the actual backdoor. Some use cases:
- A modified RNG (one of the referred papers describes such a scenario) that has its output space reduced to a few (e.g. 1024) distinct keys of e.g. 256 bits in length: still long enough to defeat statistical proofs, but few enough to make random numbers (private keys generated on the host, keys for encrypted comms like HTTPS, storage encryption keys for cloud backups or laptop HDDs, etc.) guessable in real time, thus making impersonation of the target, MITM, traffic collection, backup collection or HDD-encryption bypass possible. (A toy model of this scenario follows below.)
- A modified cryptographic hash function accelerator that looks for a magic number in the input data and, once found, outputs an arbitrary number (e.g. specified after the magic number) as the hash. Use case: impersonating an update server by DNS poisoning or boomerang routing and presenting a fake certificate containing the magic number and the hash from a legit certificate, then installing malware by defeating the archive signature verification in the same way.
- Tampered MMU logic that looks for some handshake sequence (e.g. writing 0xdefaced into the accumulator 100 times in a row) and then silences segfaults until the next context switch.
- Tampered privilege-control logic that looks for some handshake sequence (e.g. writing 0xdefaced into the accumulator 100 times in a row) and then ignores privilege violations of the given process.
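To make the first use case concrete, a toy model (all parameters and names are illustrative): every output looks like a well-distributed 256-bit key, yet the whole output space has only 1024 members, so an attacker who knows the construction recovers any "generated" key by enumeration.

```python
# Toy model of the reduced-output-space RNG: 256-bit keys drawn from a space
# of only 1024 values. All parameters and names are illustrative.
import hashlib
import random

SECRET_TAG = b"backdoor-tag"       # baked into the trojan, known to the attacker

def backdoored_rng():
    """Returns what looks like a 256-bit random key, but carries only
    10 bits of real entropy (1024 possible outputs)."""
    slot = random.randrange(1024)
    return hashlib.sha256(SECRET_TAG + slot.to_bytes(2, "big")).digest()

def attacker_recover(key):
    """Real-time brute force: 2**10 candidates instead of 2**256."""
    for slot in range(1024):
        if hashlib.sha256(SECRET_TAG + slot.to_bytes(2, "big")).digest() == key:
            return slot
    return None

victim_key = backdoored_rng()
assert attacker_recover(victim_key) is not None   # always succeeds
```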
Regards, Ferenc
On Mon, 2019-04-15 at 23:19 +0200, Éger Ferenc wrote:
Hello Everyone,
Some remarks to the discussions: [...]
All true, but don't you think you need to draw a line somewhere?
It seems this group is being overly paranoid (no offense intended).
For example, everything before handing the design over to the foundry is the customer's responsibility, just as it is with any commercial foundry today. If I fail to ensure and understand that my code is free of any malware, then it is my problem, and the foundry cannot be held responsible.
I think you need to decide what you are trying to accomplish, clearly document it, and allow a customer to understand the potential risks. Then you will need to work with each customer on a case-by-case basis to ensure that malware cannot be introduced into the flow (e.g. use end-to-end encryption in all communications).
And at the foundry, you need to vet your employees and contractors and implement redundant reviews of the process, to avoid any modification and malware insertions.
All this talk about implementing security is void if the human factor is not addressed.
Again, the models of commercial foundries today can be a good starting point.
Regards,
rudi
===============================================================
Rudolf Usselmann, ASICS World Services, LTD, www.asics.ws
Your IP Partner: SAS 12G, SATA-3, USB-3, SD/MMC/SDIO, FEC, etc.
The agony of poor quality remains long after the joy of low cost has been forgotten
Hello List!
Well, I would like to summarize my (personal) points of view on the LibreSilicon project.
Yes, we like to be free and open. And we like to be free of bug-doors and/or back-doors. As we democratize the fundamental process of silicon fabrication, of course we become a target. And since Snowden, we are all aware of the effort which some three-letter agencies invest in getting into other countries, companies and citizens around the world. Stuff like TAO (https://en.wikipedia.org/wiki/Tailored_Access_Operations) is just ridiculous.
Our only defense can be transparency. No secrets, no hidden features, but being robust. And no technical solution can eliminate the human factor.
So, being as open as possible, writing excellent documentation and publishing it is our educational mission.
And IMHO, we have these important long-term action items, without priority among each other:
* establish a robust infrastructure without single points of failure (like being located in one country, hosted at one company, or depending on the health/wealth of a single person)
* make the tool chain from HDL down to GDSII and fabrication reproducible, with step-wise verification/equivalence checking
* set up a tool chain for going backward: from die pictures, through layout recognition, back to human readability and computer comparability against the original HDL (a minimal sketch of the final comparison step follows below).
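For the final comparison step of that backward path, a minimal sketch (the two-gate netlists and the use of networkx are illustrative assumptions): once layout recognition has recovered a gate-level netlist from die pictures, compare it against the netlist synthesized from the original HDL as a labeled-graph isomorphism check.

```python
# Minimal sketch of the final step of the backward path: compare a netlist
# recovered from die pictures against the reference netlist as a labeled
# graph isomorphism. The tiny netlists here are illustrative only.
import networkx as nx
from networkx.algorithms import isomorphism

def netlist_graph(gates, wires):
    """gates: {instance_name: gate_type}; wires: [(driver, load), ...]"""
    g = nx.DiGraph()
    for name, gate_type in gates.items():
        g.add_node(name, gate=gate_type)
    g.add_edges_from(wires)
    return g

reference = netlist_graph({"u1": "NAND2", "u2": "INV"}, [("u1", "u2")])
recovered = netlist_graph({"x9": "NAND2", "x3": "INV"}, [("x9", "x3")])

# Instance names from image recognition are arbitrary, so match on gate type:
matcher = isomorphism.DiGraphMatcher(
    reference, recovered,
    node_match=isomorphism.categorical_node_match("gate", None),
)
print("die matches HDL" if matcher.is_isomorphic() else "MISMATCH: investigate")
```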
Is this really paranoid? I do not think so.
Best Regards, Hagen Sankowski