Hello Luke.
On 06/26/2018 09:26 AM, Luke Kenneth Casson Leighton wrote:
the strategy i was advised of by people on the librecores list summarised as: keep the IO cell absolutely standard (which basically means an analog design), and keep the muxing separate (and entirely digital).
Yes, of course.
there's only one thing missing from it and that's the current control selection.
I'll put it on my list; it is probably a little bit harder than all the other points.
identical to (based on) the elinux.org ericsson IO cell diagram (link sent already, previous message).
Common standard.
here is a list of requirements:
pull-up control which switches in a 10k (50k?) resistor
pull-down control which switches in a 10k (50k?) resistor
Low-hanging fruit:
- a "mode" setting that flips it between Open-Drain (floating if LO,
and pulled to GND if HI) and CMOS (MOSFET) Push-Pull modes. see https://en.wikipedia.org/wiki/Open_collector#MOSFET for details on Open Drain (requires a MOSFET).
also easy, I'll put it on the list.
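To make the mode semantics concrete, here is a tiny behavioural sketch in Python (the function name and the string labels are mine, purely illustrative), following the convention in the text that in open-drain mode a HI from the core pulls the pad to GND and a LO leaves it floating:

```python
def pad_drive(mode: str, out: int) -> str:
    """Resolve what the pad does for a given mode and core output bit.

    "pp" = CMOS push-pull: both MOSFETs are present, so both logic
           levels are actively driven.
    "od" = open-drain: only the pull-down MOSFET is used, so the pad
           is either pulled to GND or left floating (to be pulled up
           externally, or by the optional internal pull-up resistor).
    """
    if mode == "pp":
        return "drive_high" if out else "drive_low"
    if mode == "od":
        # convention from the text: HI -> pulled to GND, LO -> floating
        return "drive_low" if out else "float"
    raise ValueError(f"unknown mode: {mode!r}")
```

The point of keeping this as a pure digital decision is exactly the strategy quoted above: the analog pad cell stays standard, and all mode selection lives in a separate digital mux.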
- a "hysteresis" setting that controls the input schmitt trigger's
sensitivity. this is important for push-buttons for example, to stop spiking (bounce) that would be amplified and result in massive noise spikes onto all wires throughout that entire area of the chip. low, middle and high settings are needed to cover filtering ranges of say 2mhz, 5mhz, 10mhz and unlimited (disabling hysteresis). looking at STM32F documentation helps here as does this https://electronics.stackexchange.com/questions/156930/stm32-understanding-g...
Hysteresis is mostly done in a slightly different way than with a filter - a quick google search got me this picture
http://farhek.com/jd/z1158t7/hysteresis-in/97kl85/
with the typical feedback transistors controlled by the output signal. The reason for that is that analog parts like resistors and capacitors always take up too much area, so designers like to replace them with smaller transistors. Filters are ugly in silicon.
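The debouncing effect of hysteresis can be illustrated with a toy model (the class name and default thresholds are my own assumptions, borrowing the 0.3*VDD / 0.7*VDD figures from later in this mail): the buffered output only flips when the pad voltage crosses the far threshold, so bounce between the two thresholds is ignored:

```python
class HysteresisInput:
    """Toy behavioural model of a hysteresis input buffer."""

    def __init__(self, vdd: float = 3.3, lo: float = 0.3, hi: float = 0.7):
        self.v_lo = lo * vdd   # falling threshold
        self.v_hi = hi * vdd   # rising threshold
        self.state = 0

    def sample(self, volts: float) -> int:
        # Only a crossing of the *far* threshold changes the output;
        # anything between v_lo and v_hi keeps the previous state.
        if volts >= self.v_hi:
            self.state = 1
        elif volts <= self.v_lo:
            self.state = 0
        return self.state
```

Feeding it a bouncy button press such as [0.0, 2.5, 1.5, 2.6, 1.2, 0.5] volts yields one clean 0-1-0 transition instead of several spikes.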
an input-enable selector
an output-enable selector
Always on the list.
- also required is a means to change the current output: 10mA, 20mA,
30mA and 40mA are reasonable
- also the input and output really need *automatic* level-shifting,
built-in to the IO cell. so whilst there is a VDD for driving the pad (and setting the CMOS threshold levels for input), there is *also* a need for an IO VREF. this is *important*. the input and output needs to be CMOS push-pull (standard logic) whilst the IO pad needs to be switchable between OD and PP.
This goes on the list as soon as our technology node gets smaller. Currently we plan 1um with 5 Volt only, but the next node at 0.5um should handle 5 Volt as well as 3.3 Volt. Then we will need this IO-banking concept, which I am very familiar with from FPGAs.
- input threshold voltages that trigger the input from HI to LO
should be standard CMOS voltage levels (even in OD mode), which i believe is below 0.3 * VDD for "LO" and 0.7 * VDD for "HI"
Agree.
- output voltage levels should be as close to 0 as possible for LO
(0.3v or below @ nominal temperature) and as close to VDD as possible for HI (VDD-0.3v or above @ nominal temperature).
Agree - only one (big) transistor between the output and each power rail; the drain-source voltage in the switched-on state should be very small.
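Putting the input-threshold and output-level figures from the last two points in one place (the function is just a summary helper of mine, not part of any spec):

```python
def cmos_levels(vdd: float) -> dict:
    """Collect the level requirements quoted above for a given VDD:
    the input reads LO below 0.3*VDD and HI above 0.7*VDD; the output
    should come within 0.3 V of either rail at nominal temperature."""
    return {
        "v_il_max": 0.3 * vdd,   # highest voltage still read as LO
        "v_ih_min": 0.7 * vdd,   # lowest voltage already read as HI
        "v_ol_max": 0.3,         # output LO: 0.3 V or below
        "v_oh_min": vdd - 0.3,   # output HI: VDD - 0.3 V or above
    }
```

For a 3.3 V rail this gives V_IL <= 0.99 V and V_IH >= 2.31 V, with outputs expected below 0.3 V and above 3.0 V.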
- some ability to protect itself from over-driving (current fights)
when in output mode are a must.
naturally
- the ability to protect itself from *being* over-driven when in
input mode is not strictly necessary (over-voltage tolerance e.g. 5V tolerance when VDD is well below that) but would be nice to have as an option (two variants: one for ECs which need 5V tolerance and one for SoCs where it's not).
This topic and the previous one are covered by the ESD protection circuitry.
So, would you like to provide us (or me?) with a list of requirements / features: which ones your embedded SoC really needs, which are nice to have, and which are to be avoided?
documented here:
http://libre-riscv.org/shakti/m_class/
i need to update it as i've managed to track down an LPDDR3 PHY (USD $300k), and also HyperRAM (upcoming JEDEC xSPI).
I'll keep an eye on this newly-hyped HyperRAM.
the project that i'm working on needs a 512 mbyte DRAM, to be made in a minimum of 110nm (which is where you can get up to 400mhz double data-rate). twin 128 mbyte DRAMs would do fine, as probably would four 64 mbyte DRAMs (in a pinch).
Any RAM, in principle, is a good technology driver, since it consists of big, regularly structured areas. Do you know this paper: https://github.com/VLSIDA/OpenRAM/blob/master/OpenRAM_ICCAD_2016_paper.pdf ?
Just lay out your cell primitives, and everything needed to complete the RAM is generated by this script. Doing so is a common thing; what is new is that the generator itself is published and not NDA'ed by a "solution partner firm".
I think we can design our small cells ourselves as well. But this is on the longer to-do list, as we need RAM for caches etc.
i have been promised (free, monetarily-zero-charge) access to a university's 180nm foundry *IF* and *ONLY* if the entire design is libre. if you can design a DRAM that can be tested for tape-out on 180nm which has a HyperRAM interface i *might* be able to justify putting it to the sponsor.
Until now I have had no experience with 180nm, so I am a little bit hesitant. Do you have more details on that? I understand this is a Europractice or MOSIS multi-project-wafer submission, isn't it?
Okay, what time schedule are we talking about?
could you give me some idea of the die area that a 512mb DRAM would take up, say, in 110nm? and is the size dependent on geometry at all? some cells are not (you can't shrink IO pads for example, no matter what geometry), but i don't know enough about DRAM design (other than, "it's a capacitor" and a transistor)
Roughly speaking, the textbooks miss the back-stage details. Yes, the storage cell itself "is a capacitor". But the logic around it - driving long lines for reading / writing / refreshing - is analog art close to voodoo (mentioned only sporadically in textbooks). And there are huge analog sense amplifiers beside all the digital stuff like address logic, built-in self-test, refresh logic, interface logic etc. IMHO the back-stage stuff takes at least 20% of the core logic area. And yes, the IO cells are inflexible when it comes to shrinking. So, I guess, 50% of the die size is fixed overall.
Let's calculate: a 1-bit cell in 110nm would take - let's say - 0.5 by 0.5 um; multiplied by 512M (aka 536870912 cells) that is 134217728 square microns. Doubled for back-stage stuff and IOs, that means 268435456 square microns in total - which is roughly 16 x 16 mm if I am not completely wrong. Oh my god, this is huge!
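The back-of-the-envelope arithmetic above checks out (note it counts 512 Mi one-bit cells, i.e. 512 Mbit, and the 0.5 um cell pitch is an assumption); redone in Python:

```python
cell_edge_um = 0.5                    # assumed 1-bit cell pitch in 110nm
cells = 512 * 2**20                   # 512M cells = 536870912 bits
array_um2 = cells * cell_edge_um**2   # area of the bare cell array
total_um2 = 2 * array_um2             # doubled for back-stage stuff and IOs
edge_mm = total_um2 ** 0.5 / 1000     # side length of a square die in mm
print(cells, array_um2, total_um2, edge_mm)
# -> 536870912 134217728.0 268435456.0 16.384
```

So the quoted figures (134217728 um^2 for the array, 268435456 um^2 total, roughly 16 x 16 mm) are internally consistent.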
Okay, back from a break: I got a good impression of where the problems are - size matters!
Challenge accepted! Hagen