--- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68
On Wed, Jun 27, 2018 at 3:44 AM, Mohammad Amin Nili manili.devteam@gmail.com wrote:
Dear Luke,
I’m currently working on a list of requirements and tech specs for the GPU. Unfortunately I did not find any documents which describe the GC800’s specs completely (e.g. power consumption, area estimation and so on). Would you mind helping me find a proper document with the complete info? Otherwise, is it possible for you to describe the GPU specs completely? This is what I’ve found so far from your emails (it would be great if you could fill in the question-mark parts):
Deadline = ?
about 12-18 months, which is really tight. if an FPGA (or simulation) plus the basics of the software driver are at least prototyped by then it *might* be ok.
if using nyuzi as the basis it *might* be possible to begin the software port in parallel because jeff went to the trouble of writing a cycle-accurate simulation.
The GPU must be matched by the Gallium3D driver
that's the *recommended* approach, as i *suspect* it will result in less work than, for example, writing an entire OpenGL stack from scratch.
RTL must be sufficient to run on an FPGA.
a *demo* must run on an FPGA as an initial milestone.
Software must be licensed under LGPLv2+ or BSD/MIT.
and no other licenses. GPLv2+ is out.
Hardware (RTL) must be licensed under BSD or MIT with no "non-commercial" clauses.
Any proposals will be competing against the Vivante GC800 (using the Etnaviv driver).
in terms of price, performance and power budget, yes. if you look up the numbers (triangles/sec, pixels/sec, power usage, die area) you'll find it's really quite modest. nyuzi right now requires FOUR times the silicon area of e.g. the MALI400 to achieve the same performance, meaning that the power usage alone would be well in excess of the budget.
The GPU is integrated (like the Mali400), so all the GPU needs to do is write to an area of memory (the framebuffer, or a region of it). The SoC - which in this case has a RISC-V core and peripherals such as the LCD controller - will take care of the rest. In this architecture the GPU, the CPU and the peripherals are all on the same AXI4 shared-memory bus, and they all have access to the same shared DDR3/DDR4 RAM. As a result the GPU will use AXI4 to write directly into the framebuffer, and the rest will be handled by the SoC.
The job must be done by a team that shows sufficient expertise to reduce the risk. (Do you mean a team with good CVs? What about if the team shows you an acceptable FPGA prototype?)
that would be fantastic as it would demonstrate not only competence but also commitment, and it would take out the "risk" of being an "unknown", entirely.
I’m talking about a team of students who do not have big industrial CVs but who know how to handle this job (just like RocketChip or MIAOW etc.).
works perfectly for me :)
btw i am happy to put together a crowd-funding campaign (that's already underway) that would also help fund this effort.
l.