Ask the Experts - ARM Fellow Jem Davies Answers Your GPU Questions
by Anand Lal Shimpi on June 30, 2014 11:52 AM EST - Posted in
- GPUs
- Arm
- Ask the Experts
- SoCs
When we ran our Ask the Experts with ARM CPU guru Peter Greenhalgh, some of you had GPU questions that went unanswered. A few weeks ago we set out to address the issue, and ARM came back with Jem Davies to help. Jem is an ARM Fellow and VP of Technology in the Media Processing Division, responsible for setting the GPU and video technology roadmaps for the company. He also oversees advanced product development and technical investigations of potential ARM acquisitions. Mr. Davies holds three patents in the fields of CPU and GPU design and earned his bachelor's degree from the University of Cambridge.
If you've got any questions about ARM's Mali GPUs (anything on the roadmap at this point), the evolution of OpenGL, GPU compute applications, video/display processors, GPU power/performance, heterogeneous compute or more feel free to ask away in the comments. Jem himself will be answering in the comments section once we get a critical mass of questions.
quasimodo123 - Monday, June 30, 2014 - link
Why is it that the large players in desktop-class GPUs (Nvidia, AMD) have not been able to differentiate their graphics on the mobile platform? Is it because graphics technology is largely a commodity today? Or is their IP somehow specifically focused away from the mobile market?

twotwotwo - Monday, June 30, 2014 - link
GPU computing seems to be both capable of providing huge boosts where it applies, and pretty specific so far in the areas to which it applies. Are there any particular use cases where you see the GPUs on the Mali roadmap, or any other future mobile GPUs, providing a big boost?

At least one other company's talking a lot about making GPU and CPU address a single pool of memory and generally work together more closely. Interesting direction for ARM, or no?
twotwotwo - Monday, June 30, 2014 - link
Heh, you know, these were pretty well addressed earlier in the thread, so never mind. :)

JemDavies - Monday, June 30, 2014 - link
GPU Computing provides good boosts in performance efficiency for certain types of workload. Usually that workload will be typified by lots of computation across very large datasets, where the computation can be highly parallelised. In other words, we can exploit very large levels of data parallelism through thread parallelism (we have the capability of issuing thousands of threads). Typically, the code also exploits the high levels of floating-point capability found in modern GPUs, though in our case that's not always necessary in order to gain a performance efficiency advantage.

It is possible to imagine a very wide range of applications that would benefit from GPU Computing. The ones that we most often see in our ecosystem are image processing in its many forms: computational photography, computer vision, and image clean-up such as sharpening, denoising, filtering, beautification, etc.
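To make the shape of such a workload concrete, here is a minimal OpenCL C sketch of the image clean-up case Jem mentions: a 3x3 sharpening filter with one work-item (thread) per pixel, so a 1080p frame launches roughly two million threads. The kernel and its argument names are illustrative only, not taken from any ARM or Mali library.

```c
/* Minimal OpenCL C sketch (illustrative, not Mali-specific):
 * a 3x3 sharpening filter over an 8-bit greyscale image.
 * Each work-item handles exactly one pixel, which is the kind
 * of massive data parallelism a GPU exploits. */
__kernel void sharpen3x3(__global const uchar *src,
                         __global uchar *dst,
                         const int width,
                         const int height)
{
    int x = get_global_id(0);
    int y = get_global_id(1);

    /* Leave a one-pixel border untouched. */
    if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1)
        return;

    /* Classic sharpen: 5 * centre minus the four neighbours. */
    int v = 5 * src[y * width + x]
          - src[(y - 1) * width + x]
          - src[(y + 1) * width + x]
          - src[y * width + (x - 1)]
          - src[y * width + (x + 1)];

    dst[y * width + x] = (uchar)clamp(v, 0, 255);
}
```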
Another use case, one that has appeared in products announced by many of our partners, is using GPU Computing to accelerate software codecs for video: e.g. where a new standard such as HEVC or VP9 comes out, but where the platform does not (yet) support that particular standard in hardware. I think this is a useful example because it shows the framework of decision making. As I said in a previous answer, it *is* a digital world: it's all 1s and 0s, and the problems to be solved can be expressed as an algorithm, which can in turn be written in C code, Matlab, OpenCL, RTL, or many other forms. You can write C code and run it on a CPU. You can write code in various forms (possibly low-level) and run it on a DSP. You can write it in a parallel programming language like OpenCL and run it on a compute-capable GPU. Or you can design your own special-purpose hardware, write the RTL for it, and get your chip manufactured.

Because we work in many of these fields, we can take a fairly balanced view of these options. All come with different levels of difficulty, efficiency, quality of development environment, timescales, development costs, and ultimately power efficiency. What will be "best" for one Partner in one environment will not be best for all. Some won't want to dedicate hardware to a rarely-performed task. Some will need to spend hardware (silicon area = chip cost) on something because they need the power efficiency. Some want the time-to-market advantage of doing it now in software, so that it can be deployed on today's platforms while the hardware catches up later.
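Jem's point that the same algorithm can be expressed in many forms is easy to see in code. The plain-C sketch below computes exactly the same sharpening filter as the OpenCL kernel shown earlier; only the execution model changes, from thousands of GPU threads to one sequential loop, and with it the trade-offs in effort, efficiency, and time to market that he describes.

```c
/* The same 3x3 sharpen expressed as sequential C for a CPU.
 * The arithmetic is identical to the GPU kernel above; only
 * the execution model (and its trade-offs) differs. */
void sharpen3x3_cpu(const unsigned char *src, unsigned char *dst,
                    int width, int height)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int v = 5 * src[y * width + x]
                  - src[(y - 1) * width + x]
                  - src[(y + 1) * width + x]
                  - src[y * width + (x - 1)]
                  - src[y * width + (x + 1)];
            v = v < 0 ? 0 : (v > 255 ? 255 : v);
            dst[y * width + x] = (unsigned char)v;
        }
    }
}
```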
As to making CPU and GPU addresses unified into a single address space: we have been doing this since our very first GPU. We are also working towards a shared virtual memory environment, where the MMUs in the CPU and the GPU can share the same set(s) of page tables.
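For readers who want to see what shared virtual memory looks like to a programmer, the sketch below uses the standard Khronos OpenCL 2.0 SVM API (not an ARM-specific interface; the context, queue, and kernel are assumed to have been created elsewhere, and the device is assumed to report fine-grained buffer SVM support). The key property is that the CPU and GPU dereference the same pointer, with no explicit buffer copies.

```c
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>

/* Hedged sketch of shared virtual memory via OpenCL 2.0 SVM.
 * Assumes ctx/queue/kernel were created elsewhere and that the
 * device reports CL_DEVICE_SVM_FINE_GRAIN_BUFFER capability. */
void svm_demo(cl_context ctx, cl_command_queue queue,
              cl_kernel kernel, size_t n)
{
    /* One allocation, one address, visible to both CPU and GPU. */
    float *data = (float *)clSVMAlloc(
        ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
        n * sizeof(float), 0);

    for (size_t i = 0; i < n; i++)    /* CPU writes directly... */
        data[i] = (float)i;

    /* ...and the kernel receives the very same pointer. */
    clSetKernelArgSVMPointer(kernel, 0, data);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL,
                           0, NULL, NULL);
    clFinish(queue);                  /* GPU results now visible */

    clSVMFree(ctx, data);
}
```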
richough3 - Monday, June 30, 2014 - link
ARM processors are becoming a great way for home hobbyists to create their own NAS, media server, or energy-efficient desktop, but while I see new video codecs being implemented, some of the old ones like DivX seem to be ignored. Any chance of implementing some of the older codecs? It would also be nice to see better Linux support.

JemDavies - Tuesday, July 1, 2014 - link

ARM video processors such as the Mali-V500 do support DivX. We also support Linux with it.

ruthan - Monday, June 30, 2014 - link
My two cents:

1) Why are we still waiting so long for a beefy desktop ARM processor (with a TDP of 30W or 60W), and will one ever appear?

2) Will this beefy piece have a standard number of cores (4, 8), or will it be a cluster of relatively low-performance cores (32, 64+)?

3) Would it be possible to use existing PCI-E PC cards together with an ARM processor, or will desktop ARM need brand-new hardware?

4) Have you tried any experiments with x86 virtualization or emulation on ARM? Sooner or later we will need it: I don't believe that every piece of software will be recompiled for ARM; there is a huge x86 legacy.
lmcd - Tuesday, July 1, 2014 - link
What is the driver situation for Windows RT? What APIs match up with the upcoming Mali GPUs? How equipped are Windows drivers and APIs to deal with heterogeneous compute?

What does the upcoming Mali fixed-function hardware support? Is it looking toward 4K encode/decode?
And the dreamer followup:
Also, what are the chances of a bigger chip with a bigger GPU? Probably not the right person to contact (but no one is! :-( ) but has ARM considered a Windows RT laptop, PC (think NUC), or set-top box? Everyone's screaming Android to the rooftops but I'm still clamoring over the potential of an RT laptop to get ~20 hrs of battery life.
Clincher: could such a device support Steam streaming? :D :D :D
Man I worked hard to work in legitimate questions :P Thank you for putting up with us!
JemDavies - Tuesday, July 1, 2014 - link
Thanks for the questions.

Mali Midgard GPUs do support Windows DirectX APIs, and we have drivers available for Windows RT.
Mali video processors such as the Mali-V500 do indeed support resolutions up to 4K.
The follow-up question is not really one for us, but for our Partners.
lmcd - Tuesday, July 1, 2014 - link
My question was rather oriented toward gaming, where 1080p30 isn't exactly cutting-edge. Is there one that can do at least 1080p60 in the pipeline?