Sunday, August 27, 2017

About shader compilers, IR's, and where the time is spent

Occasionally the question comes up about why we convert between various IR's (intermediate representations), like glsl to NIR, in the process of compiling a shader.  Wouldn't it be faster if we just skipped a step and went straight from glsl to "the final thing", which would be ir3 (freedreno), codegen (nouveau), or LLVM (radeonsi/radv).  It is a reasonable question, since most people haven't worked on compilers and we probably haven't done a good job at explaining all the various passes involved in compiling a shader or presenting a breakdown of where the time is spent.

So I spent a bit of time this morning with perf to profile a shader-db run (or rather a subset of a full run to keep the perf.data size manageable, see notes at end).
A flamegraph from the shader-db run, since every blog post needs a catchy picture.

Breakdown:

  • parser, into glsl: 9.98%
  • glsl to nir: 1.3%
  • nir opt/lowering passes: 21.4%
    • CSE: 6.9%
    • opt algebraic: 3.5%
    • conversion to SSA: 2.1%
    • DCE: 2.0%
    • copy propagation: 1.3%
    • other lowering passes: 5.6%
  • nir to ir3: 1.5%
  • ir3 passes:  21.5%
    • register allocation: 5.1%
    • sched: 14.3%
    • other: 2.1%
  • assembly (ir3->binary): 0.66%
This is ignoring some of the fixed overheads of the shader-db runner, and also doesn't individually capture a bunch of NIR lowering passes.  NIR has ~40 lowering passes, some of which are gl related, like nir_lower_draw_pixels and nir_lower_wpos_ytransform (because for hysterical reasons textures, and therefore FBO's, are upside down in gl).  For gallium drivers using NIR, these gl specific passes are called from the mesa state-tracker.

The other lowering passes are not gl specific, but tend to deal with general GPU shader features (ie. things that you wouldn't find in a C compiler for a cpu) that are needed by multiple different drivers.  For example, nir_lower_tex handles sampling from YUV textures, ie. inserting the instructions to do YUV->RGB conversion (since GLES and android strongly assume this is a thing that hardware can always do), lowering RECT textures, and clamping texture coords.  These lowering passes are called from the driver backend, so the driver is in control of which lowering passes are needed, including configuring individual features in passes which handle multiple things, based on what the hardware does not support directly.
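
To make that a bit more concrete, here is a rough sketch of what a backend's call into nir_lower_tex might look like.  (backend_lower_tex is a made-up name, and the exact nir_lower_tex_options fields vary between mesa versions, so treat this as illustrative rather than authoritative.)

#include "compiler/nir/nir.h"

/* Sketch only: option field names are approximate, see nir_lower_tex_options
 * in nir.h for the real set in any given mesa version. */
static void
backend_lower_tex(nir_shader *s)
{
   const nir_lower_tex_options tex_options = {
      /* hw has no RECT sampler type, rewrite to 2D + coordinate scaling */
      .lower_rect = true,
      /* insert YUV->RGB conversion math for external (YUV) textures */
      .lower_y_uv_external = ~0u,
      /* clamp the S coordinate in the shader for sampler 0 */
      .saturate_s = 0x1,
   };

   nir_lower_tex(s, &tex_options);
}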

These lowering passes are mostly O(n), and lost in the noise.

Also note that freedreno, along with the other drivers that can consume NIR directly, disable a bunch of opt passes that were originally done in glsl, but that NIR (or LLVM) can do more efficiently.  For freedreno, disabling the glsl opt passes shaved ~30% runtime off of a shader-db run, so spending 1.3% to convert into NIR is way more than offset.
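
For the curious, the NIR counterpart of those glsl opt passes is basically a loop over the general opt passes from the breakdown above, repeated until nothing makes any more progress.  A rough sketch (the exact pass list differs per driver):

#include "compiler/nir/nir.h"

/* Rough sketch of a per-driver NIR optimization loop; each pass sets
 * progress when it changed something, and we keep going until all the
 * passes run out of things to do. */
static void
optimize_nir(nir_shader *s)
{
   bool progress;

   do {
      progress = false;
      NIR_PASS(progress, s, nir_copy_prop);
      NIR_PASS(progress, s, nir_opt_dce);
      NIR_PASS(progress, s, nir_opt_cse);
      NIR_PASS(progress, s, nir_opt_algebraic);
      NIR_PASS(progress, s, nir_opt_constant_folding);
   } while (progress);
}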

For other drivers, the breakdown may be different.  I expect radeonsi/radv skips some of the general opt passes in NIR which have a counterpart in LLVM, but re-uses other lowering passes which do not have a counterpart in LLVM.


Is it still a gallium driver?


This is a related question that comes up sometimes: is it still a gallium driver if it doesn't use TGSI?  Yes.

The drivers that can consume NIR and implement the gallium pipe driver interface (freedreno a3xx+, vc4, vc5, and optionally radeonsi) are gallium drivers.  They still have to accept TGSI for state trackers which do not support NIR, and for various built-in shaders (blits, mipmap generation, etc).  Most use the shared tgsi_to_nir pass for TGSI shaders.  Note that currently tgsi_to_nir does not support all the TGSI features, just the features needed by internal shaders and for gl3/gles3 (ie. basically what freedreno and vc4 needed before the mesa state-tracker grew support for glsl_to_nir).
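
As a rough sketch of what this looks like from the pipe driver side (the mydriver_* naming is made up, and the exact tgsi_to_nir() arguments have changed over time, so don't take the details too literally):

#include "pipe/p_context.h"
#include "pipe/p_state.h"
#include "compiler/nir/nir.h"
#include "nir/tgsi_to_nir.h"

/* Sketch: accept NIR directly when the state tracker hands it over, and
 * fall back to the shared tgsi_to_nir pass for everything else (built-in
 * blit/mipmap shaders, state trackers that only speak TGSI). */
static void *
mydriver_create_fs_state(struct pipe_context *pctx,
                         const struct pipe_shader_state *cso)
{
   nir_shader *nir;

   if (cso->type == PIPE_SHADER_IR_NIR) {
      /* glsl_to_nir path: the state tracker already gave us NIR */
      nir = cso->ir.nir;
   } else {
      /* tgsi_to_nir() arguments are approximate, they have changed
       * between mesa versions */
      nir = tgsi_to_nir(cso->tokens, pctx->screen);
   }

   /* a real driver would hand this off to its backend compiler (ir3,
    * vc4, etc); returning the nir_shader keeps the sketch short */
   return nir;
}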


Notes:


Collected from shader-db run (glamor + supertuxkart + 0ad shaders) with a debug mesa build (to have debug syms and prevent inlining) but with NIR_VALIDATE=0 (otherwise results with debug builds are highly skewed).  A subset of all shader-db shaders was used to keep the perf.data size manageable.

Sunday, June 25, 2017

long overdue update

Since it has been a while since the last update, I guess it is a good time to post an update on some of the progress that has been happening with freedreno and upstream support for snapdragon boards.

freedreno / mesa

While the 17.1 release included enabling reorder support by default, many other interesting features have landed since the 17.1 branch point (so they will be included in the upcoming 17.2 release).  Many, but not all, are related to a5xx (which, I just realized, is something I forgot to blog about, but have demoed here and there).

GL/GLES Compute Shaders:

So far this is only a5xx (although a4xx seems to work similarly, and would probably not be too hard to get working if someone had the right hardware and a bit of time).  SSBOs and atomics are supported, but image support (an important part of compute shaders) is still TODO (and requires some r/e, although it seems to have a lot in common with SSBOs).  Adreno 3xx support for compute shaders appears to be more work (ie. less in common with a4xx/a5xx, and probably part of the reason that qualcomm never bothered adding support in the android blob driver).  Patches welcome, but for now a3xx compute support is far enough down my TODO list that it might not otherwise happen.
 

I know there is a lot of interest in open source OpenCL support for freedreno, and hopefully that is something that will come in the future.  But there is the big challenge of how to get opencl shaders (kernels) into a form that can be consumed by freedreno's ir3 shader compiler backend.  While there is some potential to re-use spirv_to_nir at some point, there are some complicated details: for compute kernels (ie. OpenCL), SPIR-V lifts some restrictions that spirv_to_nir relies on (little details like the requirement for structured flow control).

A5xx HW Binning Support:

Traditionally hw binning support, while a pretty big perf boost, has been kinda difficult (translation: lots of things can be done wrong, leading to difficult-to-debug GPU lockups), but this time around it wasn't so hard.  I guess experience on a3xx/a4xx has helped.  And everyone loves a ~30% fps boost in your favorite game!

This has brought performance roughly up to the level of the ifc6540/a420.  Which sounds bad, but remember we are comparing apples and oranges.  On the ifc6540 (snapdragon 805), we don't yet have upstream kernel support, so that was using a 3.10 android kernel (with bus-scaling and all the downstream tricks to optimize memory bandwidth and overall SoC performance).  But on a530 (dragonboard820c), I never had a working downstream kernel (nor did I bother backporting the upstream drm/msm driver to some ancient android kernel.. hurray!).  The upshot is that any perf #'s for a5xx don't include bus-scaling, cpufreq, etc.  I expect a pretty big performance boost on a530 once we have a way to clock up the memory/interconnects.  (Ie. on micro-benchmarks the a530 is >2x faster than the a420 on alu limited workloads, but still a bit slower than the a420 on bandwidth limited workloads, despite having higher theoretical bandwidth.)

Side note, linaro is working on an upstream solution for bus-scaling.  This is a very important improvement needed upstream for ARM SoC's, especially ones that optimize so strongly for battery life.  (Keep in mind that interconnects, which span across the SoC, and memory, are a big power consumer in a modern SoC.. so a lot of qualcomm's good performance + battery life in phones comes down to these systemwide optimizations.)  It is equivalent to slow memory clockings on some generations of nouveau, except in this case it is outside the gpu driver (ie. we aren't talking about vram on a discrete gpu), and the reason is to enable a high end phone SoC to last a couple days on battery, rather than keeping your video card from melting.

A5xx gles3.0/gl3.1 support:

Probably it would have made sense to spend time on this before compute shaders (since those are otherwise only exposed with $MESA_GL_VERSION_OVERRIDE tricks.. but hey, I was curious about how compute shaders worked).  After an assortment of small things to r/e and implement, we were just a few (~50) texture/vbo/fb formats away from gl3.1.  Nothing really exciting.  Mostly just a few weekends probing unknown format #'s and seeing which piglit format tests started passing.  The sort of thing that would have taken approximately 10 minutes with docs.. but hey, it needed to be done.

Switching to NIR by default:

This is one thing that benefits a3xx and a4xx as well as a5xx.  While freedreno has had NIR support for a while, it hasn't been enabled by default until more recently.  The issue was handling of complex dereferences (multi-dimensional arrays, arrays of structs, etc).  The problem was that freedreno's ir3 backend prefers to keep things in SSA form (since that gives the instruction scheduler more flexibility, which is pretty important with the a3xx+ instruction set architecture (ir3)).  Adding support to lower arrays to regs allowed moving the deref offset calculation into NIR, so that we wouldn't regress by turning NIR on by default.  This is useful since it cuts shader compilation time, but also because tgsi_to_nir doesn't support SSBOs, atomics, and other new shiny glsl features.  (Now we only rely on tgsi_to_nir for various legacy paths and built-in blit shaders which don't need new shiny glsl features.)
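
Conceptually, the backend's NIR lowering now looks something like the sketch below: keep ordinary values in SSA for the scheduler, but lower local arrays to registers so the array-offset math becomes plain NIR ALU instructions.  (The function name and pass selection here are approximate and from memory, not the literal ir3 code.)

#include "compiler/nir/nir.h"

/* Approximate sketch: most values stay in SSA form, which gives the ir3
 * scheduler freedom, while local arrays get lowered to registers with
 * indirect addressing, so the offset calculation is expressed in NIR
 * instead of ad-hoc backend code. */
static void
lower_arrays_sketch(nir_shader *s)
{
   NIR_PASS_V(s, nir_lower_vars_to_ssa);     /* scalars/vectors -> SSA */
   NIR_PASS_V(s, nir_lower_locals_to_regs);  /* local arrays -> registers */
}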

A5xx HW Query Support:

Adreno 5xx changed how hw queries (ie. occlusion query, time-elapsed query, etc) work.  For the better, since now we can accumulate per-tile results on the GPU.  But it required some new support in freedreno for a different sort of query, and some r/e to figure out how this actually works.  And while we had previously lied about occlusion query support (mostly to expose more than gl1.4), that isn't a very good long term solution.  In addition, time-elapsed query is useful for performance/profiling work, so it is helpful for some of the following projects.

A5xx LRZ Support:

Adreno 5xx adds another cute optimization called "LRZ" (presumably "low resolution Z", as in depth buffer).  I've spent some time r/e'ing this feature and implementing support for it in freedreno.  It is a neat new hw trick that a5xx has, which serves two purposes.
The basic idea is to have a per-quad depth value so that in the binning pass primitives can be rejected (per tile) based on depth, ie. rejecting more work earlier.  The LRZ buffer is then recycled in the draw phase to function as a free depth pre-pass, ie. rejecting primitives drawn earlier based on the z values of primitives drawn later.
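
To make the two purposes a bit more concrete, here is a purely conceptual sketch (my mental model, not actual hw or driver code), assuming an opaque, depth-tested workload where smaller z means closer:

#include <stdbool.h>

#define LRZ_W 64
#define LRZ_H 64

/* lrz[y][x] is a conservative bound: anything in that region with a depth
 * greater than this value is guaranteed to be occluded. */
static float lrz[LRZ_H][LRZ_W];

static void
lrz_clear(void)
{
   for (int y = 0; y < LRZ_H; y++)
      for (int x = 0; x < LRZ_W; x++)
         lrz[y][x] = 1.0f;   /* far plane: nothing can be rejected yet */
}

/* Binning pass: decide whether a primitive needs to be binned into the tile
 * covering this region at all, and tighten the bound when an opaque
 * primitive fully covers the region. */
static bool
lrz_bin_test(int x, int y, float prim_min_z, float prim_max_z,
             bool fully_covers_region)
{
   if (prim_min_z > lrz[y][x])
      return false;                  /* entirely occluded: don't bin it */

   if (fully_covers_region && prim_max_z < lrz[y][x])
      lrz[y][x] = prim_max_z;        /* new, nearer conservative bound */

   return true;
}

/* Draw pass: by now the LRZ buffer reflects *all* the geometry, so even
 * quads of primitives submitted before their occluders can be rejected --
 * the "free depth pre-pass" part. */
static bool
lrz_draw_test(int x, int y, float quad_min_z)
{
   return quad_min_z <= lrz[y][x];
}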

The benefit depends on how well optimized the game is.  Ie. games that are well optimized for traditional GPU architectures (ie. sorting geometry, already doing depth pre-passes, etc) won't benefit as much.. but this helps a lot for badly written games that relied on per-pixel deferred rendering.

Overall, for things like stk/xonotic, it seems like a ~5-10% win.

edit: I forgot to mention, this isn't enabled by default as it causes some issues (which seem like a sort of z-fighting) with 0ad.  Other than that, I haven't found anything that it doesn't work with.  To enable: FD_MESA_DEBUG=lrz.   It would be nice if there were some way to have driver specific flags in driconf to control things like this.

The main remaining performance trick for a5xx is UBWC (ie. bandwidth compression) + tiled textures.  I've worked out mostly how UBWC works (in particular texture layout, at least for 2d textures + mipmap, but I think we can infer how 2d arrays, 3d, etc, work from that).  Most of the infrastructure for upload/download blits (to convert to/from linear) should be easier thanks to the reorder support.  We'll see if I actually find time to implement it before the mesa 17.2 branch point.

Standardized Embedded Nonsense Hacks

Anyone who has dealt with arm (non-server) devices should be familiar with the silly-embedded-nonsense-hacks world.  In particular the non-standard boot-chain, which makes it difficult for distros to support the plethora of arm boards (let alone phones/tablets/etc) out there without per-board support.  Which was fine in the early days, but with N boards times M distros, it really doesn't scale.

Thanks to work by Mateusz Kulikowski, we now have u-boot support for the dragonboard 410c.  It's been on my TODO list to play with for a while.  But more recently I realized that u-boot, thanks to the work of many others, can provide enough of an EFI runtime-services interface for grub to work.  This provides a path forward for standardized distros on aarch64 (like fedora and opensuse), which expect UEFI, to boot on boards which don't otherwise have UEFI firmware.

So I decided to spend a bit of time pretending to be a crack smoking firmware engineer.  (Not literally, of course.. that would be stupid!)

After fixing some linker script bugs with u-boot's db410c support vs efi_runtime section, and debugging some issues with grub finding the boot disk with the help of Peter Jones (the resident grub/EFI expert who conveniently sits near me), and a couple other misc u-boot fixes, I had a fedora 26 alpha image booting on the db410c.

The next step was figuring out display, so we could have grub boot menu on screen, like you would expect on a grown-up platform.  As it turns out, on most devices, lk (little kernel, ie. what normally loads the kernel+initrd on snapdragon android devices) already supports lighting up the display, since most/all android devices put up the initial splash-screen before the kernel is loaded.  Unfortunately this was not the case with the db410c's lk.  But Archit (qcom engineer who has contributed a whole lot of drm/msm and other drm patches) pointed me at a different lk branch (among the 100's) which had msm8916 display + adv7533 dsi->hdmi bridge (like what db410c uses).  After digging through a convoluted git history, I was able to track down the relevant gpio/i2c/adv7533 patches to port to the lk branch used on db410c.

After that, I added support for lk to populate a framebuffer node, using the simple-framebuffer bindings to pass the pre-configured scanout buffer (+dimensions) to u-boot.  This, plus a new simplefb video driver for u-boot, enables u-boot to expose display support to grub via the EFI GOP protocol.  (Along the way I had to add 32bpp rgb support to lk, since u-boot and grub don't understand packed 24bpp rgb.)
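
For reference, the framebuffer node that lk ends up inserting looks conceptually like the libfdt sketch below (the real lk patch differs in the details; the property names come from the kernel's simple-framebuffer devicetree binding, and add_simplefb_node is a made-up name):

#include <stdint.h>
#include <libfdt.h>

/* Conceptual sketch of populating a simple-framebuffer node with libfdt.
 * The cell layout of "reg" assumes #address-cells = <2> and
 * #size-cells = <2> in the parent node. */
static int
add_simplefb_node(void *fdt, uint64_t fb_base, uint64_t fb_size,
                  uint32_t width, uint32_t height, uint32_t stride)
{
   int node = fdt_add_subnode(fdt, 0, "framebuffer");
   if (node < 0)
      return node;

   fdt_setprop_string(fdt, node, "compatible", "simple-framebuffer");

   /* scanout buffer that lk's display driver already configured */
   uint64_t reg[2] = { cpu_to_fdt64(fb_base), cpu_to_fdt64(fb_size) };
   fdt_setprop(fdt, node, "reg", reg, sizeof(reg));

   fdt_setprop_u32(fdt, node, "width", width);
   fdt_setprop_u32(fdt, node, "height", height);
   fdt_setprop_u32(fdt, node, "stride", stride);

   /* 32bpp rgb, since u-boot and grub don't grok packed 24bpp */
   return fdt_setprop_string(fdt, node, "format", "x8r8g8b8");
}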

All this got to the point of:



This is a fedora image, booting off of a usb disk (ie. not just rootfs on the usb disk, but also grub/kernel/initrd/dtb).  With a graphical grub menu to select which kernel to boot, just like you would expect on a PC.  The grubaa64.efi here is the vanilla distro boot-loader, and from the point of view of the distro image, lk/u-boot is just the platform's firmware which somehow provides the UEFI interface the distro media expects.  It is worth pointing out some advantages over a traditional lk->kernel boot chain:
  • booting from USB, network, etc (which lk cannot do)
  • doesn't require kernel packed in custom boot.img partition which is board specific
  • booting installer image (ie. from sd-card or network)
When the kernel starts, in early boot, it is using efifb, just like it would on a PC.  (Ie. so you can see what is going on on-screen before the hw specific drm driver kernel module is loaded.)

There are still a few rough edges.  The drm/msm driver and msm clk drivers are a bit surprised when some clks are already enabled and the display is already lit up when the kernel starts.. now we have a good reason to fix some of those issues.  And right now we don't have a good way to load a newer device tree binary (dtb) after a distro kernel update (ie. without updating u-boot, aka "the firmware").  (For simple SoC's maybe a pre-baked dtb for the life of the board is sufficient... I have my doubts about that for SoC's as complex as the various snapdragons, if for no other reason than that we haven't even figured out how to model all the features of the existing SoC's in devicetree.)  One idea is for u-boot to pass to grub the name of the board dtb file to load via EFI variables.  I've sent a very early RFC to add EFI variable support in u-boot.  We'll see how this goes; in the mean time there might be more "firmware" upgrades needed than you'd normally expect on a mature platform like x86.

For now, my lk + u-boot work is here:
and prebuilt "firmware" is here.  For now you will need to edit the distro grub.cfg to add 'devicetree' commands to load the appropriate dtb, since what is included with u-boot.img is a very minimal fdt (ie. just enough for the drivers in u-boot).