The intended audience of this document is developers of language frontends targeting LLVM IR. This document is home to a collection of tips on how to generate IR that optimizes well.
As with any optimizer, LLVM has its strengths and weaknesses. In some cases, surprisingly small changes in the source IR can have a large effect on the generated code.
Beyond the specific items on the list below, it’s worth noting that the most mature frontend for LLVM is Clang. As a result, the further your IR gets from what Clang might emit, the less likely it is to be effectively optimized. It can often be useful to write a quick C program with the semantics you’re trying to model and see what decisions Clang’s IRGen makes about what IR to emit. Studying Clang’s CodeGen directory can also be a good source of ideas. Note that Clang and LLVM are explicitly version-locked, so you’ll need to make sure you’re using a Clang built from the same revision or release as the LLVM library you’re using. As always, it’s strongly recommended that you track tip-of-tree development, particularly during bring-up of a new project.
An alloca instruction can be used to represent a function-scoped stack slot, but can also represent dynamic frame expansion. When representing function-scoped variables or locations, placing alloca instructions at the beginning of the entry block should be preferred. In particular, place them before any call instructions. Call instructions might get inlined and replaced with multiple basic blocks, in which case an alloca that followed the call would no longer be in the entry basic block.
The SROA (Scalar Replacement Of Aggregates) and Mem2Reg passes only attempt to eliminate alloca instructions that are in the entry basic block. Given that SSA is the canonical form expected by much of the optimizer, if allocas cannot be eliminated by Mem2Reg or SROA the optimizer is likely to be less effective than it could be.
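For instance, a frontend emitting two local variables might place the allocas like this (a minimal sketch; the @example and @init names are placeholders):

    define i32 @example() {
    entry:
      ; Both allocas sit at the top of the entry block, ahead of any call
      ; that inlining might later expand into multiple basic blocks.
      %x = alloca i32, align 4
      %y = alloca i32, align 4
      call void @init(ptr %x, ptr %y)
      %v = load i32, ptr %x, align 4
      ret i32 %v
    }

    declare void @init(ptr, ptr)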
LLVM currently does not optimize loads and stores of large aggregate types (i.e., structs and arrays) well. As an alternative, consider loading individual fields from memory, as in the sketch below.
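For example, rather than loading a whole two-field struct with a single aggregate load, the fields can be loaded individually (a minimal sketch; %struct.Pair and @sum_fields are hypothetical names):

    %struct.Pair = type { i32, i32 }

    define i32 @sum_fields(ptr %p) {
    entry:
      ; Address and load each i32 field separately instead of performing a
      ; single "load %struct.Pair" of the whole aggregate.
      %a.addr = getelementptr inbounds %struct.Pair, ptr %p, i32 0, i32 0
      %b.addr = getelementptr inbounds %struct.Pair, ptr %p, i32 0, i32 1
      %a = load i32, ptr %a.addr, align 4
      %b = load i32, ptr %b.addr, align 4
      %sum = add i32 %a, %b
      ret i32 %sum
    }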
Aggregates that are smaller than the largest (performant) load or store instruction supported by the targeted hardware are well supported. These can be an effective way to represent collections of small packed fields.
On some architectures (X86_64 is one), sign extension can involve an extra instruction whereas zero extension can be folded into a load. LLVM will try to replace a sext with a zext when it can be proven safe, but if you have information in your source language about the range of an integer value, it can be profitable to use a zext rather than a sext.
Alternatively, you can specify the range of the value using metadata and LLVM can do the sext to zext conversion for you.
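For example, range metadata on a load might look like this (a minimal sketch; the [0, 256) range and the @widen name are arbitrary illustrations):

    define i64 @widen(ptr %p) {
    entry:
      ; !range asserts the loaded value lies in [0, 256), so the optimizer
      ; is free to rewrite the sext below into a zext.
      %v = load i32, ptr %p, align 4, !range !0
      %w = sext i32 %v to i64
      ret i64 %w
    }

    !0 = !{i32 0, i32 256}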
Internally, LLVM often promotes the width of GEP indices to machine register width. When it does so, it will default to using sign extension (sext) operations for safety. If your source language provides information about the range of the index, you may wish to manually extend indices to machine register width using a zext instruction.
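For example, on a target with 64-bit pointers, an index the source language knows to be non-negative could be widened explicitly (a minimal sketch; @element is a hypothetical name):

    define i32 @element(ptr %base, i32 %i) {
    entry:
      ; If the source language guarantees %i is non-negative, widening it
      ; with zext up front avoids the sext LLVM would otherwise use when
      ; promoting the index to pointer width.
      %idx = zext i32 %i to i64
      %addr = getelementptr inbounds i32, ptr %base, i64 %idx
      %v = load i32, ptr %addr, align 4
      ret i32 %v
    }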
LLVM will always generate correct code if you don’t specify alignment, but may generate inefficient code. For example, if you are targeting MIPS (or older ARM ISAs) then the hardware does not handle unaligned loads and stores, and so you will enter a trap-and-emulate path if you do a load or store with lower-than-natural alignment. To avoid this, LLVM will emit a slower sequence of loads, shifts and masks (or load-right + load-left on MIPS) for all cases where the load / store does not have a sufficiently high alignment in the IR.
The alignment is used to guarantee the alignment on allocas and globals, though in most cases this is unnecessary (most targets have a sufficiently high default alignment that they’ll be fine). It is also used to provide a contract to the back end saying ‘either this load/store has this alignment, or it is undefined behavior’. This means that the back end is free to emit instructions that rely on that alignment (and mid-level optimizers are free to perform transforms that require that alignment). For x86, it doesn’t make much difference, as almost all instructions are alignment-independent. For MIPS, it can make a big difference.
Note that if your loads and stores are atomic, the backend will be unable to lower an under-aligned access into a sequence of natively aligned accesses. As a result, alignment is mandatory for atomic loads and stores.
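For illustration, the alignment contract is spelled out on each access (a minimal sketch; @copy_word is a hypothetical name):

    define void @copy_word(ptr %dst, ptr %src) {
    entry:
      ; "align 1" tells LLVM the source may be unaligned, so it must use a
      ; safe (possibly slower) lowering; "align 4" promises 4-byte
      ; alignment, and the backend may emit instructions that rely on it.
      %v = load i32, ptr %src, align 1
      store i32 %v, ptr %dst, align 4
      ret void
    }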
When translating a source language to LLVM, finding ways to express concepts and guarantees available in your source language which are not natively provided by LLVM IR will greatly improve LLVM’s ability to optimize your code. As an example, since signed integer overflow is undefined behavior in C and C++, Clang can mark every signed add as “no signed wrap” (nsw), which goes a long way toward helping the optimizer reason about loop induction variables and thus generate better code for loops.
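For example, a signed increment in a language where signed overflow is undefined can carry the flag directly (a minimal sketch; @increment is a hypothetical name):

    define i32 @increment(i32 %i) {
    entry:
      ; "nsw" asserts that signed overflow would be undefined behavior
      ; here, which lets the optimizer reason more freely about induction
      ; variables built from this value.
      %inc = add nsw i32 %i, 1
      ret i32 %inc
    }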
The LLVM LangRef includes a number of mechanisms for annotating the IR with additional semantic information. It is strongly recommended that you become highly familiar with this document. The list below is intended to highlight a couple of items of particular interest, but is by no means exhaustive.
One of the most common mistakes made by new language frontend projects is to use the existing -O2 or -O3 pass pipelines as-is. These pass pipelines make a good starting point for an optimizing compiler for any language, but they have been carefully tuned for C and C++, not your target language. You will almost certainly need to experiment with a custom pass order to achieve optimal performance.
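For example, a candidate pipeline can be prototyped from the command line with opt before wiring it into your compiler (a minimal sketch using the new pass manager syntax; the particular passes shown are placeholders, not a recommendation):

    opt -passes='function(mem2reg,instcombine,simplifycfg)' -S input.ll -o output.ll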
If you didn’t find what you were looking for above, consider proposing a piece of metadata which provides the optimization hint you need. Such extensions are relatively common and are generally well received by the community. You will need to ensure that your proposal is sufficiently general so that it benefits others if you wish to contribute it upstream.
You should also consider describing the problem you’re facing on llvm-dev and asking for advice. It’s entirely possible someone has encountered your problem before and can give good advice. If there are multiple interested parties, that also increases the chances that a metadata extension would be well received by the community as a whole.
If you run across a case that you feel deserves to be covered here, please send a patch to llvm-commits for review.
If you have questions on these items, please direct them to llvm-dev. The more relevant context you are able to give to your question, the more likely it is to be answered.