Summary: --load-average changes order of jobs
Submitted by: aheider
Submitted on: Fri 10 Jan 2020 05:23:38 PM UTC
Severity: 3 - Normal
Item Group: Bug
Assigned to: None
Discussion Lock: Any
Component Version: 4.2.1
Operating System: POSIX-Based
Fixed Release: None
Triage Status: None
When using -l, it seems the order of jobs is different compared to not using it.
For some projects, this difference is enough to break the build.
For the cases where I observed this, it was always a bug in a Makefile.
Examples are GNU screen and bash. Both projects create a header using a make
rule, with missing dependencies for the files including it.
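The pattern in the screen and bash cases can be sketched like this (file names are hypothetical, loosely modeled on screen's generated comm.h; this is not the actual makefile):

```make
# Hypothetical sketch of the missing-dependency pattern.

# comm.h is created by a make rule...
comm.h: gen-comm.sh
	sh gen-comm.sh > $@

# BUG: main.c does '#include "comm.h"', but main.o does not list
# comm.h as a prerequisite.  A serial build may happen to create
# comm.h first; with -j the compile can start before it exists.
main.o: main.c
	$(CC) -c -o $@ $<

# FIX: declare the real dependency so make orders the two jobs:
# main.o: main.c comm.h
```

With the fix in place, make knows it must finish comm.h before starting main.o, no matter how many jobs run in parallel.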
Another example is MAME, where the directory for a to-be-created object file
was not created in time (missing " | $(OBJDIR)").
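The MAME case is the textbook use of an order-only prerequisite: the object directory must exist before the compiler writes into it, but its timestamp should never make objects stale. A minimal sketch (variable names are illustrative):

```make
OBJDIR := build

# Everything after the | is an order-only prerequisite: $(OBJDIR)
# must exist before the recipe runs, but its timestamp is ignored
# when deciding whether the object is out of date.
$(OBJDIR)/%.o: %.c | $(OBJDIR)
	$(CC) -c -o $@ $<

$(OBJDIR):
	mkdir -p $@
```

Without the "| $(OBJDIR)" part, a parallel build can schedule the compile before the mkdir has run.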
Building those projects without -l always succeeds.
For distros that compile many packages in parallel, -l is a gift, but it
breaks random packages too easily. Is it possible to not change the behavior
when -l is used?
I can easily reproduce this locally (8-core box) with GNU screen 4.6.2 using
make 4.2.1 or 4.2.93:
while true; do make clean; make -j8 -l0.5 || break; done
screen.h:48:10: fatal error: comm.h: No such file or directory
It's not the -l option that causes this, it's the -j option. If you ran with
-j8 without -l you'd still see this.
It's also not that make changes the order of jobs: make always walks the graph
of prerequisites in the same order, regardless of whether -j is provided or
not.
However, by introducing parallelism some underlying assumptions that people
were making in their makefiles no longer hold. Without -j, if a rule says
"foo: bar baz" then you could assume that "bar" would be completed before
"baz" was started.
That makefile is not "correct", because if the target "baz" requires "bar" to
be completed it should list that as a prerequisite: "baz: bar".
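As a sketch, the implicit-ordering assumption and its fix look like this (using the placeholder targets from above):

```make
# Relies on serial ordering: without -j, make finishes "bar" before
# it starts "baz", so "baz" can quietly use bar's output.
foo: bar baz

# If baz really needs bar's output, say so explicitly; then the
# ordering is guaranteed under -j as well:
baz: bar
```

With the explicit prerequisite, make may still build unrelated targets in parallel, but it will never start "baz" before "bar" is done.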
But many makefile authors (especially those who work with versions of make
that don't support parallel builds) don't bother with defining explicit
ordering, and just rely on the implicit ordering.
Parallel builds cannot work in this situation, and there's no way to make them
work without defeating the entire purpose of parallelism. If make has to
ensure that each target finishes before the next one starts then obviously you
cannot have parallel builds.
So your only options are to fix the makefile, or to not use -j.
All that means is that it's harder to get exactly the right number of jobs to
run in parallel without -l.
The -l option doesn't change anything except add one extra check that make
performs before starting a job. So instead of "I have a job to run, have I
exceeded my -j level?", make uses "I have a job to run, have I exceeded my -j
level OR is the load higher than the -l setting?" That's the only difference
with and without -l. There's no difference in the order in which jobs are
selected to be run.
Just to note that a load of 0.5 when you have 8 CPUs is pretty small. A
setting of "-j8 -l0.5" basically tells make "run 8 jobs in parallel BUT if the
load is higher than one half of one CPU's worth, don't start more".
Load averages on multi-core systems are something of a dark art but _more or
less_ you get 1 for every core. So if you had 8 cores and they were 100% busy
with no other processes waiting to run you'd expect your load average to be
about 8.
You may be thinking that the load is averaged across CPUs so that a load of
1.0 means "all CPUs busy" and 0.5 means you want about half your system to be
busy but that's not how it works.