On 12/30/18, Norman Wilson <norman(a)oclsc.org> wrote:
Ld could all along have just made two passes through the
library, one to assemble the same list ranlib did in advance,
a second to load the files. (Or perhaps a first pass to
load what it knows it needs and assemble the list, and a
second only if necessary.)
That would avoid the O(N**2) situation I described in a reply to this thread.
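A toy sketch of that two-pass scheme (Python standing in for ld's internals; the member/defines/needs model and every name here are invented for illustration): pass one scans the library once to build the same symbol index ranlib would have prepared, and pass two loads members from that index until every undefined symbol is satisfied, so no member is read twice.

```python
# Hypothetical model of an ar library: each member defines some
# global symbols and refers to (needs) others.
Library = list[tuple[str, set[str], set[str]]]  # (member, defines, needs)

def link(library: Library, undefined: set[str]) -> list[str]:
    """Two-pass load: one scan builds the symbol index that
    ranlib would have precomputed, then members are pulled in
    from that index until closure."""
    # Pass 1: index every member's defined symbols.
    index: dict[str, str] = {}
    for member, defines, _ in library:
        for sym in defines:
            index.setdefault(sym, member)
    members = {m: (d, n) for m, d, n in library}

    # Pass 2: repeatedly satisfy undefined symbols from the index.
    loaded: set[str] = set()
    order: list[str] = []
    worklist = set(undefined)
    while worklist:
        sym = worklist.pop()
        member = index.get(sym)
        if member is None or member in loaded:
            continue  # unresolvable symbol, or already pulled in
        loaded.add(member)
        order.append(member)
        defines, needs = members[member]
        worklist |= needs - defines  # a member may add new undefineds
    return order
```

The O(N**2) behavior comes from rescanning the whole library once per newly discovered undefined symbol; with the index, resolution touches only the members actually loaded.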
Presumably it didn't, both to
make ld simpler and because disk I/O was much slower back
then (especially on a heavily-loaded time-sharing system,
something far less common today). I suspect it would work
fine just to do it that way today.
Probably not. For some reason linkers have always been notoriously
slow when compared to other parts of the compilation toolchain. I
suspect it's because of all the I/O involved.
Nowadays ranlib is no longer a separate program: ar
recognizes object files and maintains an index if any are
present. I never especially liked that; ar is in
principle a general tool so why should it have a special
case for one type of file? But in practice I don't know
anyone who uses ar for anything except libraries any more
(everyone uses tar for the general case, since it does a
better job of it anyway).
As you say, nobody these days uses ar for anything except object
module libraries. And just about anything you do that modifies an ar
library will require re-running ranlib afterwards. So as a
convenience and as a way to avoid cockpit errors, it makes sense to
merge the ranlib function into ar. macOS still uses an independent
ranlib, and it's a pain in the butt to have to remember to run ranlib
after each time you modify an archive.
Were I to wave flags over the matter I'd
rather push to ditch ar entirely save for compatibility
with the past, move to using tar rather than ar for object
libraries, and let ld do two passes when necessary rather
than requiring that libraries be specially prepared. As
I say, I think modern systems are fast enough that that
would work fine.
The tar file format isn't as compact as ar's, since it operates on
fixed-size blocks. Nowadays that wouldn't be a problem, but it
certainly was when disks were small, given that object modules very
often would be significantly smaller than a tar block. Making two
passes over the entire library might be OK if the file system caches
file contents, but it would still require at least one complete scan
of the library, whereas if you have an index of global symbols, you
just have to process that index (and, of course, the modules you
finally decide to load). As I said earlier, linkers are slow enough
as is--we don't need anything to make them less efficient.
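To make that trade-off concrete, here is a toy indexed archive in Python (a deliberately simplified stand-in for the real ar symbol table, not the actual on-disk format; all names are invented): with the index up front, the linker reads only the index and then seeks directly to the members it wants, never scanning the rest of the library.

```python
import io
import struct

def write_archive(members: dict[str, bytes], symbols: dict[str, str]) -> bytes:
    """Toy indexed archive: a symbol->offset index up front,
    then the member data.  symbols maps each global symbol to
    the member that defines it."""
    body = io.BytesIO()
    offsets: dict[str, int] = {}
    for name, data in members.items():
        offsets[name] = body.tell()
        body.write(struct.pack(">I", len(data)) + data)
    index = "\n".join(
        f"{sym} {offsets[symbols[sym]]}" for sym in symbols
    ).encode()
    return struct.pack(">I", len(index)) + index + body.getvalue()

def load_symbol(archive: bytes, sym: str) -> bytes:
    """Read only the index, then seek straight to the one member
    that defines sym -- no scan of the whole library."""
    (ilen,) = struct.unpack_from(">I", archive, 0)
    index = dict(line.split() for line in archive[4:4 + ilen].decode().splitlines())
    off = 4 + ilen + int(index[sym])          # absolute offset of the member
    (mlen,) = struct.unpack_from(">I", archive, off)
    return archive[off + 4 : off + 4 + mlen]
```

Without the index, resolving one symbol means reading every member's symbol table in turn; with it, the cost is one small read plus one seek per loaded member, which is why the precomputed index has survived even on fast disks.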