[TUHS] LSX issues and musing

Noel Chiappa jnc at mercury.lcs.mit.edu
Thu Aug 4 01:17:09 AEST 2022


    > Also, another problem with trying to 'push' LSX into a previously
    > un-handled operating region (e.g. large disks, but there are likely
    > others) is that there are probably things that are un-tested in that
    > previously unused operating mode, and there may be un-found bugs that
    > you trip across.

'Speak of the devil, and hear the sound of his wings.'

    >> From: Gavin Tersteeg

    >> Interestingly enough, existing large V6 RK05 images can be mounted,
    >> read from, and written to. The only limitation on these pre-existing
    >> images is that if enough files are deleted, the system will randomly
    >> crash.

    > I had a look at the source (in sys4.c, nami.c, iget.c, rdwri.c, and
    > alloc.c), but I couldn't quickly find the cause; it isn't obvious.

I don't know if the following is _the_ cause of the crashes, but another
problem (a further aspect of the '100 free inodes cache' thing) swam up out
of my brain. If you look at V6's alloc$ifree(), it says:

	if(fp->s_ninode >= 100)
		return;
	fp->s_inode[fp->s_ninode++] = ino;

LSX's version is missing the first two lines. So if more than 100 free
inodes ever accumulate on LSX, the next line marches out of the s_inode
array and smashes the fields that follow it in the in-core copy of the
super-block.
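
To see what actually gets smashed, here is roughly the in-core super-block
layout from V6's filsys.h (the field names are real, the comments are my
paraphrase from memory; I'm assuming LSX keeps the same layout, which it
pretty much has to, or it couldn't mount V6 images at all):

	struct	filsys
	{
		int	s_isize;	/* size of the i-list, in blocks */
		int	s_fsize;	/* size of the whole volume, in blocks */
		int	s_nfree;	/* count of cached free blocks (0-100) */
		int	s_free[100];	/* cached free block numbers */
		int	s_ninode;	/* count of cached free inodes (0-100) */
		int	s_inode[100];	/* cached free inode numbers */
		char	s_flock;	/* free-list lock */
		char	s_ilock;	/* i-list lock */
		char	s_fmod;		/* super-block modified flag */
		char	s_ronly;	/* mounted read-only flag */
		int	s_time[2];	/* time of last update */
	};

Once s_ninode gets past 100, the stores land first on the lock and flag
bytes, and then keep marching past the end of the structure; exactly what
breaks first is anyone's guess, which fits the 'randomly crash' symptom
nicely.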

Like I said, this is not certain to be the cause of those crashes; and it's
not really a 'bug' (as in the opening observation) - but the general sense of
that observation is right on target. LSX is really designed to operate only
on disks with fewer than 100 inodes, and trying to run it elsewhere is going
to run into issues.
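
If someone did want to push LSX past that limit, the band-aid is just the
two lines V6 already has. Something along these lines - a sketch only, since
LSX's real ifree() is single-device and won't look exactly like this (the
getfs() call is a stand-in for however LSX gets at its in-core super-block):

	ifree(ino)
	int ino;
	{
		register struct filsys *fp;

		fp = getfs();		/* stand-in for LSX's super-block access */
		if(fp->s_ninode >= 100)	/* the guard LSX leaves out */
			return;		/* cache full - just drop the number */
		fp->s_inode[fp->s_ninode++] = ino;
		fp->s_fmod = 1;
	}

Dropping the number is safe as long as LSX's ialloc(), like V6's, refills
the cache by scanning the i-list for unused inodes when it runs dry; I'd
expect it does, being cut down from V6's, but I haven't checked.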

How many similar limitations exist in other areas, I don't know.


    > From: Heinz Lycklama <heinz at osta.com>

    > Remember that the LSX and Mini-UNIX systems were developed for two
    > different purposes.

Oh, that's understood - but this just restates my observation that LSX was
designed to operate in a certain environment, and trying to run it elsewhere
is just asking for problems.

	Noel

