2.9BSD/usr/src/ucb/berknet/BUGLIST

Network Status 		January 11, 1980

BUGS
----
--	Various response messages are lost.  This includes fetch
	requests for which the file being retrieved never arrives.
	I suspect this has something to do with unreliable delivery
	of error messages, but it is not reliably reproducible.

--	The net command will create data files in the queue directories
	without the corresponding control files ("dfa..." without "cfa...").
	The cause is unknown (perhaps an error such as an invalid
	machine name); these orphaned files should be removed
	periodically.

--	The network makes no provision for errors in transit on intermediate
	machines, such as "No more processes" or "File System Overflow".
	While these occur only rarely, when they do, no message or
	notification is sent to anyone.

--	The network rendezvous protocol seems to occasionally get
	in a state where a specific file is continually retransmitted
	and never seems to get through.  This happens when both the
	host system and the network queues are overloaded, and thus
	is very unpleasant to debug.

--	The network daemons occasionally core dump.  They should not.
	

SUGGESTIONS
-----------

--	Performance Improvements:
	A number of links now running at 1200 baud could be raised to
	9600 baud without degrading the systems the network runs on.
	There are some high speed links (DMC-11's) which the network
	could use for much better performance.
	Likewise, the Bussiplexor could be used as a faster link.
	This would allow us to raise the present 100,000 character
	file length limit.
	All the links would be faster if UNIX kernel drivers were used,
	avoiding the terminal character queues and the interrupt taken
	on every character.
	At the end of every quarter the volume of traffic increases and
	transmission slows; the network becomes saturated between two
	links and requests may arrive days later.
	Increasing the link speeds would reduce these seasonal delays
	a great deal.

--	Maintenance Improvements:
	The network has become large enough that recompiling the source
	on all machines is practically impossible.
	The net command has a routing table for each remote machine
	compiled into it (defined in config.h).
	Adding a new machine to the network requires recompiling the
	net command on ALL machines.  The net command should instead
	read an external text file to build its data structures
	(a rough sketch follows this item).
	There is a program patchd, written by Bill Joy, which could
	be used to patch the binary versions of the network
	on like systems, such as the Computer Center machines.
	The network code should use the retrofit library for
	non-Version 7 systems.
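
	As an illustration only, a routine like the one below could read
	the routing information from a text file at startup instead of
	having it compiled in.  The file name and the "machine next-hop
	baud" line format are invented here; they are not part of the
	present network.

	#include <stdio.h>

	struct route {
		char	machine[32];	/* remote machine name */
		char	nexthop[32];	/* neighbor to forward through */
		long	baud;		/* speed of the first link */
	};

	/* Fill tab[] from the file; return the count, or -1 on error. */
	int
	readroutes(char *file, struct route *tab, int max)
	{
		FILE *fp = fopen(file, "r");
		char line[128];
		int n = 0;

		if (fp == NULL)
			return -1;
		while (n < max && fgets(line, sizeof line, fp) != NULL) {
			if (line[0] == '#' || line[0] == '\n')
				continue;	/* comment or blank line */
			if (sscanf(line, "%31s %31s %ld", tab[n].machine,
			    tab[n].nexthop, &tab[n].baud) == 3)
				n++;
		}
		fclose(fp);
		return n;
	}

	The table could then be searched at run time exactly as the
	compiled-in one is now, and adding a machine would mean editing
	one file on each host rather than recompiling the net command.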

--	Network mail needs to be generalized in a number of ways.
	People with accounts on many machines want their mail forwarded
	to one specific machine. Also, there are at least two other networks
	now connected to the Berkeley network (the Bell Research net and
	the Arpanet), and mail destined for those networks should be
	routed to the appropriate gateway.  Neither of these is particularly
	difficult to implement, but system mail is an important facility
	and the people in charge of the various machines on the network
	disagree on how these features are to be added, especially concerning
	issues of reliability and error reporting.
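
	As a very rough sketch of the gateway idea only:  the address
	form "net!user", the network names, and the gateway machine
	names below are all placeholders chosen for illustration, since
	none of this has been agreed upon.

	#include <string.h>

	struct gateway {
		char	*net;	/* foreign network name (placeholder) */
		char	*host;	/* machine acting as gateway (placeholder) */
	};

	static struct gateway gateways[] = {
		{ "researchnet", "gatehost1" },
		{ "arpanet",     "gatehost2" },
		{ 0, 0 }
	};

	/* For an address written here as "net!user", return the machine
	 * to forward the mail through, or NULL for a local address. */
	char *
	mailgateway(char *addr)
	{
		char *bang = strchr(addr, '!');
		struct gateway *g;

		if (bang == NULL)
			return NULL;
		for (g = gateways; g->net != 0; g++)
			if (strncmp(addr, g->net, bang - addr) == 0 &&
			    g->net[bang - addr] == '\0')
				return g->host;
		return NULL;
	}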

--	The possibility of a number of small UNIX personal machines wanting
	intermittent access to the network looms ahead.  We should attempt
	to organize the software to allow occasional use
	by other UNIX machines, without tying down a port all the time.

--	The A machine has a typesetter that can be used from the
	Computer Center machines through the network.  It would be nice
	if this facility on the A machine were also available from
	non-Computer Center machines.  Programs exist to provide this
	and have been used extensively by Bill Joy and me, but the
	Computer Center is reluctant to open up that facility for
	security and reliability reasons.
	We would like to arrange for Computer Center job numbers to
	be stored in the password file on non-CC machines, to
	allow people without accounts on A to have access to the
	typesetter.

--	Bob Fabry has suggested that "machine" be generalized to imply
	a machine/account pair, e.g. -m caf would imply "caf" on Cory,
	and -m Cory would imply "fabry" on Cory.
	The environment could provide this information (a rough sketch
	follows this item).
	It has also been suggested that the notion of a single "default"
	machine is too restrictive and that each type of command should
	have its own default machine, e.g. netlpr to A, net to B,
	netmail to C, etc.
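
	A minimal sketch of the environment idea; the variable names
	NETMACHINE and NETLOGIN are invented here and nothing currently
	looks at them.

	#include <stdio.h>
	#include <stdlib.h>

	int
	main()
	{
		char *machine = getenv("NETMACHINE");	/* e.g. "Cory" */
		char *login = getenv("NETLOGIN");	/* e.g. "fabry" */

		if (machine == NULL)
			machine = "A";	/* fall back to a built-in default */
		if (login == NULL)
			login = "";	/* empty: use the local login name */
		printf("request would go to %s as \"%s\"\n", machine, login);
		return 0;
	}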

--	Colin has developed some data compression algorithms.  On machines
	which are normally CPU idle, his algorithms could be used to
	compress data and speed up file transfer.
	Each individual host could decide whether data should be compressed,
	and each receiving machine would be able to handle both compressed
	and uncompressed data.
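
	One possible shape for this, sketched only:  a one-byte format
	flag in front of each transfer so a receiver can take either
	form.  The flag values and the expand() routine are assumptions,
	not part of the present protocol.

	#include <stdio.h>

	#define DATA_PLAIN	'P'	/* payload sent as-is */
	#define DATA_PACKED	'C'	/* payload compressed by the sender */

	/* Stand-in for whichever decompression routine is chosen. */
	static void
	expand(FILE *in, FILE *out)
	{
		int c;

		while ((c = getc(in)) != EOF)	/* real code would expand */
			putc(c, out);
	}

	/* Copy one incoming transfer to out, honoring the format flag. */
	int
	receive(FILE *in, FILE *out)
	{
		int c, flag = getc(in);

		if (flag == DATA_PLAIN) {
			while ((c = getc(in)) != EOF)
				putc(c, out);
		} else if (flag == DATA_PACKED) {
			expand(in, out);
		} else
			return -1;	/* unknown format byte */
		return 0;
	}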

--	Files being retrieved, or fetched, are created zero-length
	as the request is sent to the remote machine.  An alternative 
	would be to put the message "File being transferred." in the file to
	make things clearer.

--	File modes should be preserved across the network.  Currently
	they are set to 0600 most of the time.
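
	A sketch of one way this might look, assuming the mode can be
	carried along with the request; the function names are invented.

	#include <sys/types.h>
	#include <sys/stat.h>

	/* Sending side: look up the source file's permission bits. */
	int
	sendmode(char *path)
	{
		struct stat st;

		if (stat(path, &st) < 0)
			return 0600;	/* fall back to current behavior */
		return st.st_mode & 07777;
	}

	/* Receiving side: restore the transmitted mode on the new file. */
	int
	applymode(char *path, int mode)
	{
		return chmod(path, mode);
	}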

-- 	It would be nice if the rcs facilities and commands on various
	UNIX machines with rcs links were more accessible from machines
	without an rcs link.

--	The network was not expected to become as large as it has,
	and not much thought was given to large networks.
	The netq command only lists queues on the local machine,
	but the user is often waiting on long queues on intermediate
	machines.
	Likewise, once a request is forwarded to the nearest machine,
	the netrm command will not let the originator remove the queue
	file.
	Finally, a network status command telling people what the
	network is doing would be very helpful.

--	The file length restriction of 100,000 characters forces users
	to split their files into small pieces.  The network should
	have a way to do this splitting automatically.
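
	A rough sketch of automatic splitting under the current limit;
	the "name.1", "name.2" piece naming is an assumption.

	#include <stdio.h>

	#define PIECELEN	100000L	/* present file length limit */

	/* Split path into pieces of at most PIECELEN characters;
	 * return the number of pieces, or -1 on error. */
	int
	splitfile(char *path)
	{
		FILE *in = fopen(path, "r");
		FILE *out = NULL;
		char piece[1024];
		long count = 0;
		int c, n = 0;

		if (in == NULL)
			return -1;
		while ((c = getc(in)) != EOF) {
			if (out == NULL || count >= PIECELEN) {
				if (out != NULL)
					fclose(out);
				sprintf(piece, "%s.%d", path, ++n);
				if ((out = fopen(piece, "w")) == NULL) {
					fclose(in);
					return -1;
				}
				count = 0;
			}
			putc(c, out);
			count++;
		}
		if (out != NULL)
			fclose(out);
		fclose(in);
		return n;
	}

	The receiving end would need a matching step to reassemble the
	pieces.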

--	The underlying protocol is wasteful and/or confusing in a
	number of ways:
	* The request length should be sent in ASCII, not as a long
	  integer.
	* Remove the extra 5-character string at the beginning of each
	  transmission.
	* Compute a full checksum on the entire file in addition
	  to the per-packet checksum now provided.
	It is unlikely these will be changed, since all the daemons
	on the network machines would have to be changed at once.
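
	For the whole-file checksum, something as simple as the sum
	below would do for illustration; the actual algorithm would have
	to be agreed upon, since every daemon must compute it the same
	way.

	#include <stdio.h>

	/* Sum every character of the file into a 16-bit value. */
	unsigned int
	filesum(FILE *fp)
	{
		unsigned int sum = 0;
		int c;

		while ((c = getc(fp)) != EOF)
			sum = (sum + c) & 0xffff;
		return sum;
	}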

--	The netcp command should allow the user to default one of
	the filenames to a directory, as the cp command does.
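
	A sketch of the check, with the helper name invented and any
	machine name prefix on remote names ignored for simplicity.

	#include <sys/types.h>
	#include <sys/stat.h>
	#include <stdio.h>
	#include <string.h>

	/* If to names a directory, append the last component of from,
	 * the way cp does; otherwise use to unchanged. */
	void
	fixdest(char *from, char *to, char *dest, int len)
	{
		struct stat st;
		char *base = strrchr(from, '/');

		base = (base != NULL) ? base + 1 : from;
		if (stat(to, &st) == 0 && (st.st_mode & S_IFMT) == S_IFDIR)
			snprintf(dest, len, "%s/%s", to, base);
		else
			snprintf(dest, len, "%s", to);
	}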

--	File transfers, like remote mail, should be possible from
	the Berkeley Network to the Arpanet and the Bell Research Net.
	This is not difficult technically, but requires UNIX-like
	stream interfaces to be written for the gateways.

--	Currently the network files being transferred are
	copied into /usr/spool...  It would be nice if large files
	were simply referenced by a pointer to them, to save time
	and space.

--	The scheduler the daemon uses is very simple.
	It should have a way to age priorities and to "nice"
	transfers so that they run only after all normal ones are done.
	Also, some network uses are time-dependent.
	It would be nice if certain queue files could expire at
	a given time, since, for example, once a remote machine has
	been down long enough they are no longer useful.
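
	As an illustration of the aging and "nice" ideas only; the field
	names, the one-level-per-hour rate, and the expiration field are
	all assumptions, not the daemon's actual data structures.

	struct request {
		long	r_time;		/* time the request was queued */
		int	r_prio;		/* base priority: smaller runs sooner */
		int	r_nice;		/* nonzero: run after all normal ones */
		long	r_expire;	/* 0, or time after which to discard */
	};

	/* Effective priority: a request gains one level per hour waited.
	 * now is the current time in seconds, e.g. from time(0). */
	int
	effprio(struct request *r, long now)
	{
		return r->r_prio - (int)((now - r->r_time) / 3600);
	}

	/* A request past its expiration time is not worth sending. */
	int
	expired(struct request *r, long now)
	{
		return r->r_expire != 0 && now > r->r_expire;
	}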