On 3/8/20 9:39 PM, Warner Losh wrote:
> floppy controller supports the full range of crazy that once roamed
> the earth
Does anyone have any knee-jerk reaction to the idea of putting a 5¼"
floppy drive on a USB-to-floppy (nominally 3½") adapter?
Do I want to avoid tilting at this windmill?
Am I better off installing the 5¼" floppy inside the computer and
connecting directly to the motherboard?
I only want to pull files off of 5¼" disks. At most I'll want to
dd the disks to an image.
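To make that concrete, here is a minimal sketch of the imaging step,
assuming the drive shows up as /dev/fd0 (the device name is a guess and
will differ behind a USB adapter):

  # Image the whole disk; noerror keeps reading past bad sectors,
  # sync pads short reads so sector offsets stay aligned.
  dd if=/dev/fd0 of=floppy.img bs=512 conv=noerror,sync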
That being said, I wonder if I should also be collecting any different
types of images. (This may mean mobo instead of USB.)
Thank you for any pro-tips that you can provide.
--
Grant. . . .
unix || die
Moving to COFF ....
On Tue, Mar 17, 2020 at 10:58 AM Larry McVoy <lm(a)mcvoy.com> wrote:
> As much as I don't care for Forth, man do I wish it had become the standard
> for boot proms, it might not be my cup of tea but I could make it do what
> I needed it to do.
Amen bro... Sun did a nice job on that. Although the Alpha Boot ROMs
were pretty good too. At least they were UNIX-like and extensible like
the Sun boot ROMs. HP's were better than a PC BIOS, but they were pretty
awful.
> Can't say the same for UEFI, I disable that crap.
>
Well, it beats the crap out of IBM's BIOS, but that bar is very low. UEFI
was sort of a 'camel' (a horse designed by committee) and too many people
peed on it. Intel created EFI to try to fix BIOS and then people went
nuts. Apple's version is the best of them, but as you say, they all suck
if you have seen anything better. A big problem IMO is that EFI tried to
be somewhat compatible. In the end, it was not, so you got the worst of
both worlds (new interfaces and legacy functionality).
Server systems that support IPMI have Minix under the covers on a
coprocessor; running on a coprocessor is also how Apple does UEFI. With
IPMI, it is sort of sad that more of it is not really exposed, but you need the
added cost of the coprocessor. Plus it adds a new security domain, which
many people complain about. I try to know as little about it as possible
to get my work done, but exposing more of that interface might help.
Hi all, I'm looking for an interactive tool to help students learn the
Unix/Linux command line. I still remember the "learn" tool. Is there an
equivalent for current systems?
I have tried to forward-port the old learn sources to current Linux but
my patience ran out :-)
Thanks in advance for any tips/pointers.
Cheers, Warren
Given the recent discussion of pipes and networking ... I'm passing this
along for those that might not have seen it.
---------- Forwarded message ---------
From: Jack Haverty via Internet-history
Date: Tue, Mar 10, 2020 at 1:30 PM
Subject: Re: [ih] NCP and TCP implementations
To: *Internet-History*
The first TCP implementation for Unix was done in PDP-11 assembly
language, running on a PDP-11/40 (with way too little memory or address
space). It was built using code fragments excerpted from the LSI-11
TCP implementation provided by Jim Mathis, which ran under SRI's
home-built OS. Jim's TCP was all written in PDP-11 assembler. The code
was cross-compiled (assembled) on a PDP-10 Tenex system, and downloaded
over a TTY line to the PDP-11/40. That was much easier and faster than
doing all the implementation work on the PDP-11.
The code architecture involved putting TCP itself at the user level,
communicating with its "customers" using Unix InterProcess
Communications facilities (Rand "Ports"). It would have been
preferable to implement TCP within the Unix kernel, but there was simply
not enough room due to the limited address space available on the 11/40
model. Later implementations of TCP, on larger machines with twice the
address space, were done in the kernel. In addition to the Berkeley BSD
work, I remember Gurwitz, Wingfield, Nemeth, and others working on TCP
implementation for the PDP-11/70 and Vax.
The initial Unix TCP implementation was for TCP version 2 (2.5 IIRC), as
was Jim's LSI-11 code. This 2.5 implementation was one of the players
in the first "TCP Bakeoff" organized by Jon Postel and carried out on a
weekend at ISI before the quarterly Internet meeting. The PDP-11/40 TCP
was modified extensively over the next year or so as TCP advanced
through 2.5, 2.5+, 3, and eventually stabilized at TCP4 (which it seems
we still have today, 40+ years later!)
The Unix TCP implementation required a small addition to the Unix kernel
code, to add the "await" and "capac" system calls. Those calls were
necessary to enable the implementation of user-level code where the
traditional Unix "pipeline" model of programming
(input->process->process...->output) was inadequate for use in
multi-computer programming (such as FTP, Telnet, etc. - anywhere
more than one computer was involved).
The code to add those new system calls was written in C, as was almost
all of the Unix OS itself. The new system calls added the functionality
of "non-blocking I/O" which did not previously exist. It involved very
few lines of code, since there wasn't room for very many more
instructions, and even so it required finding more space by shortening
many of the kernel error messages to save a few bytes here and there.
Randy Rettberg and I did that work, struggling to understand how Unix
kernel internals worked, since neither of us had ever worked with Unix
before, even as users. We did not try to "get it right" by making
significant changes to the basic Unix architecture. That came later
with the Berkeley and Gurwitz efforts. The PDP-11/40 was simply too
constrained to support such changes, and our mission was to get TCP
support on the machine, rather than develop the OS.
I think I speak authoritatively here, since I wrote and debugged that
first Unix TCP code. I still have an old, yellowing listing of that
first Unix TCP.
FWIW, if there's interest in why certain languages were chosen, there's
a very simple explanation of why the Unix implementation was done in
assembler rather than C, the native language of Unix. First, Jim
Mathis' code was in assembler, so it was easy to extract large chunks
and paste them into the Unix assembler implementation. Second, and
probably most important, was that I was very accustomed to writing
assembler code and working at the processor instruction level. But I
barely knew C existed, and was certainly not proficient in it, and we
needed the TCP working fast for use in other projects. The choice was
very pragmatic, not based at all on technical issues of languages or
superiority of any architecture.
/Jack Haverty
On 3/9/20 11:14 PM, vinton cerf via Internet-history wrote:
> Steve Kirsch asks in what languages NCP and TCP were written.
>
> The first Stanford TCP implementation was done in BCPL by Richard Karp.
> Another version was written for the PDP-11/23 by Jim Mathis, but it is not clear in
> what language. Tenex was probably done in C at BBN. Was 360 done in PL/1??
> Dave Clark did one for IBM PC (assembly language/??)
>
> Other recollections much appreciated.
>
> vint
--
Internet-history mailing list
Internet-history(a)elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history
I started using 'cpio' in the '80s and still use it, especially for
transferring files and complete directories between various UNIX
versions like SCO UNIX 3.2V4.2, Tru64, and HP-UX 11i.
The main option I use with cpio is (of course) "-c" and only occasionally "-u".
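For example, a minimal sketch of how such a transfer goes (directory and
archive names are placeholders):

  # Pack: -o copy-out, -c portable ASCII headers, -v verbose.
  find somedir -depth -print | cpio -ocv > somedir.cpio

  # Unpack on the destination: -i copy-in, -d create directories,
  # -m preserve mtimes, -u overwrite existing files unconditionally.
  cpio -icvdum < somedir.cpio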
Hi,
while exploring the gopherspace (YES! Still existing,
growing community) I found this gopher page:
gopher://pdp11.tk/1
which can be reached with Lynx for example.
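For instance, assuming your Lynx build includes gopher support:

  lynx gopher://pdp11.tk/1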
Unfortunately I cannot evaluate the items there, but
maybe it is worth a look by someone knowledgeable.
Cheers!
mcc
I work at an astronomy facility. I get to do some fun dumpster diving.
I recently pulled out of the trash a plugboard with a male and a
female D-Sub 52 connector. 3 rows of pins, 17-18-17. I took the
connectors off the board: there's nothing back there, so this thing only
ever existed so you could plug the random cable you found into it and its
friends to see what the cable fit.
I can't find much evidence that a 52-pin D-Sub ever existed.
Is this just Yet Another Physics Experiment thing where, hey, if your
instrument already costs three million dollars, what's a couple of grand
for machining custom connectors? Or was it once a thing?
(also posting to cc-talk)
Adam
Not UNIX, not 52-pin, but old, old and serial
Your mission, should you choose to accept it, is to save data from a
computer that should have died aeons ago
...
Tap into the serial line – what could be simpler?
Alas, the TI was smart enough to spot the absence of the rattly old
beast ("the software wouldn't print without some of the seldom-used
serial control lines functioning," explained Aaron) so the customer
was asked to bring in the printer as well.
https://www.theregister.co.uk/2020/02/24/who_me/
---------- Forwarded message ---------
From: Adam Thornton <athornton(a)gmail.com>
Date: Tue, Feb 11, 2020 at 5:56 PM
Subject: Re: [COFF] Old and Tradition was [TUHS] V9 shell
To: Clem Cole <clemc(a)ccc.com>
As someone working in this area right now... yeah, and that’s why
traditional HPC centers do not deliver the services we want for projects
like the Vera Rubin Observatory’s Legacy Survey of Space and Time.
Almost all of our scientist-facing code is Python, though a lot of the
performance-critical stuff is implemented in C++ with Python bindings.
The incumbents are great at running data centers like they did in 1993.
That’s not the best fit for our current workload. It’s not generally the
compute that needs to be super-fast: it’s access to arbitrary slices
through really large data sets that is the interesting problem for us.
That’s not to say that LFortran isn’t cool. It really, really is, and
Ondřej Čertík has done amazing work in making modern Fortran run in a
Jupyter notebook, and the implications (LLVM becomes the ImageMagick of
compiled languages) are astounding.
But...HPC is no longer the cutting edge. We are seeing a Kuhnian paradigm
shift in action, and, sure, the old guys (and they are overwhelmingly guys)
who have tenure and get the big grants will never give up FORTRAN, which
after all was good enough for their grandpappy and therefore good enough
for them. But they will retire. Scaling out is way way cheaper than
scaling up.
On Tue, Feb 11, 2020 at 11:41 AM Clem Cole <clemc(a)ccc.com> wrote:
> moving to COFF
>
> On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike(a)gmail.com> wrote:
>
>> My general mood about the current standard way of nerd working is how
>> unimaginative and old-fashioned it feels.
>>
> ...
>>
>> But I'm a grumpy old man and getting far off topic. Warren should cry,
>> "enough!".
>>
>> -rob
>>
>
> @Rob - I hear you and I'm sure there is a solid amount of wisdom in your
> words. But I caution that just because something is old-fashioned does
> not necessarily make it wrong (much less bad).
>
> I ask you to take a look at the Archer statistics of code running in
> production (Archer is a large HPC site in Europe):
> http://archer.ac.uk/status/codes/
>
> I think there are similar stats available for places like CERN, LRZ, and
> some of the US labs, but I know of these, so I point to them.
>
> Please note that Fortran is #1 (about 80%) followed by C @ about 10%,
> C++ @ 8%, Python @ 1% and all the others at 1%.
>
> Why is that? The math has not changed ... and open up any of those codes
> and what do you see: solving systems of differential equations with linear
> algebra. It's the same math that human 'computers' did by hand in the 1950s.
>
> There are no 'tensor flows' or ML searches running Spark in there. Sorry,
> Google/AWS et al. Nothing 'modern' and fresh -- just solid simple science
> being done by scientists who don't care about the computer or sexy new
> computer languages.
>
> IIRC, you trained as a physicist, so I think you understand their thinking. *They
> care about getting their science done.*
>
> By the way, a related thought comes from a good friend of mine from
> college who used to be the Chief Metallurgist for the US Gov (NIST in
> Colorado). He's back in the private sector now (because he could not
> stomach current American politics), but he made an important
> observation/comment to me a couple of years ago. They have 60+ years of
> metallurgical data that he and his peeps have been using with known
> Fortran codes. If we gave him new versions of those analytical programs
> now in your favorite new HLL - pick one - your Go (which I love), C++
> (which I loathe), DPC++, Rust, Python - whatever, the scientists would have
> to reconfirm previous results. They are not going to do that. It's not
> economical. They 'know' how the data works, the types of errors they
> have, how the programs behave, *etc.*
>
> So to me, the bottom line is that just because it's old-fashioned does not
> make it bad. I don't want to write an OS in Fortran-2018, but I can write a
> system that supports code compiled with my sexy new Fortran-2018 compiler.
>
> That is to say, the challenge for >>me<< is to build him a new
> supercomputer that can run those codes for him without changing what they
> are doing, and have them scale to 1M nodes, *etc.*
>
> _______________________________________________
> COFF mailing list
> COFF(a)minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>