[COFF] A little networking tool to reduce having to run emulators with privilege

Steffen Nurpmeso steffen at sdaoden.eu
Tue Sep 22 07:38:34 AEST 2020


Adam Thornton wrote in
 <23BB3E13-7306-4BB6-9566-DF4C61DE9799 at gmail.com>:
 |I finally got around to tidying up a little shell tool I wrote that
 |turns a network interface you specify into a bridge, and then creates
 |some tap devices with owning user and group you specify and attaches
 |them to that bridge.
 |
 |This gets around having to run emulated older systems under sudo if
 |you want networking to work.
 |
 |It’s mostly intended for the PiDP-11/simh, but it also works fine with
 |klh10 and TOPS-20.
 |
 |Maybe it will be useful to someone else.
 |
 |https://github.com/athornton/brnet

Bridges usually do not work with wireless interfaces; you need some
veth for that.  And those br* tools are not available everywhere
either (grr).  Have you ever considered network namespaces?
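
(For a first taste of those, plain iproute2 suffices; a tiny demo,
nothing more:

   ip netns add demo                      # a fresh, empty network world
   ip netns exec demo ip link set lo up   # only loopback exists in there
   ip netns exec demo ping -c1 127.0.0.1
   ip netns del demo
)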

After over a year of using proxy_arp based pseudo bridging (cool!)
i finally wrapped my head around veth, and with it and Linux
network namespaces i lose 40 percent ping response speed, but the
need for configuration is drastically reduced.

What i have is this, maybe you find it useful.  It does not need
any firewall rules (except allowing 10.0.0.0/8).
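
(With iptables that allow could look about like this; a sketch
only, actual chains and policies vary:

   iptables -A FORWARD -s 10.0.0.0/8 -j ACCEPT
   iptables -A FORWARD -d 10.0.0.0/8 -j ACCEPT

And if the VMs shall reach the outside world, too, some NAT in
addition, for example

   iptables -t nat -A POSTROUTING -s 10.0.0.0/8 ! -d 10.0.0.0/8 -j MASQUERADE
)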

In my net-qos.sh (which is my shared-everywhere firewall and tc
script) i have:

  vm_ns_start() {
        #net.ipv4.conf.all.arp_ignore=0
     sysctl -w \
        net.ipv4.ip_forward=1

     # veth pair: v_n remains on the host, v_i moves into the namespace
     ${ip} link add v_n type veth peer name v_i
     ${ip} netns add v_ns
     ${ip} link set v_i netns v_ns

     # host side: 10.0.0.1/8 lives on v_n
     ${ip} a add 10.0.0.1/8 dev v_n
     ${ip} link set v_n up
     ${ip} route add 10.0.0.1 dev v_n

     # namespace side: loopback, then a bridge v_br with 10.1.0.1/8,
     # v_i enslaved to it, and a default route back to the host
     ${ip} netns exec v_ns ${ip} link set lo up
     #if [ -z "$BR" ]; then
     #   ${ip} netns exec v_ns ip addr add 10.1.0.1/8 dev v_i broadcast +
     #   ${ip} netns exec v_ns ip link set v_i up
     #   ${ip} netns exec v_ns ip route add default via 10.0.0.1
     #else
        ${ip} netns exec v_ns ${ip} link set v_i up
        ${ip} netns exec v_ns ${ip} link add v_br type bridge
        ${ip} netns exec v_ns ${ip} addr add 10.1.0.1/8 dev v_br broadcast +
        ${ip} netns exec v_ns ${ip} link set v_br up
        ${ip} netns exec v_ns ${ip} link set v_i master v_br
        ${ip} netns exec v_ns ${ip} route add default via 10.0.0.1
     #fi
  }
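
(Whether it all came up can be checked with the usual queries,
for instance:

   ${ip} netns list                             # should list v_ns
   ${ip} netns exec v_ns ${ip} addr show v_br   # should show 10.1.0.1/8
   ${ip} netns exec v_ns ping -c1 10.0.0.1      # namespace -> host
)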

  vm_ns_stop() {
     ${ip} netns del v_ns   # <- that easy it is!

        #net.ipv4.conf.all.arp_ignore=1
     sysctl -w \
        net.ipv4.ip_forward=0
  }

(Deleting the namespace takes the veth peer and the bridge in it
along with it.)

And then, in my /x/vm directory, there is the qemu .ifup.sh script

  #!/bin/sh -
  # qemu invokes this with the name of the just created tap device in $1

  if [ "$VMNETMODE" = bridge ]; then
     ip link set dev "$1" master v_br
     ip link set "$1" up
  elif [ "$VMNETMODE" = proxy_arp ]; then
     echo 1 > "/proc/sys/net/ipv4/conf/$1/proxy_arp"
     ip link set "$1" up
     ip route add "$VMADDR" dev "$1"
  else
     echo >&2 "Unknown VMNETMODE=$VMNETMODE"
     exit 1
  fi

Of course qemu creates the actual tap device for me here.
The .ifdown.sh script i omit, it is not used in this "vbridge"
mode.  It would do nothing really, and it cannot be called anyway
because qemu now chroots into /x/vm (which needs dev/random and
dev/urandom only because libcrypt insists on opening them, even
though it would not really need them, but i cannot help that).
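
(In the .run.sh below, $net is what wires that script in, via
qemu's script= hook; roughly like this, the id is invented here:

   net="-netdev tap,id=n0,script=/x/vm/.ifup.sh,downscript=no \
      -device virtio-net-pci,netdev=n0,mac=$vmmac"

downscript=no, since the ifdown script cannot run after the chroot.)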

This then gets driven by a .run.sh script, which is called by the
real per-VM scripts, like

  #!/bin/sh -
  # root.alp-2020, steffen: Sway

  debug=
  vmsys=x86_64
  vmname=alp-2020
  vmimg=.alp-2020-amd64.vmdk
  vmpower=half
  vmmac=52:54:45:01:00:12
  vmcustom= #'-boot menu=on -cdrom /x/iso/alpine-virt-3.12.0-x86_64.iso'

  . /x/vm/.run.sh
  # s-sh-mode

and which finally invokes qemu like so

  echo 'Monitor at '$0' monitor'
  eval exec $sudo /bin/ip netns exec v_ns /usr/bin/qemu-system-$vmsys \
     -name $VMNAME $runas $chroot \
     $host $accel $vmdisp $net $usb $vmrng $vmcustom \
     -monitor telnet:127.0.0.1:$monport,server,nowait \
     -drive file=$vmimg,index=0,if=ide$drivecache \
     $redir

Users in the vm group may use that sudo; qemu is executed in the
v_ns network namespace, drops privileges via runas='-runas vm',
and is jailed via chroot='-chroot .'.  It surely could be more
sophisticated, more cgroups, whatever.  Good enough for me.
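
(The sudo permit for this boils down to a single sudoers line;
a sketch, the path may differ on your system:

   %vm ALL = (root) NOPASSWD: /bin/ip netns exec v_ns *
)
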
That .run.sh enters the qemu monitor for me via

   if [ "$1" = monitor ]; then
      echo 'Entering monitor of '$VMNAME' ('$VMADDR') at '$monport
      eval exec $sudo /bin/ip netns exec v_ns telnet localhost $monport
      exit 5

and the VM itself via ssh

   elif [ "$1" = ssh ]; then
      echo 'SSH into '$VMNAME' ('$VMADDR')'
      doex=exec
      # inside tmux: recolour the active window for the duration of
      # the ssh session, so a VM shell is visually distinguishable
      if command -v tmux >/dev/null 2>&1 && [ -n "$TMUX_PANE" ]; then
         tmux set window-active-style bg=colour231,fg=colour0
         doex=
      fi
      ( eval $doex ssh $VMADDR )
      # and back to the default colours thereafter
      exec tmux set window-active-style bg=default,fg=default
      exit 5

(It seems i use VMs in a Donald Knuth emacs colour scheme, at
least more or less.  VMs here, VM there.  Hm.)

Overall this network namespace thing is pretty cool.  Especially
since, compared to FreeBSD jails, for example, you can simply run
a single command in one, as the sketch below shows.  An unfair
comparison, though.  What i'd really wish for is a system which is
totally embedded in that namespace/jail idea.  I.e., _one_ /, and
then only moving targets mounted via overlayfs into "per-jail"
directories.  Never found time nor motivation to truly try this
out.
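
For example, with the v_ns namespace from above:

   ip netns exec v_ns ping -c1 10.1.0.1   # one command, namespaced view
   ip netns exec v_ns sh                  # or a whole interactive shell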

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)

