The Darknet, in most people's understanding, isn't just an overlay network. It's become more or less synonymous with Tor hidden services. Previously, I was using projects like Yggdrasil and CJDNS as modules to plug BelaGOS into some form of Darknet. While that is cool, to make a cyberpunk operating system, it needs to be hittable in the "cool" Darknet. That was today's work.
I had been opting to skip this, as I assumed it was going to be a pain. However, it turns out it's extremely easy. Easier than getting things set up with Yggdrasil and IPv6. Maybe even easier than routing a normal IPv4 connection strangely. Shockingly easy. Basically, I just followed the "Anonymizing Middlebox" guide on the Tor TransparentProxy wiki. Seeing a ".onion" address in Mothra is a beautiful thing, and I plan on posting a screenshot on my website in place of the old Yggdrasil screenshot.
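For anyone following along, the middlebox setup from that guide boils down to a couple of torrc lines plus two iptables redirects. The interface name tap0 below is an assumption about the grid's network; adapt to taste:

```
## torrc additions (per the Anonymizing Middlebox / TransparentProxy guide):
#   TransPort 9040
#   DNSPort 5353
#   VirtualAddrNetworkIPv4 10.192.0.0/10
#   AutomapHostsOnResolve 1

## then redirect the grid's DNS and TCP connections into Tor:
iptables -t nat -A PREROUTING -i tap0 -p udp --dport 53 -j REDIRECT --to-ports 5353
iptables -t nat -A PREROUTING -i tap0 -p tcp --syn -j REDIRECT --to-ports 9040
```

The VirtualAddrNetworkIPv4 / AutomapHostsOnResolve pair is what lets .onion names resolve to routable fake addresses, which is why Mothra can hit them at all.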
Exposing 9p via Tor hidden services is extremely cool, and extremely laggy with some things, as you would expect. When connecting with Drawterm to an FS server over Yggdrasil, at least on my home network, it's extremely fast; none of that anonymity overhead, just the local link. When connecting over Tor, it's bouncing that connection all over the world just to hit a server a couple of feet away. GUI-based Drawterm connections are unusable, but the non-graphical connection seems fine.
"proxychains4" is needed to get drawterm to play nice with Tor. Everything seems to work except "ping". I even downloaded and installed some packages from 9front; there was no issue mounting that remote filesystem over Tor.
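For reference, the wrapping is roughly this (the .onion names are placeholders I made up, the config path varies by distro, and the flags follow the 9front drawterm):

```
# /etc/proxychains4.conf should end with Tor's SOCKS port:
#   socks5 127.0.0.1 9050

# then wrap drawterm so its connections go through Tor:
proxychains4 drawterm -G -u glenda -a authexample.onion -h fsexample.onion
```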
I plan on making Tor the default for IPv4 connections for this thing, maybe with a "clearnet" installer option for the squares. I think forcing all outbound IPv4 connections through Tor might make me feel better about opening an install to public connections. For inbound connections, I think Yggdrasil is still my choice.
2021-02-15
Not much this week as far as changes to this project. Mostly experimentation.
The biggest update is that I got X11 working on the grid (thanks to equis). Rather than have the X11 apps run on a VM inside one of the Plan 9 servers, my goal was to have the X11 app run on the main host. As most folks use linuxemu for X11 apps on Plan 9, I had to play around with some old-school X11 stuff to get it to work. But in the end, I got Firefox to pop up on a rio session with drawterm! I'll make sure to add some notes to Notes to document this for other people.
Disk encryption was another thing I played around with. The documentation around this seems extremely straightforward, so no problem there. A couple of use cases I have in mind have me running this thing on headless servers and VMs, which sometimes makes full disk encryption of the host a pain. So I want the possibility of some sort of Plan 9 disk encryption option. I'm toying with the idea of an Expect boot wrapper that reads the disk password from a named pipe on boot, and just sits and waits until it's available. That way it could be booted headlessly with disk encryption; the admin would just have to log in and send the disk password in via the named pipe, and the server would continue booting. This is probably a ways off; just a thought.
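The named-pipe part of that idea is easy to sketch. Everything here is hypothetical, not BelaGOS code: the paths are made up, and the background subshell just stands in for the admin logging in later. In real use, the key would be fed to the Plan 9 boot prompt over qemu's serial console (via Expect) instead of echoed:

```shell
#!/bin/sh
# Sketch: block at boot until an admin writes the disk key into a FIFO.
FIFO=/tmp/belagos-diskkey.fifo
rm -f "$FIFO"
mkfifo -m 600 "$FIFO"

# Simulate the admin logging in a moment later and sending the password.
( sleep 1; printf 'hunter2\n' > "$FIFO" ) &

# This read blocks until the key arrives on the pipe.
KEY=$(head -n 1 "$FIFO")
rm -f "$FIFO"
echo "unlocking disk with key of length ${#KEY}"
```

The nice property is that nothing about the key touches the command line or the process table; it only ever transits the mode-600 pipe.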
I played around a little with moving DHCP from dnsmasq to Plan 9. I don't really have a practical reason for this (in fact, I think it's best to have the network stuff set up outside of the grid, as it makes it easier to add things without rebuilding the grid). It's more just playing around to have more elements of a Level 7 grid (Expanding Your Grid).
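I haven't committed to it, but for reference, Plan 9's ip/dhcpd is driven by ndb rather than its own config file. A minimal sketch (all addresses and names below are made up) might look like:

```
# /lib/ndb/local on the server handing out leases
ipnet=belagos ip=10.0.0.0 ipmask=255.255.255.0
	ipgw=10.0.0.1
	dns=10.0.0.1
	auth=10.0.0.2
	fs=10.0.0.2

# one entry per machine, keyed on its MAC address
ip=10.0.0.3 sys=cpu1 dom=cpu1.belagos ether=525400123456
```

Then ip/dhcpd (plus ip/tftpd for the PXE side) gets started on that server, and it answers based on those entries.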
I've been toying with the idea of making each Plan 9 server (fs, auth, and cpu) its own vde network with its own IPv6 Yggdrasil address. One complication is that I haven't figured out how to create multiple tun interfaces for Yggdrasil. It seems like a cool idea, however, in that these components of the grid could be anywhere and even move around, yet still have predictable addresses. I also haven't been able to PXE boot a Plan 9 machine from an IPv6 address, for some reason.
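One avenue might be running a separate yggdrasil process per service, each with its own config file and tun device name. A sketch, which I haven't verified against vde at all:

```
# generate a per-service config, then pin its tun device name
yggdrasil -genconf > /etc/yggdrasil-fs.conf
# edit /etc/yggdrasil-fs.conf: set "IfName": "ygg-fs"
yggdrasil -useconffile /etc/yggdrasil-fs.conf &
```

Each instance gets its own keys, and therefore its own stable Yggdrasil address, which is exactly the "predictable addresses that can move around" property.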
2021-02-07
I didn't set out to work much on this project this week. However, there has been some massive progress. I'm not sure, but I might have made the first ever 9fs mount over Yggdrasil between Plan 9 machines. Maybe the first ever Plan 9 9fs mount over any kind of Darknet?
Graphically, I've added the 9front "Amber" Rio theme to the build. I've always thought amber was the coolest terminal color. Basically, I want this thing to look like something that would run on a "Grid Compass". Here is the first screenshot with this theme, where it's using Mothra to connect to a Yggdrasil search page. Truly a lot of cyberpunk going on in one screenshot: http://jgstratt.sdf.org/belagos/images/Screenshot1.png

Finally, I've added some ip6tables commands that allow all the normal Plan 9 connections (9fs, auth, and cpu) over Yggdrasil. However, I'm not sure I want them on by default, so they are commented out in the "install_darknet.sh" script.
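The actual rules live (commented out) in install_darknet.sh; a hedged sketch of the idea looks like the following. "tun0" as Yggdrasil's tunnel interface and 17019 as the rcpu port are my assumptions here; 564 and 567 are the standard 9fs and auth ports:

```
ip6tables -A INPUT -i tun0 -p tcp --dport 564   -j ACCEPT   # 9fs
ip6tables -A INPUT -i tun0 -p tcp --dport 567   -j ACCEPT   # auth
ip6tables -A INPUT -i tun0 -p tcp --dport 17019 -j ACCEPT   # rcpu
```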
2021-01-31
I wanted a quick ref for some of the extremely useful things I learned about Plan 9 and some auxiliary topics around using Plan 9, so I started this page.

This weekend I made a breakthrough. The grid build process drives qemu in curses mode with Expect to do most of the install, with Plan 9 in text mode. Curses, despite being text based, is full of hidden terminal control characters. This plays havoc with Expect, resulting in the build process failing about 20% of the time. This is because the default Plan 9 install doesn't seem to support serial consoles (i.e. "-nographic") in qemu. One benefit of this failure is that the build process is now fairly robust and should be easy to restart. However, that isn't exactly helpful to someone playing around with my project. One of my goals in life is to write as little TCL as possible, so I doubled down on getting serial consoles working on Plan 9. Rooting around in the plan9.ini documentation, it turns out it's simply a matter of adding "console=0" to the plan9.ini! This allows us to run qemu in "-nographic" mode and make our install more robust! As an added knowledge nugget, adding "*acpi=1" to the plan9.ini file will make "fshalt" power down the system. No more wacky qemu control characters in Expect! Well, not after the first one.

I worked with my Dad (who is a graphics guy) to design a logo for this thing. It includes references to Plan 9 (via Glenda), Bela Lugosi (vampire fangs), and Bauhaus (via the boxy style of the face), whose most famous song is "Bela Lugosi's Dead". I plan on building out that page and moving my updates there, like maybe having a journal with these entries.

Also, after some tweaks, I got it working on a mesh network (batman-adv based) via another one of my projects, Mesh Front (https://github.com/JonStratton/mesh-front-py). So now it's a Plan 9 grid emulated in qemu, connected to the network via a wireless mesh network, and connected from the mesh network to a Darknet via Yggdrasil!
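For reference, the two plan9.ini lines in question (assuming # comments, which plan9.ini treats as ignorable lines):

```
# enable the serial console so qemu can run with -nographic
console=0
# make fshalt actually power the machine down
*acpi=1
```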
Damn, that's a lot of cyberpunk stuff!

Behind the scenes, I pulled drawterm out of the install. I didn't like that I had to build it from source (as it's a branched version of the main drawterm with 9front auth added). Instead, I have some light RC scripts that I copy over with 9mount.

I still need to work on the GUI of this thing. I'm thinking of using one of the 9front versions of rio, like the one that is amber on black, to look more hacker-ish. Ideally, I want the interface to look like it belongs on one of the Grid Compass turret control laptops from the Aliens film.

Also, I want to smooth the interplay between BelaGOS and Mesh Front, so I'll probably be making some changes to that project. I also need to add something to deal with port forwarding over the IPv6 overlay network. One of the main goals is to say things like "Meshing my Node into the Darknet Grid!" So I guess I have to make it so the grid can be built over the IPv6 networking. I think I'm going to have to learn more about Plan 9 IPv6 networking to make that happen. If anyone knows this, or can point me to a source beyond Plan 9's networking man page, I would appreciate it.

Not sure if anyone is still curious about my personal project (a self-contained Plan 9 grid emulated on QEMU and connected to the Darknet), BelaGOS. But I reached a point this morning where I got everything in the grid to build with just one script. Some notes on useful things I have learned in the last couple of weeks: The 9front version of Drawterm is a must for interacting with 9front, as it supports an auth method the drawterm in the Debian repo doesn't. It also has a "-G" mode that allows it to run in text mode, which helps some with the install. When executing some commands with the "-c" argument (like ip/tftpd), it hangs for some reason. Drawterm mounts the local filesystem automatically (aka in the namespace) under /mnt/term.
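As a hedged example of how those pieces fit together (the addresses and script path are placeholders, and the flags follow the 9front drawterm, which may differ from other builds):

```
# headlessly run an rc script that drawterm exposes from the local
# machine at /mnt/term inside the Plan 9 namespace
drawterm -G -a 10.0.0.2 -h 10.0.0.3 -u glenda \
    -c 'rc /mnt/term/home/jon/setup.rc'
```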
It's much more robust to use the 9front drawterm to run an RC script from the local file system than QEMU + Expect for some things. Running qemu in curses mode is HORRIBLE with Expect; terminal control characters constantly interrupt the text. I HATE TCL I HATE TCL I HATE TCL. A dirty hack was to keep what Expect keys off of as short as possible. Even then, it stalls out occasionally. A much better way would be to use the "-nographic" mode with QEMU, but it doesn't seem like Plan 9 is set up for serial terminals out of the box. I am trying to do as little with Expect as possible until I can get around this.

Plan 9 seems to support IPv6 out of the box and didn't need much to get on an overlay / Darknet (Yggdrasil or CJDNS). I simply did the normal iptables stuff and advertised the router on my tap0 network with radvd. However, it seems Plan 9 still wasn't using this route by default, so I had to add a static route during the install.

These grid nodes being so interconnected makes the build process much easier in some ways and much more difficult in others. For instance, when installing the authserver or cpuserver, I need to have the fsserver running, as they PXE boot from that server. To run the cpuserver installer, I need to have both the authserver and fsserver running. It's almost like these things were not intended to be installed like this... I'm in too deep now. I've been typing ip/ping at work and at home, on Linux machines and in PowerShell. The stacks of hard drives with redundant OS installs feel like they are closing in around me. The unharnessed power of many CPUs sitting idle on my home network feels like a powder keg, waiting to go off.

This weekend's work was to get Plan 9 on the dark web. This ended up being much more work than I expected. I ended up running Yggdrasil on my host machine, adding an IPv6 address to my grid network, and using iptables. The hard part was "Router Advertisement".
In the IPv6 world, routers can advertise themselves and their ranges, and clients will auto-detect them. On Linux, "radvd" seems to be used for this. Plan 9 seems to have this ability (ip/ipconfig ra6 recvra 1), and will even complain when it sees an IPv6 network without an RA (ipconfig: recvra6: no router advs after 3 sols on /net/ether0). It took me creating an OpenBSD VM on my qemu grid network to see that, for some reason, Plan 9 didn't seem to be sending traffic to this router. In the end, I put a record in Plan 9's routing table to force all IPv6 traffic to the router address:

echo 'add :: 0:0:0:0:0:0:0:0 300:c865:30ee:904e::1' >/net/iproute

Like with my Expect scripts, this is dirty, but it moves things along for now. The next goal is to force one of those dark rio themes, so it looks all hackerish and cyberpunk.

Tonight's work was to get NVRAM working. I opted to create a 1MB plan9 disk and put the NVRAM there. I wasted some amount of time trying to figure out why my fs server and auth server weren't authenticating. It seems that after you get it set up, you need to reset glenda's password on the auth server.

Got PXE booting working for the Auth server (and CPU, terminal, etc.). It's very strange when you think about it: booting off an external server that shares a filesystem, yet is also used to authorize access. Now some things I am currently fighting with:

1. PXE booting seems to cause issues with the default location of "nvram", so I am having issues with the credentials on the Auth server living between boots. It seems like you can define a location for this in the plan9.ini file; I've just got to find a standard location on the FS.
2. Expect plays nice with curses about 85% of the time. But about 15% of the time my Expect scripts fail at the "boot" prompt due to funky curses control chars. There are some nasty Expect workarounds for use with curses that I want to avoid (as I HATE TCL). I don't think I can use qemu in console mode to talk with Plan 9 in clear text.
3. Another option would be to create a 9fs mount between the host machine and the fsserver and use that to copy stuff over to the fsserver. This was an eventual goal, but it might make the building of this stuff easier in the short term. Hell, I'm already copying all the credentials into a keepass db. Maybe I can just create the nvram on the host box (with plan 9 from userspace) and copy it over every time, or something crazy like that.

I'm working on some Expect scripts to build a Qemu VM 9front grid for a fun personal project / distro ( https://github.com/JonStratton/belagos-builder [BelaGOS. Get it? It's a pun on Bell Labs and Bela Lugosi!]).

What I was doing was creating an image from the base install and copying it into two images (one for the Auth Server and one for the FS Server). Then I was going to convert those images to Auth and FS Servers. However, it seems like a waste of disk space to have duplicate file systems, so I am trying to find more info on alternatives. Can anyone point me to simple documentation around this? Currently I've been looking at http://fqa.9front.org/fqa7.html and https://nicolasmontanaro.com/blog/9front-guide/, and they are great for converting a disk install to Auth and FS Servers. But I basically want to start with an FS server, give my Qemu runner a MAC address, and have it pull the base image and config files from the File Server. Whoops, looks like I skipped ahead. Seems to be documented well at http://fqa.9front.org/fqa6.html#6.7.1
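For completeness, the host-side radvd config mentioned above would look something like this (the interface name is a placeholder; the prefix reuses the subnet from my static-route command):

```
# /etc/radvd.conf on the host
interface tap0
{
    AdvSendAdvert on;
    prefix 300:c865:30ee:904e::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```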
2020-12-24
2020-12-22