kilo.bytesize

I have been working on Aux's burgeoning package set, nicknamed Tidepool. In developing this project I've had the opportunity to rethink many of the design decisions that other projects like Nixpkgs have made. One notable change is the use of the module system to declare packages rather than functions operating on plain values around a fixed point. These changes have wide-reaching implications for users, each of which needs to be thoughtfully considered. One such implication is overrides.

Overrides

In Nixpkgs, a handful of helpers allow for customizing function arguments and modifying the build process. However, these are fairly blunt tools, and many of the intricacies of package development require custom implementations in order to work effectively.

Firstly, each package created with callPackage is assigned a .override attribute which allows the user to call the package function again with different arguments. This feature is used today for overriding both dependencies and package configuration. Unfortunately, certain patterns like the use of pkgs as an argument in a package definition complicate this override behavior significantly. These overrides are also not discoverable unless a user is willing to read the package definition and understand what values are acceptable to provide. Often this is unclear.

Secondly, packages built with stdenv.mkDerivation come with a .overrideAttrs attribute. This attribute allows for the modification of configuration provided to stdenv.mkDerivation, letting the user modify the build steps, environment, and more. However, this tool, too, is a blunt one. Configuration provided via .overrideAttrs is typically unable to effect changes in the package's original definition. This means that any build steps, sources, or anything else that the package configures must either be used as-is or fully re-implemented. While the newer finalAttrs pattern exists to help with this somewhat, it is still far from commonplace in Nixpkgs.

Users often find themselves having to operate on a package using both of these disjointed helpers, producing strange code:

let
  myModifiedPackage = (package.override {
    dependencyA = pkgs.a;
    someSetting = true;
  }).overrideAttrs (old: {
    buildInputs = old.buildInputs ++ [ pkgs.b ];
  });
in
# ...

These functions should not be separate. In fact, there really should be a first-class, reactive solution for augmenting packages.

Submodules

A submodule is a fancy name for a part of a module configuration which is evaluated separately before having its resulting value assigned. This gives developers the ability to construct more complex structures with multiple layers of dynamic elements, many of which configure themselves automatically. The resulting value, though, remains static. Once evaluated at a location, this value must be used in full, and the underlying definitions which produced it cannot be reused elsewhere. Imagine, for a moment, that you wish to assign config.a to config.b, each of which uses the same submodule definition. What should the resulting value be for config.b? Take some time to digest this example and try to come up with the answer for what config.b.static and config.b.dynamic should be.

{ config, lib }:
let
  type = lib.types.submodule ({ config }: {
    options = {
      static = lib.options.create {
        type = lib.types.int;
        default.value = 0;
      };

      dynamic = lib.options.create {
        type = lib.types.int;
        default.value = config.static + 1;
      };
    };
  });
in
{
  options = {
    a = lib.options.create {
      inherit type;
      default.value = {};
    };

    b = lib.options.create {
      inherit type;
      default.value = {};
    };
  };

  config = {
    b = lib.modules.merge [
      config.a
      {
        static = 99;
      }
    ];
  };
}

... ... ... ... ... ... ... ... ... ... ... ...

Here is the answer:

{
  static = 99;
  dynamic = 1;
}

Did you get that right? The dynamic value was computed while config.a was evaluated, when static was still 0, so by the time the merge assigns static = 99 the result is already a plain value and dynamic is never recomputed. Even if you predicted that, is that result truly desired? I don't believe so. These dynamic pieces of configuration need to propagate in places like this, but the existing tooling in Nixpkgs does not allow for it. Instead, we need a solution for bringing the whole submodule definition along rather than just the static output.

Portables

In order for configuration at different locations to propagate dynamic behavior, the definitions of those behaviors must also be provided. To do so, an additional attribute can be added to submodules: __modules__. This name is not special; rather, its naming convention informs users that it is not intended to be manipulated directly. By passing the module definitions themselves along, we can perform a new evaluation with any additional modifications. The ergonomics of such a feature are not pleasant when handled manually, as each location must implement the functionality itself. Instead, a helper type can be used to allow for more intuitive use: lib.types.submodules.portable.

When operated on as a typical submodule, a portable submodule will propagate any configuration as a new module definition. For example, setting config.a.static = 99 will also add a definition to the __modules__ attribute for this location. Here is the resulting value of config.a after performing the assignment:

{
  __modules__ = [
    { config.static = 99; }
  ];

  config = {
    static = 99;
    dynamic = 100;
  };
}

This may solve the problem of dynamic values not evaluating, but we are now left with a new undesirable pattern. Users frequently need to override parts of configuration when performing assignment. This is performed with the convenient // operator to merge an old value with a new one. If used on a portable submodule, however, the changes will not propagate. Instead, a module definition needs to be added to __modules__ and a new execution of lib.modules.run is required.

This can be worked around by separating assignment and augmentation with lib.modules.merge to leverage the portable submodule type's merging behavior, but the solution requires what feels like boilerplate. A first-class solution is preferred and we can look to lib.extend for ideas. A similar concept can be applied to portable submodules, allowing the user to call .extend on a configuration value and have any desired modifications applied in a way that respects the type's propagation functionality. Let's rewrite the original submodule example to use portables instead.

{ config, lib }:
let
  type = lib.types.submodules.portable ({ config }: {
    options = {
      static = lib.options.create {
        type = lib.types.int;
        default.value = 0;
      };

      dynamic = lib.options.create {
        type = lib.types.int;
        default.value = config.static + 1;
      };
    };
  });
in
{
  options = {
    a = lib.options.create {
      inherit type;
      default.value = {};
    };

    b = lib.options.create {
      inherit type;
      default.value = {};
    };
  };

  config = {
    b = config.a.extend {
      static = 99;
    };
  };
}

As expected, the resulting output respects the original module definition. The value of config.b is the following (omitting __modules__).

{
  static = 99;
  dynamic = 100;
}

Implications

The simple examples given in this post hint at some of the larger benefits this technique provides. One use case already being exercised is package management in Tidepool, where it allows package definitions to be portable and easily modifiable. Other problem spaces, such as system services, can take advantage of this behavior to easily reuse module definitions when retargeting a user environment or an ephemeral development instance, whereas existing solutions must reimplement everything for each unique location.

Overall I am quite happy with the results of this experiment and the developer experience for portable submodules is rather pleasant.

A torrent of emotions and frustration has engulfed the Nix Community, however one may define it. Tensions between contributors are high, and tensions between non-contributors are high. There is no joy in such an environment. Within the last calendar year, problems bubbling beneath the surface have boiled over, breaching the previously restricted confines of individual conversations or one-off occurrences in chat rooms. Something needs to change, or so it would seem, unless the current dismal state of affairs is intended to persist. What, then, should change? Who will drive this change forward? And, perhaps most importantly, how can history be stopped from repeating itself?

Landscape

Nix, for the longest time, has operated organically. That is to say, rules and direction have only been constructed or provided once absolutely necessary. The growth of the project is beyond what one, two, or ten people could manage on their own. As such, a bureaucratic body, the NixOS Foundation, was enshrined as the entity which would oversee legal requirements for the project as well as provide direction. This north star, however, was dim at best and misleading at worst. The foundation's lack of power, or, described more bluntly, backbone, has left it appealing to few. The entity is intended to provide oversight of Nix and its related projects, but it has appeared to be mostly a vehicle for advertisement or legitimization of the businesses its members operate.

Nixpkgs has operated in much the same way as previously described. Natural growth is difficult and, throughout the accumulation of hundreds of thousands of packages, the project has certainly earned its share of scars. Importantly, the package repository sees a much larger pool of contributors and interaction than Nix itself does. The ad-hoc structure of maintainers here better represents the goals of Nix users and their methods of success. Yet these contributors are beginning to flee.

The work being done for Nix and its surrounding official projects is difficult to map. A series of “teams”, ranging from security to moderation, exist to compartmentalize the many processes and fulfill the requirements of Nix projects. Paradoxically, these teams are hardly more empowered than any other individual contributor, benefiting only from more direct modes of communication and name recognition. The management of projects and features is still wildly ad-hoc and limited significantly by the volunteer nature of these contributors. Frequent re-litigation of well-discussed issues to soothe someone's ego, rejection of reasonable contributions at the expense of a healthy ecosystem, and thinly-veiled hostility are staples of the Nix landscape, despite many refusing to admit it.

Leadership

Put simply and plainly, Nix has none. The only entity asserting its official status as the central figure of Nix refuses to fulfill this position. Frequent inaction, unsatisfactory action, and a general disconnection from the entirety of the Nix community and ecosystem seem to be pervasive within the NixOS Foundation. Lacking the conviction to decide, fully, the direction of Nix for the entire ecosystem leaves the many corners of the community to construct their own interpretations. This model is not compatible with the current structure of Nix projects. Rather, the current boys-club of a foundation continues to stifle any natural ownership that contributors would develop for the many projects they provide value to.

The figures at the helm deserve assessment and scrutiny, for without their decisions this current set of circumstances would not have precipitated. Yet malice should not be attributed directly to these individuals when naivety will suffice. The NixOS Foundation was not created to become a body of inaction or divisiveness, and its members do not participate out of a desire for division within the community. Building Nix, NixOS, and all of the related tooling is still the fundamental desire of everyone involved. For fear of detracting any further, it must be said that these people are just that: people. Treat them as such.

Should Nix survive its current cataclysm, a democratic future may present itself. Many of the troubles of today arise from a disconnect between the foundation members and the many Nix teams, contributors, and users. Providing these groups with methods for action may help prevent such problems from recurring.

Community

Communities are made up of the many. The Nix community is no exception. There is no single archetype which overlays cleanly upon every member. The myriad members express different beliefs and values, many of which are diametrically opposed. Many of these differences can be avoided or set aside as unnecessary territory, but it is inevitable for disagreements to form, and tensions with them. The actions within the Nix sphere have provoked such a disagreement, proving it had been put off for far too long. One subset of the community believes in valuing the impact their actions have upon others. Another subset believes willful ignorance towards the effects of their actions is acceptable. The two views cannot be reconciled.

The solution, however, is not one of compromise, but one of direction. Fundamentally, the community must be given a clear direction for Nix and its projects. Either it will be a project of conscientious action or one of ignorance. The limbo within which the issue currently sits is not sustainable. Such a decision must be made, and it must be made clearly and directly. Refusing to do so has allowed, and will continue to allow, malicious actors to incite more significant harm. A community devoid of people is no community.

Much of the frustration has arisen from inaction by authoritative figures. Whether inconsiderate sponsorship, conflicts of interest, harmful conduct, or a refusal to collaborate, these actions have repeatedly affirmed such positions of authority as the effective owners of Nix, with no consequence. Attempts at resolving any of these issues are deemed personal attacks with little to no validity. No action is capable of overruling this imbalance of power. With little to no recourse, the community is expected to accept every outcome.

Resolution

One must now decide: what kind of project is Nix? That it should be one which does good in the world seems hard to argue with. Mindful creation is no new feat, but it is clearly one that must be reiterated and practiced regularly, and in a cooperative manner. Still, there is no simple answer for Nix; rather, three outcomes appear possible.

  1. The internet moves on, people forget. Despite the current attrition Nix will continue, if only slowed briefly. The only lesson learned will be to explicitly refuse any acknowledgement of non-technical Nix topics. Nix will continue on its journey to becoming a product.

  2. The NixOS Foundation acknowledges its shortcomings and, in doing so, extends governance to contributors in a democratic fashion. Some will still leave, some will refuse to return, but cooperation will be the future of Nix. The events spoken of today will provide useful reference for the management of contentious issues. Nix will be created by and for the Nix Community.

  3. The NixOS Foundation refuses to admit its failures and instead asserts certain undesirable actions as acceptable. In doing so, those who disagree will have no choice but to remove themselves from the now permanently hostile environment. Nix will retain only contributors accepting of these actions. All other contributors will either leave Nix entirely or collaborate on a fork. Nix will have competition by way of those who care.

Inevitably

No member of the Nix Community enjoys its current state of affairs. The desire for a more peaceful, collaborative environment is shared. Yet, some do not see the incongruity of their views and actions with those opposing them. This piece is not likely to sway any of these parties, but still the reader is encouraged to reflect on these events and their forthcoming effects. The actions taken today will disturb the sobriety of tomorrow unless due care is taken. Perhaps we can all do better for each other.

The tools which we create and make accessible define our culture and society. With the addition of new technologies like Large Language Models, many of the same unproductive arguments continue to be repeated. This post serves as an anchor to present my rationale for acceptance of different tools and avoidance or dismissal of others.

A tool is defined by its uses, of which there are typically many. A spoon may be useful for eating soup, but it is also adequate for digging a hole in dirt. These are positive uses, uses which are entirely beneficial for all parties involved. However, these are not the only uses a tool has. Rather, any tool extends beyond its commonly recognized positive uses and into the realm of negative uses. A pencil, for example, may be a sufficient writing implement, but it can also perform violent feats. While this example may seem exaggerated, it is important to recognize that every tool is capable of a myriad of different uses, many of which are negative.

Yet these tools are in broad circulation and widely accepted to be harmless. Tools whose positive uses are either so vital or so numerous that they significantly outweigh their negative uses are deemed acceptable. The distinction is not typically simple and contains nuances regarding the effects of the tool upon the individual and society at large. Even when a tool meets these conditions, it is still often engineered to avoid negative uses where possible. An automatic nail driver has safety measures integrated to prevent its operator from accidentally impaling themself. These negative uses are well understood, considered, and acted upon.

It would not be possible to talk about tools without speaking about the inverse type: a tool whose negative uses are so numerous or so atrocious that the only acceptable course of action is prevention. To select an unfortunately divisive example: guns. The positive-to-negative tool spectrum is not one of single steps, and likewise different variations of tools fall upon this spectrum at different points. Weaponry is no exception. Designed with the express purpose of killing another living being, its negative uses often greatly exceed its positive uses. However, positive uses do exist, such as hunting, marksman competitions, and possibly safety. This is not to suggest that military-grade weaponry is acceptable to disperse within the broader population, but rather to suggest that a more restrained and considered tool can accomplish the necessary positive uses while entirely preventing prior negative uses. It is important to recognize the difference in effect that these tools impose on the world compared to others. These tools distinguish themselves by their uses, no different than positive tools.

With this foundation for considering tools in place, it is important to ask questions of new creations:

  • What are the positive uses?
  • What are the negative uses?
  • Are the positives vital?
  • Are the negatives acceptable?

I fear that many of the necessary questions are not being asked and, at worst, the refusal to accept the existence of many negative uses is used to excuse the continued production and release of tools. The time to consider impact is not only after release, but also before. One does not even need to predict the impact of AI tools now; given how long they have been available, the substantial exploitation of their many negative uses is already visible. Over time, these tools, if produced and dispersed, will shape our world by their uses in the same way that past tools have. The tools we create today will define us tomorrow; consider whether that will leave the world in a better state than it was before.

Lately I have seen history repeating itself in the software world. Bun hit the scene with a remarkable response from web developers praising the tool for fixing many long-standing issues with NodeJS. Many of these problems were discussed at length in the NodeJS community and development circles, but all reached their dead ends the same way: we can't do this, it would break the spec; we can't do this, it would be wrong; we can't do this, it would be incompatible. Seemingly impossibly, then, Bun has managed to solve many problems, such as coexisting CommonJS and ECMAScript Modules, that NodeJS claimed were not feasible.

Some History

These events are not dissimilar from those of 2014, when the io.js project officially forked off of NodeJS in order to further the project's capabilities. io.js quickly became the solution favored by developers, shipping modern JavaScript features and solving many problems that came with developing on NodeJS. After many io.js releases, and lengthy discussion, the project was finally merged back into NodeJS and officially ended as of 2016. The amount of turbulence and frustration these events caused web developers simply trying to get work done should not be understated. What was originally one promising new technology for JavaScript developers quickly became two, with the original seemingly having stalled while the newer of the two rolled in new feature after feature. Luckily for NodeJS, io.js being a fork enabled the two to reconcile their differences and merge into a single project.

The Newcomer

Bun has quickly proven it is willing to make things work without excuses and, unfortunately for NodeJS, the project is entirely separate from NodeJS internally. With nearly full compatibility with NodeJS and NPM, it is difficult to recommend a solution other than Bun right now. Offering a drop-in replacement that performs better and improves the developer experience in every meaningful way is a difficult thing to look past. It is no wonder that web developers have begun flocking to Bun. Where NodeJS typically conceded in the past, Bun seems willing to make things work:

  • CJS and ESM in the same file? We'll make it work.
  • Package installs too slow because of package locking and the install process? We'll make it work.
  • Overhauling core APIs? We'll make it work.

Expectations

NodeJS may not be so lucky this time. As an entirely separate project not based on NodeJS, Bun has no future of being merged into NodeJS in order to save the project from itself once again. Instead, we will now finally see what would have happened if io.js had not rejoined its originator. Bun has tremendous momentum thanks to its compatibility, feature set, and performance. Unless NodeJS catches up and is willing to forego its penchant for technical correctness, it does not seem likely that it will stop Bun's path to dominance.

Setting up GPU passthrough on Linux can be tedious at the best of times. NixOS makes some parts of this process easier, but still requires many manual steps as of today. This guide will cover the steps necessary to enable passthrough, create a virtual machine, install Windows, and configure passthrough and Looking Glass for your VM.

Preparation

ISO Downloads

Before doing anything, make sure you have a recent ISO of Windows 10 (or Windows 11, though you may run into some incompatibilities with drivers). You will also need a VirtIO Windows driver ISO.

BIOS Settings

To support virtualization, the relevant feature (VT-d, VT-x, or SVM) must be enabled in your BIOS. Consult your motherboard manual or explore the menus for information on how to do this.

Next, IOMMU groups must be enabled in the BIOS. This information may be difficult to find on your own; typically a web search for how to enable IOMMU on your motherboard brand will help.

Graphics Processor

Ensure that the graphics processor you want to pass through (either a second graphics card or a graphics card accompanying integrated graphics) is installed in the system. In order for it to function, you may need a “dummy” connector plugged into its graphics output. Here is an example of one such product on Amazon.

In order to select the card to pass through, you will need to know its IOMMU group and device ids. To get this information you can either run nix run github:jakehamilton/config#list-iommu or create and run the following script yourself:

#! /usr/bin/env nix-shell
#! nix-shell -i bash -p pciutils

shopt -s nullglob
# Walk every PCI device that belongs to an IOMMU group and print the group
# number alongside the lspci description (which includes the vendor:device id).
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done

Note the entries for the graphics card that you would like to pass through. For example, an AMD RX480 may appear with the following entries:

IOMMU Group 23 23:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev c7)
IOMMU Group 23 23:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]

The important information from these entries is the IOMMU group and device addresses (23:00.0 and 23:00.1) and the vendor:device ids (1002:67df and 1002:aaf0). Keep these on hand; they will be used later when working with the virtual machine configuration.

NixOS Configuration

Before creating a virtual machine, changes must be made to the system configuration. These changes will vary depending on the device you are using, but should typically require only specifying either amd or intel variants for kernel modules and options. Append or import the following configuration into your own.

{ pkgs, config, ... }:
let
  # Change this to your username.
  user = "my-user";
  # Change this to match your system's CPU.
  platform = "amd";
  # Change this to specify the IOMMU ids you wrote down earlier.
  vfioIds = [ "1002:67df" "1002:aaf0" ];
in {
  # Configure kernel options to make sure IOMMU & KVM support is on.
  boot = {
    kernelModules = [ "kvm-${platform}" "vfio_virqfd" "vfio_pci" "vfio_iommu_type1" "vfio" ];
    kernelParams = [ "${platform}_iommu=on" "${platform}_iommu=pt" "kvm.ignore_msrs=1" ];
    extraModprobeConfig = "options vfio-pci ids=${builtins.concatStringsSep "," vfioIds}";
  };

  # Add a file for looking-glass to use later. This will allow for viewing the guest VM's screen in a
  # performant way.
  systemd.tmpfiles.rules = [
      "f /dev/shm/looking-glass 0660 ${user} qemu-libvirtd -"
  ];

  # Add virt-manager and looking-glass to use later.
  environment.systemPackages = with pkgs; [
      virt-manager
      looking-glass-client
  ];

  # Enable virtualisation programs. These will be used by virt-manager to run your VM.
  virtualisation = {
     libvirtd = {
       enable = true;
       extraConfig = ''
         user="${user}"
       '';

       # Don't start any VMs automatically on boot.
       onBoot = "ignore";
       # Stop all running VMs on shutdown.
       onShutdown = "shutdown";

       qemu = {
         package = pkgs.qemu_kvm;
         ovmf.enable = true;
         verbatimConfig = ''
            namespaces = []
            user = "+${builtins.toString config.users.users.${user}.uid}"
         '';
       };
    };
  };

  users.users.${user}.extraGroups = [ "qemu-libvirtd" "libvirtd" "disk" ];
}

Once updated, run sudo nixos-rebuild switch on your configuration and reboot.

VM Creation

Start by opening the virt-manager program. Once open, make sure that it has connected to the default KVM target (qemu:///system). This should be the default and require no extra work.

Now, create a new virtual machine, selecting the Windows ISO that you've downloaded. You may configure your system's RAM and CPU resources, but leave the disk configuration for later. We will be using a specific disk setup with VirtIO support. Before finishing VM creation, select “Configure the machine before installing.”

Within the VM configuration window, make the following changes:

  • Two new disks:
    • A VirtIO disk to use as your storage. If you are using an SSD, set Cache mode to none and Discard mode to unmap to improve performance.
    • A SATA CDROM to load the VirtIO drivers ISO that you downloaded earlier.
  • PCI devices for each IOMMU id that you want to use.
  • Optionally, add a USB device to pass through a controller.
  • Select Overview and view the configuration XML. Find the memballoon entry if one exists and replace it with <memballoon model="none" /> to improve performance.
  • Configure the boot order to boot from the main disk first and Windows ISO second. Additionally, check the “Enable boot menu” option to make selecting the boot device easier in the future.

OS Installation

You may now boot the VM and begin installing Windows. When prompted to select a drive, you may not see any available if you're using a VirtIO disk. If this happens, select “Load Driver” and choose the VirtIO storage driver for your operating system version.

Windows 11 Installation

Windows 11 does not support systems without certain features like a TPM and Secure Boot. Currently, these checks can be bypassed by performing the following actions.

  • At the beginning of the install process (after clicking the first “Install Now” button), press Shift + F10 to open a console window.
  • Run the command regedit
  • Navigate to HKEY_LOCAL_MACHINE\SYSTEM\SETUP
  • Right click on Setup and select New > Key. Name it LabConfig and press enter.
  • Right click on LabConfig and select New > DWORD (32-bit) Value. Name it BypassTPMCheck and then assign it the value 1.
  • Right click on LabConfig and select New > DWORD (32-bit) Value. Name it BypassSecureBootCheck and then assign it the value 1.
  • Right click on LabConfig and select New > DWORD (32-bit) Value. Name it BypassRAMCheck and then assign it the value 1.

Now you may proceed with installation.

Drivers

SPICE

Install the SPICE drivers for your Windows system.

VirtIO

Install the VirtIO drivers if you haven't already. Note that these drivers do not currently support Windows 11.

Looking Glass

In order to install Looking Glass, you must download the matching host version for the client version that you will be using on your NixOS machine. To find your client version, run the following command:

nix-instantiate --eval -E "(import <nixpkgs> {}).looking-glass-client.version"

Download and install the appropriate host version of Looking Glass for your Windows system, taking care to get the one that matches the version you just checked.

Once installed, you will need to configure the program to run on startup. This can be done from an administrator PowerShell with either of the following commands:

# Option 1: run the host manually at logon via a scheduled task
SCHTASKS /Create /TN "Looking Glass" /SC ONLOGON /RL HIGHEST /TR 'C:\Program Files\Looking Glass (host)\looking-glass-host.exe'

# Option 2: install the host as a Windows service
C:\Program Files\Looking Glass (host)\looking-glass-host.exe InstallService

Passthrough

With the OS, drivers, and services installed on your VM, you can begin configuring virt-manager to use full device passthrough. To do so, shut down the VM and open its configuration in virt-manager.

Add the following entries to the end of the <devices> section in your system's XML.

<shmem name="looking-glass">
  <model type="ivshmem-plain"/>
  <size unit="M">32</size>
  <address type="pci" domain="0x0000" bus="0x0b" slot="0x01" function="0x0"/>
</shmem>

Change the <video> model to none in the <devices> section in your system's XML.

<video>
  <model type="none"/>
</video>

Remove <input type="tablet"> from <devices> in your system's XML.

- <input type="tablet" bus="usb">
-   <address type="usb" bus="0" port="1"/>
- </input>

You may now boot the VM and open Looking Glass. If Looking Glass does not work, I recommend connecting a display to the graphics card that you are passing through and using the output there to diagnose the issue. Sometimes Looking Glass can take a while to start or Windows will refuse to start it. A reboot of the VM may fix the issue.

Troubleshooting

Installation may be done, but the trouble isn't over. Thanks Microsoft!

Booting ISO results in PAGE_FAULT_IN_NONPAGED_AREA

This issue was resolved by correcting permissions on the ISO file. For some reason it was read-only when it should have been read-write. It was also owned inexplicably by the root group. Running sudo chown my-user:users my-windows-installer.iso and chmod g+w my-windows-installer.iso corrected the problem.

Windows fails to boot

Sometimes Windows will break itself...

No drive found

Windows isn't able to find the drive for some reason. We must boot into the installation ISO, select Repair your computer, and then choose Troubleshoot > Command Prompt.

Run the following command to list the disks currently available.

fsutil fsinfo drives

Find the drive with the VirtIO drivers (likely E:) and run the following command to install the driver for a 64-bit system.

# For Windows 10 drivers on the E: drive
drvload E:\amd64\w10\viostor.inf

With the VirtIO driver loaded, exit out of the terminal and select Troubleshoot > Startup Repair. Choose the Windows installation and let the machine reboot. Repeat the driver-loading process once the Windows prompt comes up, then select Continue to Windows this time.

In the OS, run the following commands.

# Scan system files & repair them if possible
sfc /scannow

# Repair the Windows image (more thorough)
dism /online /cleanup-image /restorehealth

# Scan & repair one last time...
sfc /scannow

Reboot the VM.

Windows 11 fails to install

Make sure that you've followed the steps in the Windows 11 Installation section of this guide to disable the installation checks. If those no longer work then you may be out of luck.

Resizing <shmem>

In order to change the size of shared memory for Looking Glass, first shut down the virtual machine. Then remove the existing shared memory file with the command sudo rm /dev/shm/looking-glass. Finally, modify the XML for the virtual machine and start it.

My experience with Go began a few years ago in a brief attempt to see what the buzz was about. I had waded into the cloud native world running multiple Kubernetes clusters and could not escape the Go ecosystem if I tried. That attempt was short-lived due to oddities with the $GOPATH environment variable and associated workspace as well as a real lack of genuine use cases on my end. I already had everything I needed and could make do with other languages at the time.

Skipping forward several years to the present, I've been working with Go for a few days in order to make use of Tailscale's client library. I have been vaguely aware of improvements in the Go language and ecosystem over that time and know that the folks who work on Tailscale are incredibly knowledgeable Go developers. For all these reasons I found myself once again ready to dive into Go, but this time I've actually taken the plunge.

Setup

The initial experience for Go this time around was dramatically improved and there was no fuss involved. Perhaps I was already saved some discomfort by knowing about $GOPATH, but Go Modules make it an easily resolved annoyance. This initial setup phase was by far the most pleasant part and I've nothing to say other than: well done. Go's out of the box experience should get full (or almost full) stars.

Language

Once I got into actually trying to build something, the many flaws of Go began to surface. My initial reactions were those of surprise and disbelief. However, those reactions aren't particularly helpful for understanding why I believe some pieces of the language aren't as good as they could be, and they certainly don't offer any solutions for improving the language. In this section, I'll attempt to provide more useful feedback and solutions. I will also point out some of the parts that I do truly like about the language, almost all of which I believe other languages would do well to adopt.

Variable Declaration

I find Go's use of the := operator to declare, and assign a value to, a new variable to be the correct decision. Removing syntax bloat while maintaining readability is a difficult battle for any language, but I believe the Walrus Operator is a reasonable solution in this case. However, I do wish that Go would double-down and support type annotations in this form as well. It is awkward to require two lines when one would do:

// Current Go
var x MyType
x = GetSuperType()

// My Preference
x: MyType = GetSuperType()

I understand this is likely due to the preference and frequency of using multiple return values in order to support Go's error handling semantics. Of all the issues I've found, this may be one of the smallest “paper cuts” in the language.
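
To make that tension concrete, here is a minimal sketch of the idiomatic multi-value pattern that := is optimized for; the lookup function is hypothetical and exists only for illustration.

package main

import (
  "errors"
  "fmt"
  "os"
)

// lookup is a hypothetical helper returning a value and an error,
// the idiomatic pair that := is designed to receive in one line.
func lookup(key string) (string, error) {
  if key == "" {
    return "", errors.New("empty key")
  }
  return "value-for-" + key, nil
}

func main() {
  // Both the result and the error are declared and assigned at once.
  // Annotating their types here would force separate var declarations.
  value, err := lookup("example")
  if err != nil {
    fmt.Fprintln(os.Stderr, err)
    return
  }
  fmt.Println(value)
}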

Implicit Return Variables

I've not seen this in any actual code and have been told to never use this feature. In which case I must ask: why is it still in the language?

func getHi() (msg string) {
	msg = "hi"
	return
}

It seems like an interesting thought experiment, which I admire and encourage language designers to explore. However, this feature is not useful, often being more harmful than convenient. I believe that Go would be better off removing this feature.

Casing-Based Visibility

Using the casing of package contents to determine visibility for consumers is another interesting design decision. At first I was mostly unsure about the idea due to other languages being more flexible with naming conventions. Now, though, I am quite happy with this decision and, while I am still acclimating, I find it makes things clear and consistent when reading code.
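
As a small sketch of how this plays out in practice (the shapes package and its functions are hypothetical), only identifiers starting with an uppercase letter are visible to importers:

// Package shapes is a hypothetical package illustrating casing-based visibility.
package shapes

// Area starts with an uppercase letter, so it is exported and usable by importers.
func Area(width, height float64) float64 {
  return width * scaled(height)
}

// scaled starts with a lowercase letter, so it is private to this package.
func scaled(height float64) float64 {
  return height
}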

The ability to map fields to custom names when transforming to JSON, making certain fields required for validation, and more are all great additions to Go's feature set without requiring much additional work.

type MyStruct struct {
  // Serialize/deserialize to/from a custom key in JSON.
  MyValue string `json:"my_value"`
}

Types

Go is in the difficult position of having to support the type syntax it's committed to, while constantly feeling the pain of worsened readability for doing so. It should be no surprise that the type syntax map[int]string is awkward and more complex types continue to get harder to read and write. Here are a few examples that are unnecessarily confusing.

// Array of one integer
_ = [1]int{0}

// *Not* an array, but a slice of integers with entirely different semantics
_ = []int{0}

// Empty struct
_ = struct{}{}

In some areas Go leans on syntax sugar to help ease these issues. I think that the language could do the same here, but other cases seem to be more fundamentally difficult. Unfortunately I don't have a good suggestion for these that wouldn't require a change in the language's grammar.

Go Routines

While there are a few small things to watch out for and mutexes can be awkward, Go Routines dramatically improve the developer experience when developing concurrent applications. So much so that I've been using them! They're overall quite pleasant and reasonable so I have few complaints about them.
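
For readers who haven't used them, here is a minimal sketch of the pattern I've been reaching for: spawning a few goroutines and waiting for them with a sync.WaitGroup (the worker logic is just a placeholder).

package main

import (
  "fmt"
  "sync"
)

func main() {
  var wg sync.WaitGroup

  // Launch a few goroutines; the WaitGroup tracks when each one finishes.
  for i := 0; i < 3; i++ {
    wg.Add(1)
    go func(n int) {
      defer wg.Done()
      fmt.Println("worker", n, "finished")
    }(i)
  }

  // Block until every goroutine has called Done.
  wg.Wait()
}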

Channels

Accompanying Go Routines, channels serve well as event buses. While the syntax can be quirky they've worked as advertised and have been equally as pleasant as Go Routines.
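
A minimal sketch of the event-bus style of usage described above; the event names are arbitrary placeholders.

package main

import "fmt"

func main() {
  // A buffered channel acts as a small event bus between goroutines.
  events := make(chan string, 2)

  go func() {
    events <- "started"
    events <- "finished"
    close(events)
  }()

  // Ranging over the channel receives values until it is closed.
  for event := range events {
    fmt.Println(event)
  }
}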

Make

At first it was unclear to me why the make function needs to exist and cannot be replaced with an initializer syntax. After further inspection I understood why the function remains.

_ = make([]int, 10)            // slice of length 10
_ = make([]int, 10, 20)        // slice of length 10, capacity 20
_ = make(map[string]int)       // empty map
_ = make(chan int)             // unbuffered channel
_ = make(chan int, 5)          // buffered channel with capacity 5

It seems like Go has backed itself into a corner where there is genuinely no other way to extend its initializers to support any amount of customization. They're all already overloaded to the point of being difficult to read in some cases and adding anything else would certainly do more harm. This is another case where the only solution I see would be a more fundamental change which would come far too late for Go.

Tags

Go's support for type metadata is something that all languages should learn from. The syntax works well in Go, allowing struct fields to be tagged with additional information that can be used later in unique situations. I cannot stress enough how useful this feature is and how simple the implementation ends up being. In hindsight, this is a feature that should have been in other languages for years.
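
As a small sketch of what those tags buy you (the Config struct and its field are hypothetical), encoding/json reads the tag at runtime to pick the JSON key:

package main

import (
  "encoding/json"
  "fmt"
)

// Config is a hypothetical struct whose field is tagged with a custom JSON key.
type Config struct {
  MyValue string `json:"my_value"`
}

func main() {
  // json.Marshal reads the struct tag via reflection and emits "my_value".
  out, err := json.Marshal(Config{MyValue: "hello"})
  if err != nil {
    panic(err)
  }
  fmt.Println(string(out)) // {"my_value":"hello"}
}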

Nested Block Comments

Let's lower the stakes again for a moment. While mostly a nitpick, support for nested block comments should be featured in all languages, especially new ones. It is a small thing that goes a long way to improving the developer experience.

Implicit Imports

Go's pattern of globbing together files to put everything on the package scope is an antipattern. This is a strong stance, but one I've learned from having worked with code that did just that. It quickly becomes impossible to know where anything comes from. Instead, imports should be explicit, requiring named members or a namespace to place their contents under. Without explicit imports, any amount of Go code quickly becomes difficult for someone else to read.
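
To illustrate, consider two hypothetical files in the same package; nothing in the second file tells a reader where connect is defined.

// store.go
package store

// connect is defined in this file...
func connect() string {
  return "connected"
}

// query.go (a sibling file in the same package)
package store

// ...but is used here with no import or qualifier, so a reader of this file
// alone has no way of knowing where connect comes from.
func Query() string {
  return connect()
}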

Go constrains a package's scope to a directory so the damage is not as bad as it could be, but I do not see this as a solution. Rather, a more fundamental redesign of the package import system would be needed to resolve this problem.

Function Signatures

Functions in Go have become unwieldy, with five separate grouped sections:

  • Method receiver
  • Generic type parameters
  • Function parameters
  • Return type
  • Function body

Here is an example of a function signature exercising all of these sections at once. (Strictly speaking, Go does not allow type parameters on methods, so real code would have to split this up, but the shape is representative of what you find in projects.)

func (m *MyThing) Act[N int64 | float64](num N) (N, error) {
  // finally do something
}

What's worse is that this example is on the simpler side. With more complex types and more parameters things grow quite quickly. It becomes far more difficult than necessary to read these function signatures. Without altering its syntax, however, there is no better solution.

Single Character Variable Names

This is more of an ecosystem problem in Go than a language one, but it deserves a mention. Single character names are often disallowed and actively linted against in other languages due to the cognitive overhead they introduce when trying to read code. Go developers seem perfectly happy to ignore this advice and use single character names for most things. Other shortened or abbreviated names are no better and should be actively discouraged. Go will not let you compile your application if you have an unused variable, but it is happy to compile if all of your variables are unreadable. It seems like there is more that could be done to ensure Go programs are maintainable over time in addition to being quick to develop.

Packages

Time Format

Go's time package made the decision to use a pre-defined reference date as the templating values for parsing and formatting. This choice is no less arbitrary than strftime and only upsets the law of least surprise by providing yet another flawed solution.

Go's time package uses the date Jan 2 03:04:05 PM 2006 MST as template parts in order to format or parse other dates. These values were selected by counting upwards: 01 02 03:04:05 06 MST (MST is GMT-7). Ideally this would disambiguate which numbers correspond to which values. In practice it is just as confusing (or more) than strftime. Immediately the first value being the Month throws off developers outside the US as well as a good chunk of US developers who use other formats (typically ISO-8601). Why should we prefer time.Parse("01, 2006", myDate) over something like time.Parse("mm, yyyy", myDate)? I believe that we shouldn't.
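
For reference, here is a minimal, runnable sketch of the reference-date layout in action; the values being parsed and formatted are arbitrary.

package main

import (
  "fmt"
  "time"
)

func main() {
  // The layout string is the reference date (Jan 2 15:04:05 2006 MST) written
  // in the shape you want: "01" is the zero-padded month, "2006" is the year.
  t, err := time.Parse("01, 2006", "07, 2024")
  if err != nil {
    panic(err)
  }

  // Formatting uses the same reference-date layout.
  fmt.Println(t.Format("2006-01-02")) // 2024-07-01
}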

Instead, either a syntax consistent with other languages should be used or the Go alternative should remove its arbitrariness altogether and parse templates in another way like the following.

time.Parse("<padded-month-number>, <full-year-number>")

This problem is made more confusing by Go's fmt helpers, which do offer functionality similar to other languages and make use of template tokens like %s, %d, etc. This is not a foreign concept for Go, and I think that strftime-formatted templates would feel right at home in the language.

Closing Thoughts

In my experience with Go I've seen a pattern begin to emerge. You've likely already spotted it as well: many of these issues would require fundamental changes to Go in order to fix. Down to the syntax, Go suffers from a number of consequences of its design decisions. Almost all of these problems seem to be self-inflicted and unrecoverable without committing to a major release which is incompatible with the entire current ecosystem. Some issues may be solved or given band-aid solutions, but the majority seem to be here to stay. Even for the issues that I believe could be fixed, there seems to be a consensus that these aren't problems and that Go is designed perfectly. Without a large amount of effort and willingness to change, I don't see Go evolving into the language that it could be. The language provides several unique features and possibilities, but they're currently squandered by the consequences of early design decisions.

Finally, I know that creating a language is hard. Designing it to be perfect is near impossible, and actually hitting that mark is even more difficult. I want Go to be better; I think it has a lot of potential, and I can see myself falling in love with the language if it continues to improve. With its large following, the language is in a unique place to effect positive change for many developers, new and old.


P.S. On the off chance that someone reading this works on Go, I'd like you to know that I've appreciated the work you've done, and I hope that you're given the opportunity and support to make the language even better.