Just a basic programmer living in California

  • 5 Posts
  • 256 Comments
Joined 2 years ago
Cake day: February 23rd, 2024


  • I’m not an organizer for this community. But I also find the Quikscript literature compelling. One advantage of Shavian, though, is that it has an established Unicode assignment, and corresponding fonts are in circulation. For example, Shavian text renders correctly for me running the Thunder Lemmy client on Android without any special setup.

    The main criticism I’ve read of Shavian comes down to accommodating dialect differences. How you write "R"s and vowels is a particular sticking point. You kinda have to pick one dialect to canonicalize in spelling. But I think that applies to all phonetic alphabets - unless someone has come up with some very clever system of per-dialect glyph interpretation rules that I’m not aware of.






  • Not OP, but I’ve been using Niri as my daily driver for almost two years (since v0.1.2). The stability and polish have really impressed me. In addition to the scrolling workflow it has some especially nice features for screen sharing & capturing, like key binds to quickly switch which window you are sharing, and customizable rules to block certain windows when showing your whole desktop.

    I do use a 40" ultrawide. Looking for options for getting the most out of an ultrawide was how I got into scrolling window managers.

    I only occasionally use my 13" laptop display. I still like scrolling because I like spatial navigation. Even if windows end up mostly or entirely off the screen I still think about my windows in terms of whether they’re left, right, up, or down from where I’m currently looking.

    I don’t like traditional tiling as much because I find it awkward to squish every window fully into view; and with e.g. i3-style WMs, if I want to stash a window out of view, say in a tab, that’s a separate metaphor I have to keep track of, with another axis where windows might be. Scrolling consistently uses one spatial metaphor, placing all windows on one 2D plane with one coordinate system.


  • Home Manager is a Nix tool for managing configuration for a single user, usually on a Linux or macOS system, or possibly WSL. You configure which programs are installed, program configuration (such as dotfiles), and a number of other things, and you get a reproducible environment that’s easy to apply to multiple machines, to roll back, etc. I find it helpful for having a clear record of how everything is set up. It’s the sort of thing that people sometimes use GNU Stow or Ansible for, but it’s much more powerful.

    A Home Manager configuration is very similar to a NixOS configuration, except that NixOS configures the entire system instead of just user-level stuff. (The lines do blur because, unlike traditional package managers where packages are installed at the system level, Nix packages can be installed at the system, user, project, or shell-session level.) Home Manager is often paired with NixOS, or on Macs with nix-darwin. As I mentioned, the Home Manager portion of my config is portable to OSes other than NixOS. In my case I’m sharing it with another Linux distro, but you can also use Home Manager to share configurations between Linux, macOS, and WSL.
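    To give a flavor, a minimal Home Manager module looks something like this (the username, paths, and package choices here are placeholders, not my actual config):

    ```nix
    { pkgs, ... }:
    {
      # Placeholder identity - set these to your own user and home path
      home.username = "alice";
      home.homeDirectory = "/home/alice";
      home.stateVersion = "24.05";

      # Packages installed into the user profile, not system-wide
      home.packages = with pkgs; [ ripgrep fd ];

      # Program config managed declaratively instead of hand-edited dotfiles
      programs.git = {
        enable = true;
        userName = "Alice Example";
        userEmail = "alice@example.com";
      };
    }
    ```

    Applying the config (e.g. `home-manager switch`) builds the whole user environment from this one declaration, which is what makes it reproducible across machines.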



    • NixOS + Home Manager
    • Niri
    • Kitty
    • Neovim, via Neovide

    For work it’s Fedora + Home Manager because the remote admin software doesn’t support NixOS. Thankfully I’ve been able to define my dev environment almost fully in a Home Manager config that I can use at work and at home.

    I use lots of Neovim plugins. Beyond the basic LSP and completion plugins, some of my indispensables are:

    • Leap for in-buffer navigation & remote text copying
    • Oil for file management
    • Fugitive + Git Signs + gv.vim + diffview.nvim for git integration
    • nvim-surround to add/change/remove delimiters
    • vim-auto-save
    • kitty-scrollback


  • Out of sheer curiosity I checked. 18 USC § 921(a)(16) defines “antique firearm” for purposes of crimes and criminal procedure. The term “firearm” is defined in 18 USC § 921(a)(3), which includes the text, “Such term does not include an antique firearm.” (source)

    It’s perplexing because the “antique firearm” definition has numerous references to “firearm”. The (A) and (B) parts include or reference the text, “any firearm (including any firearm with a matchlock, flintlock, percussion cap, or similar type of ignition system) …”.

    So it looks like antique firearms are an instance of Russell’s Paradox. I guess a flintlock is not not a firearm. Paradox-resolving powers must be one of those things you need law school for.





  • Also the Social Security Administration, despite being a huge operation, runs with less than 1% overhead. And they get those checks out month after month. Medicare’s overhead is under 2%, compared to an average of 12% for private insurance, and polls seem to show people are more satisfied with Medicare than with private insurance.

    I know the complaint that government is ineffective and inefficient is a classic - but it makes me wonder which programs that refers to. Maybe something in the Defense Department?





  • I use a chat interface as a research tool when there’s something I don’t know how to do, like writing a relationship with custom conditions in SQLAlchemy, or when I want to clarify my understanding of something. First I do a Kagi search. If I don’t find what I’m looking for on Stack Overflow or in library docs within a few minutes, then I turn to the AI.
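    As a concrete example, that SQLAlchemy question ends up looking roughly like this (the models and the “only active children” condition are invented for illustration; this is a sketch against SQLAlchemy 2.x, not the actual code I wrote):

    ```python
    from sqlalchemy import ForeignKey, create_engine
    from sqlalchemy.orm import (
        DeclarativeBase, Mapped, mapped_column, relationship, Session
    )

    class Base(DeclarativeBase):
        pass

    class Parent(Base):
        __tablename__ = "parent"
        id: Mapped[int] = mapped_column(primary_key=True)
        # Custom join condition: match the FK *and* require active=True,
        # so this relationship only loads active children.
        active_children: Mapped[list["Child"]] = relationship(
            primaryjoin="and_(Parent.id == Child.parent_id, Child.active == True)",
            viewonly=True,
        )

    class Child(Base):
        __tablename__ = "child"
        id: Mapped[int] = mapped_column(primary_key=True)
        parent_id: Mapped[int] = mapped_column(ForeignKey("parent.id"))
        active: Mapped[bool] = mapped_column(default=True)

    engine = create_engine("sqlite://")  # in-memory DB for the demo
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        session.add_all([
            Parent(id=1),
            Child(id=1, parent_id=1, active=True),
            Child(id=2, parent_id=1, active=False),
        ])
        session.commit()
        # Only the active child comes back through the relationship
        print(len(session.get(Parent, 1).active_children))
    ```

    The `viewonly=True` is the detail I needed the docs (and the chat) for: with extra criteria in `primaryjoin`, the relationship can’t safely be used for persistence, only reads.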

    I don’t use autocompletion - I stick with LSP completions.

    I do consider environmental damage. There are a few things I do to try to reduce it:

    1. Search first
    2. Search my chat history for a question I’ve already asked instead of asking it again.
    3. Start a new chat thread for each question that doesn’t follow a question I’ve already asked.

    On the third point, my understanding is that when you write a message in an LLM chat all previous messages in the thread are processed by the LLM again so it has context to respond to the new message. (It’s possible some providers are caching that context instead of replaying chat history, but I’m not counting on that.) My thinking is that by starting new threads I’m saving resources that would have been used replaying a long chat history.
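    The replay effect compounds: if every turn re-sends the whole history, total input tokens grow quadratically with thread length. A toy back-of-envelope calculation (the fixed tokens-per-message figure is an arbitrary assumption for illustration):

    ```python
    def tokens_processed(num_messages: int, tokens_per_message: int = 500) -> int:
        """Total input tokens over a thread, assuming each new message
        replays the entire prior history (no provider-side caching)."""
        # Turn k re-sends messages 1..k, so the total is 1 + 2 + ... + n turns' worth
        return tokens_per_message * num_messages * (num_messages + 1) // 2

    # One 10-question thread vs. ten separate single-question threads
    one_thread = tokens_processed(10)       # 500 * (10 * 11 / 2) = 27,500
    ten_threads = 10 * tokens_processed(1)  # 10 * 500 = 5,000
    print(one_thread, ten_threads)
    ```

    Even with this crude model, splitting unrelated questions into fresh threads cuts the replayed tokens by a large factor.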

    I use Claude 4.5.

    I ask general questions about how to do things. It’s most helpful with languages and libraries I don’t have a lot of experience with. I usually either check docs to verify what the LLM tells me, or verify by testing. Sometimes I ask for narrowly scoped code reviews, like “does this refactored function behave equivalently to the original” or “how could I rewrite this snippet to do this other thing” (with the relevant functions and types pasted into the chat).

    My company also uses Code Rabbit AI for code reviews. It doesn’t replace human reviewers, and my employer doesn’t expect it to. But it is quite helpful, especially with languages and libraries that I don’t have a lot of experience with - though it probably consumes a lot more tokens than my chat-thread research does.