I'm a full-stack web developer, and this is my blog. Please connect with me on LinkedIn or visit my GitHub for more! Also, you may be interested in learning more about me.

Projects

  • Why I Switched to git switch

    Git is too hard.

When I first learned[1] git, I learned that to stop working on the current branch and start working on another, the command was git checkout branchname. Or, to create a new branch, git checkout -b branchname.

    All the way back in 2019, though, the git team released git switch, which is supposed to replace git checkout. (Don’t worry, checkout isn’t going anywhere.) And finally, this year, I retrained my muscle memory to use git switch instead of checkout. Why is switch better?

1. checkout tries to do too many things. git checkout <branchname> is valid, as is git checkout <hash>, as is git checkout -- path/to/file. These all do different things. Checking out a branch means “start working on this branch.” Checking out a commit hash puts you in “detached HEAD” state, the source of many a developer’s footgun.[2] Checking out a file reverts it to a previous state (usually the state of the last commit).

      I’ve used all these use cases! Usually on purpose! But you have to admit it’s kinda confusing.

      Also, if you have a file and a branch that share the same name, git has to decide which one you meant when you use checkout. I’ve never come across this particular collision myself, but I can imagine there are a non-zero number of git branches called readme out there, which could lead to really unexpected results if you just typed git checkout readme without looking closely at the output.

    2. The way to create a new branch with switch is to use the -c flag, which means “create.” The way to create a new branch with checkout is to use the -b flag, which means “branch,” which is a tautology. (See the cheat sheet just below.)
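
    For reference, here’s the whole mapping in one place (the branch names, hash, and file path are all placeholders; git restore is the other half of the checkout split):

    git checkout main           →  git switch main
    git checkout -b feature     →  git switch -c feature
    git checkout abc1234        →  git switch --detach abc1234
    git checkout -- some/file   →  git restore some/file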

    Basically, as I understand it, the git team felt that checkout did too many things and was confusing for new users. All good reasons to split the command into two.

    Most important to me, however, is git switch -, which switches back to the previous branch, similar to other Unix-y commands, such as cd -, which takes you back to your previous directory.
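
    A quick demo of the round trip (branch name made up):

    git switch my-feature    # hop over to the feature branch
    git switch -             # hop right back to wherever you were

    Same mental model as cd -: toggle between your two most recent locations.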

    As always, saving like two keystrokes is the only thing I care about. Efficiency!

    1. Nobody has learned git. We all just type random things into our terminal until it works or we have to do a git push --force.

    2. I still don’t fully understand what a detached HEAD is. 

  • TIL: Cleaner Log Output When Using 'Concurrently'

    [Image: People’s legs and feet on a racetrack, about to begin running at the same time.]

    If you’ve ever used the package concurrently to run more than one thing at the same time while developing, you’ve probably seen how the logs get a little jumbled together. By default, all output is logged to the console with a number identifying the process that created it. As an example, here’s a Vite frontend plus a toy websocket server on the backend that logs every message received from the frontend.
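
    Started with a command along these lines (the server path is my assumption), the combined output looks like:

    concurrently 'vite' 'node src/server.js'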

    [0]   VITE v7.3.0  ready in 220 ms
    [0] 
    [0]   ➜  Local:   http://localhost:5174/
    [0]   ➜  Network: use --host to expose
    [1] received: { 'message': 'hi' }
    

    This is…fine? But it’s easy to miss the [0] or [1]; it’s just one number, and honestly, if you’re not looking for it, it just sort of fades into the background.

    But by passing in the --names flag, you can call your processes anything you want.

    concurrently --names frontend,backend 'vite' 'node src/server.js' turns the above output into:

    [frontend]   VITE v7.3.0  ready in 220 ms
    [frontend] 
    [frontend]   ➜  Local:   http://localhost:5174/
    [frontend]   ➜  Network: use --host to expose
    [backend] received: { 'message': 'hi' }
    

    And of course you can use more descriptive names, although these are plenty for me.
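
    If you don’t want to remember the flag every time, the whole command can live in a package.json script (the names and paths here are just my example):

    {
      "scripts": {
        "dev": "concurrently --names frontend,backend 'vite' 'node src/server.js'"
      }
    }

    Then npm run dev starts both processes with the friendly prefixes.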

    I wasn’t able to find official documentation for this flag, although there is documentation on how to interact programmatically with concurrently and pass in a names option. I’m not sure how to incorporate that into a development pipeline, but that’s a me problem. For now, the command-line flag is all I need.
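
    For what it’s worth, my read of those programmatic docs suggests the equivalent looks roughly like this (an untested sketch; older versions export the function directly rather than as a named export):

    // run-dev.js – rough sketch of concurrently's programmatic API
    const { concurrently } = require('concurrently');

    concurrently([
      { command: 'vite', name: 'frontend' },
      { command: 'node src/server.js', name: 'backend' },
    ]);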

  • Making This Blog Even Faster With Speculation Rules

    [Image: A man on a bike going so fast that the background is just a series of blurred lines.]

    Browsing the HTMHell Advent Calendar, I learned about a completely new-to-me browser API called “speculation rules.” This poorly named (according to me) feature allows browsers to prefetch or even pre-render content speculatively, basically predicting what a user is going to click on. Currently, the feature is available in Chromium-based browsers; Safari and Firefox are working on it, and in the meantime, including it doesn’t harm the experience for users of those browsers.

    In its most basic form, adding the following to a page:

    <script type="speculationrules">
    {
      "prerender": [{
        "where": { "href_matches": "/*" },
        "eagerness": "moderate"
      }]
    }
    </script>
    

    is all it takes to pre-render the destination link when a user hovers their mouse over it, making the load almost instantaneous from the user’s perspective.

    The author of the blog post, Barry Pollard, who works on Chrome at Google, goes on to explain some of the quirks of speculationrules. For example, how do you handle mobile users, where “hover” isn’t a thing? What about Javascript that you don’t want to execute before the page is actually viewed? (What about analytics where you don’t want to count page views before the user actually looks at the page?) These are problems that are “actively being worked on,” which means not solved just yet. I would not use this feature on a production site where I cared about any of those things.

    This is a progressive enhancement over <link rel="prefetch">, which is more widely supported. But the old way seems harder to implement: at least by my read of MDN, I would have to decide exactly which links a user is most likely to load next on each page, whereas the Speculation Rules API decides for me based on the user’s actions.
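
    For comparison, the older approach means hand-picking the likely next page, something like this (the URL is just an example), repeated for every page and every guess:

    <link rel="prefetch" href="/posts/git-switch/">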

    To sum up, this is a pretty neat option that is definitely not yet ready for use on all sites. There are a lot of questions that need to be answered. But in the meantime, I think speculation rules are a perfect fit for a static, no-JS site like this blog. Which is already pretty fast, by dint of it being a static, no-JS site. 🤷

    If you’re using Chrome and want to check out the new behavior for yourself, simply open Devtools, navigate to the Network tab, then hover over any link on this site. You’ll see a little blip of activity before you click. Neat!

  • Webpack 201

    [Image: An orange cat sitting inside a box, as cats do. This cat is now bundled for production.]

    In 2024, I wrote about learning the basics of Webpack when I realized it was a tool that I used almost daily without thinking about it.

    Now, I’m working on a Firefox extension that is going to depend on a third-party library. How on earth do you get third-party packages into a Firefox extension?

    The answer is, again, Webpack. But unlike with React-based projects, where Webpack is often invisibly part of the build and deploy process, we are going to have to manually configure Webpack ourselves for this extension.

    The problem

    I want to include third-party Javascript in my extension. As we learned from my previous post, it makes developers’ lives easier to be able to require or import external scripts, but the browser doesn’t know how to find those scripts on a filesystem. Or even how to access a filesystem. So we need to use Webpack to turn those external dependencies into internal code that the browser knows how to use.

    Yes, this is overly simplified.
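
    A one-line illustration of the core problem (the module name is made up):

    import { thing } from 'some-library'; // Node and Webpack resolve this via node_modules; a browser has no idea where 'some-library' lives

    Webpack’s job is to chase down that import and inline the code into the bundle.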

    As a concrete example, my extension’s structure looks like this:

    .
    ├── db.js
    ├── node_modules
    ├── package.json
    ├── package-lock.json
    ├── readme.md
    ├── index.js
    └── webpack.config.js
    
    

    Note that there are multiple .js files in the root of the extension, plus a node_modules folder. Any time I write import {thing} from 'thing' in my code, whether I’m talking about code I created or a module I installed, my local dev environment knows how to resolve those imports, but a browser environment wouldn’t – hence the need for a build tool like Webpack. (Note: this is overly simplified; I read Lea Verou’s writing on this topic and, much like the post below, it broke my brain.)

    The footnote

    There are a zillion other ways to get Javascript libraries working without a build system. I read about a number of them on Julia Evans’s blog, but the outcome of reading that blog post is I realized I just don’t know enough about Javascript to understand all these options. I’ve set up time with a senior engineer at work to learn more, which is exciting in a very nerdy way.

    I can say for sure that one of the alternatives is, depending on how the module is written, to “just stick it in a <script src> tag,” which would be very simple except that Firefox extensions don’t use HTML.[1]

    There are other options (including importing modules directly from CDNs??), but let’s assume for this project we just want to use Webpack.

    The solution: roll your own (simple) webpack config

    First, we need a package.json file to define our dependencies and build tooling. Actually, as I write this, I don’t know for sure if this step is 100% necessary, but it makes things easier. I ran npm init from my extension’s base folder and a wizard walked me through creating a package.json. Super easy!

    I then modified my package.json to look like this:

    {
      "name": "linkedin-extension",
      "version": "1.0.0",
      ...
      "scripts": {
        "build": "webpack"
      },
      "author": "rachel kaufman",
      "devDependencies": {
        "webpack": "^5.74.0",
        "webpack-cli": "^4.10.0"
      }
    }
    

    Now, when we run npm i, the two Webpack packages are installed as dev dependencies, and when we run npm run build, Webpack runs.

    We then define a webpack.config.js file:

    
    const path = require("path");
    
    module.exports = {
        // Each entry becomes its own bundle.
        entry: {
            whatever: "./whatever.js"
        },
        output: {
            // Bundles land in the addon/ directory...
            path: path.resolve(__dirname, "addon"),
            // ...named addon/<entry name>/index.js, e.g. addon/whatever/index.js.
            filename: "[name]/index.js"
        },
        // 'none' skips Webpack's built-in optimizations, keeping output readable.
        mode: 'none',
    };
    
    

    This defines an entrypoint of whatever.js, meaning when we run npm run build, Webpack will look at that code, resolve any imports in it recursively, and output new, bundled files to the addon directory. Specifically, to addon/whatever/index.js. We then refer to those built files instead of the source files when running or testing our extension.
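
    “Referring to the built files” here means pointing the extension’s manifest at the bundle rather than the source. A rough sketch (other manifest fields trimmed, and the match pattern is just an example):

    {
      "content_scripts": [{
        "matches": ["*://*.example.com/*"],
        "js": ["addon/whatever/index.js"]
      }]
    }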

    This was surprisingly easy, thanks to MDN’s great docs.

    What does the extension do? Can’t wait to tell you about it next time.

    Footnotes

    1. Okay, they do, but I’m talking about content scripts, which allow us to load .js or .css files.