I'm a full-stack web developer, and this is my blog. Please connect with me on LinkedIn or visit my Github for more! Also, you may be interested in learning more about me.

Projects

  • Webpack 201

    An orange cat sitting inside a box, as cats do. This cat is now bundled for production.

    In 2024, I wrote about learning the basics of Webpack when I realized it was a tool that I used almost daily without thinking about it.

    Now, I’m working on a Firefox extension that is going to depend on a third-party library. How on earth do you get third-party packages into a Firefox extension?

    The answer is, again, Webpack. But unlike with React-based projects, where Webpack is often invisibly part of the build and deploy process, we are going to have to manually configure Webpack ourselves for this extension.

    The problem

    I want to include third-party Javascript in my extension. As we learned from my previous post, it makes developers’ lives easier to be able to require or import external scripts, but the browser doesn’t know how to find those scripts on a filesystem. Or even how to access a filesystem. So we need to use Webpack to turn those external dependencies into internal code that the browser knows how to use.

    Yes, this is overly simplified.

    As a concrete example, my extension’s structure looks like this:

    .
    ├── db.js
    ├── node_modules
    ├── package.json
    ├── package-lock.json
    ├── readme.md
    ├── index.js
    └── webpack.config.js
    
    

    Note that there are multiple .js files in the root of the extension, plus a node_modules folder. Any time I write import {thing} from 'thing' in my code, whether I’m talking about code I created or a module I installed, my local dev environment knows how to resolve those imports, but a browser environment wouldn’t – hence the need for a build tool like Webpack. (Note: this is overly simplified and I read Lea Verou’s writing on this topic and, much like the post below, it broke my brain.)

    The footnote

    There are a zillion other ways to get Javascript libraries working without a build system. I read about a number of them on Julia Evans’s blog, but the outcome of reading that blog post is I realized I just don’t know enough about Javascript to understand all these options. I’ve set up time with a senior engineer at work to learn more, which is exciting in a very nerdy way.

    I can say for sure that one of the alternatives is, depending on how the module is written, “just stick it in a <script src> tag,” which would be very simple except that Firefox extensions don’t use HTML.1

    There are other options (including importing modules directly from CDNs??) but let’s assume for this project we just want to use Webpack.

    The solution: roll your own (simple) webpack config

    First, we need a package.json file to define our dependencies and build tooling. Actually, as I write this, I don’t know for sure if this step is 100% necessary, but it makes things easier. I ran npm init from my extension’s base folder and a wizard walked me through creating a package.json. Super easy!

    I then modified my package.json to look like this:

    {
      "name": "linkedin-extension",
      "version": "1.0.0",
      ...
      "scripts": {
        "build": "webpack"
      },
      "author": "rachel kaufman",
      "devDependencies": {
        "webpack": "^5.74.0",
        "webpack-cli": "^4.10.0"
      }
    }
    
    

    Now when we run npm i, the two Webpack tools are installed as dev dependencies, and when we run npm run build, Webpack runs.

    We then define a webpack.config.js file:

    
    const path = require("path");
    
    module.exports = {
        entry: {
            whatever: "./whatever.js"
        },
        output: {
            path: path.resolve(__dirname, "addon"),
            filename: "[name]/index.js"
        },
        mode: 'none',
    };
    
    

    This defines an entrypoint of whatever.js, meaning when we run npm run build, Webpack will look at that code, resolve any imports in it recursively, and output new, bundled files to the addon directory. Specifically, to addon/whatever/index.js. We then refer to those built files instead of the source files when running or testing our extension.
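Concretely, the extension’s manifest then points at the bundled output instead of the source file. A hypothetical manifest.json fragment (the real extension’s manifest isn’t shown in this post, so the name and match pattern here are invented):

```json
{
  "manifest_version": 2,
  "name": "linkedin-extension",
  "version": "1.0.0",
  "content_scripts": [
    {
      "matches": ["https://www.linkedin.com/*"],
      "js": ["addon/whatever/index.js"]
    }
  ]
}
```

The key point is that `js` references the built file in addon/, not the unbundled source at the repo root.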

    This was surprisingly easy, thanks to MDN’s great docs.

    What does the extension do? Can’t wait to tell you about it next time.

    Further reading

    Footnotes

    1. Okay, they do, but I’m talking about content scripts, which let us load .js or .css files directly.

  • I Created Custom Procedurally Generated Truchet-Tiled Open Graph Images for This Blog

    The four basic Truchet tiles are just squares with a diagonal line drawn through the middle. Half of the square is colored black and the other half is colored white.

    I love Truchet tiles, which are square tiles that form interesting patterns when you tile them on the plane. The idea that some basic shapes, like the triangles above, can form elaborate emergent patterns when tiled in interesting combinations fits in nicely with my interests in quilting and drawing geometric abstract shapes, both things I do in my spare time.

    I recently rediscovered the Truchet tiles by Christopher Carlson; he is also a mathy quilter to some extent, but I rediscovered his work thanks to Alex Chan, who blogged about recreating the Carlson tiles in SVG in order to use them as blog headers. That tripped something in my brain, and I remembered reading Cassidy Williams’s post about generating custom open graph images last year, and obviously I needed to smash these things together.

    The result is a custom image for every blog post that is used in the og:image tag in its header, which is what controls how posts are previewed when shared on social media, within Slack, etc. Each image has a procedurally generated tiled background unique to it alone1, plus the title of the post and my name. Here’s what this post’s image looks like:

    An orange and yellow Truchet-tiled design fills the background of this image. The foreground text says the title of the post: "I Created Custom Procedurally Generated Truchet-Tiled Open Graph Images for This Blog".

    So now I’ve covered why I built this (I got nerdsniped over winter break), but how did I do it? Read on to hear about that.

    I will add the disclaimer that there are probably much easier ways to achieve the same end result. Somehow I just got hung up on “do the thing Alex Chan did and then combine it with the thing Cassidy Williams did” and that was the architecture I ended up following. I’m curious how others would implement this while starting from scratch – please reach out!

    Creating the template

    Here I basically followed Alex Chan’s pseudocode, but as I don’t think they were writing with the intention of someone coming along and wholesale lifting (er, borrowing) it into their own project (fair), I did have to do a lot of tweaking. Essentially, you define a set of base tiles as SVG that can be used to create all the shapes in the set, then you define all the shapes in the set, and then you use Javascript to randomly pick a tile from the bag and place it in your image:

    tilePositions.forEach(c => {
      const tileName = tileTypes[Math.floor(Math.random() * tileTypes.length)];
    
      svg.innerHTML += `
        <use
          href="#${tileName}"
          x="${c.x}"
          y="${c.y}"
          transform="translate(${padding} ${padding}) scale(5)"/>`;
    });
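The tilePositions array isn’t shown in the post; generating it is just a nested loop over grid coordinates. A minimal sketch (hypothetical helper name, assuming a fixed tile size):

```javascript
// Hypothetical helper, not the post's actual code: produce an {x, y}
// coordinate for every cell in a cols × rows grid of fixed-size tiles.
function makeTilePositions(cols, rows, tileSize) {
  const positions = [];
  for (let row = 0; row < rows; row++) {
    for (let col = 0; col < cols; col++) {
      positions.push({ x: col * tileSize, y: row * tileSize });
    }
  }
  return positions;
}
```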
    

    In Chan’s initial implementation (as well as Carlson’s) there’s also the complexity of the Truchet tiles working at multiple scales. While this is the coolest part of the original project, mathematically speaking, I a) didn’t love the look of the smaller tiles and b) couldn’t figure out the fiddly padding, even with Chan’s pseudocode, so mine is just a single layer of tiles of a single size. I dumped all this into a single HTML file that lives on my computer.

    Once I had the background, I needed to add some text. There are many approaches to this, but I decided to add a query param to the local HTML file which would take in text and render it to the SVG using the <text> element.

    That looked like this:

    const text = new URLSearchParams(window.location.search);
    svg.innerHTML += `
      <text class="regular" style="font: bold 30px sans-serif; text-anchor: end" x="1140" y="250">${text.get("foo")}</text>
    `;
    

    This worked fine…until it didn’t. SVG text positioning is a little janky and you don’t have as many levers to pull as with regular HTML text positioning. And we have to handle our own linebreaks.

    What I ended up doing is pretty hacky, but it works. I decided that no line should be longer than 8 words. If the input text is more than 8 words long, we divide it in roughly equal halves. (If the input text is more than 16 words long, we divide it in thirds.) Then each line of text is output into the SVG with a vertical offset.

    That looks roughly like this:

    function splitTextToLines(text) {
      const words = text.split(" ");
    
      if (words.length < 8) {
        return [text];
      } else if (words.length < 16) {
        const half = Math.ceil(words.length / 2);
        return [words.slice(0, half).join(" "), words.slice(half).join(" ")];
      } else {
        const third = Math.ceil(words.length / 3);
        return [
          words.slice(0, third).join(" "),
          words.slice(third, third * 2).join(" "),
          words.slice(third * 2).join(" "),
        ];
      }
    }
    

    And the loop that actually renders the text to the image:

    const lines = splitTextToLines(text);
    y -= lines.length * 45;
    for (let i = 0; i < lines.length; i++) {
      svg.innerHTML += `
        <text class="regular" style="font: bold ${fontSize}px sans-serif; text-anchor: end" x="1140" y="${y + i * 65}">${lines[i]}</text>
      `;
    }
    

    I got the numbers right by just generating a lot of text and manually tweaking it. I like the end result, although it’s definitely not perfect and might still fall down with edge cases of really long or really short words.

    Finally, I defined a handful of palettes that I like looking at and that vaguely go with the color scheme (such as it is) of this blog. The script selects one at random and injects a stylesheet to color the foreground and background of the tiles, as well as the text.
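A sketch of what that palette selection could look like (the hex values and class names here are invented for illustration, not the blog’s actual palettes):

```javascript
// Hypothetical palettes -- not the blog's real colors.
const palettes = [
  { fg: "#e76f00", bg: "#ffd9a0", text: "#222222" },
  { fg: "#b23a48", bg: "#fcb9b2", text: "#222222" },
  { fg: "#386641", bg: "#a7c957", text: "#222222" },
];

// Build the stylesheet text that colors the tiles' foreground and
// background, plus the title text.
function buildStylesheet(palette) {
  return `
    .tile-fg { fill: ${palette.fg}; }
    svg { background: ${palette.bg}; }
    text.regular { fill: ${palette.text}; }
  `;
}

// Pick one palette at random per run.
const chosen = palettes[Math.floor(Math.random() * palettes.length)];
// In the browser, the string would then be injected via a <style> element:
// const style = document.createElement("style");
// style.textContent = buildStylesheet(chosen);
// document.head.appendChild(style);
```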

    Creating the images

    To actually create the images from the template, I wrote my first Jekyll plugin! Here, I am quite grateful to this 8-year-old gist that did close to what I wanted to do. Instead of generating the image with ImageMagick (which is also extremely cool!!!), I added code that uses Puppeteer from Ruby to load the file and save a screenshot to the /assets/opengraph folder, returning the path to the file. The code is then registered as a Jekyll tag called og_image. This means…

    Putting it all together

    All I need to do to generate and use these images is edit my head.html layout like so:

    
    {% if page.image %}
      <meta property="og:image" content="{{ page.image }}">
    {% else %}
      <meta property="og:image" content="{{ site.url }}{% og_image %}">
    {% endif %}
      
    

    Now, if the page has an image defined in its front matter, Jekyll will use that. Otherwise, Jekyll will generate an image using the plugin and refer to that.

    I’ve wanted to do something like this for some time. I’m not sure how many people will ever see these, which makes me feel a little insane, but, well:

    Marge Simpson holding one of my headers instead of a potato, saying "I just think they're neat"

    Resources

    Footnotes

    1. With 14 tiles to choose from, 540 tile positions, and 3 color palettes, the odds of a repeat are… low. #math 

  • 2025, Wrapped

    My 2025 Github skyline. I think this says that the week I committed the least code was at the company offsite, which I guess makes sense.

    2025 has been … well, it has been one of the years of all time, I can say that. Personally and professionally, I have achieved a number of my goals. The wider world? I’m not sure I even have the words to describe how I feel about … everything, and I live in Washington, DC, the center of everything, so I’ve theoretically had a lot of time and energy to devote to figuring out how to talk about it. I still can’t.

    So I’m sorry, but I won’t. It feels like an abdication of responsibility to not, but please know that it’s not me saying that everything is fine, but that I just don’t feel equipped to face the enormity of everything on a little blog about Python and Javascript. And if this feeling doesn’t resonate with you, please go read some news. The Guardian, Al Jazeera, and ProPublica are good places to start.

    Okay, we’re back? We’ve read all news reporting published in the past year? Great. We can now move on.

    My personal year in review

    At the beginning of last year I set a few professional goals:

    • Get promoted
    • Apply to speak at 12 conferences

    The first goal, “get promoted,” was not a “good” goal, in the sense that it’s not a goal I have control over. I can do as much as possible to put myself in a good position to get promoted, by taking on more ambitious projects that align with my company’s career ladder, by putting myself forward and “managing up,” etc., but ultimately, whether this goal happens or not is not up to me.

    The good news is I got promoted in March, which meant I could coast for the remaining 9 months of the year. (I promise this is a joke, in case my manager is reading this.)

    The second goal, “apply to speak at 12 conferences,” was a goal that I had complete control over. And I applied to 16 conferences, so I suppose I did alright.

    Of those 16 applications, 5 were accepted, 5 were rejected, and 6 are still pending. This is definitely not me right now:

    A woman morosely sitting by the phone, just waiting for it to ring
    
    But for real, if you run one of these conferences, get back to me, ok?

    Those were the only professional goals I set, but I suppose I hit a few other milestones:

    • In addition to the conferences I applied for, a few folks reached out and asked me to participate in things, so in 2025, I gave a total of 10 talks, podcast/panel appearances, and guest lectures. This is fun. People should reach out to me more.

    • I published 39 blog posts this year (not all of which were meta-posts about Jekyll, as much as it feels like it). This one will make 40, and will probably be my last post of the year.

    • Oh, I (and four other women) started a nonprofit. Minor detail. Women and Gender eXpansive Coders DC is officially a 501(c)(3), thanks to a lot of paperwork and help from a pro bono legal team that we look forward to publicly thanking as soon as our bank account/donation infrastructure is set up.

      As a woman in the tech world I cannot stress enough how important it is to me to have access to a community of people who look like me. Work has a lovely ERG that I’m part of, but I’m grateful to also have this access to a community outside of work. And you know what they say – if you can’t find a community, start your own. Do they say that? They should.

      A lot of the energy of 2025 was sucked up by the actual logistics of starting a nonprofit, but we hosted a number of career events, brunch and dinner socials, Craft n Crush events, and a very cool book club where the author joined us for the last session. I taught a session on the command line (with help from two co-organizers/TAs who I couldn’t have done without) and organized a livecoding session where we made music with Hydra. I’m already looking forward to what we will do in 2026.

    My goals for next year

    A woman writing intentions in her journal

    I’m not aiming to get promoted in 2026. People at my company usually stay at my level for at least another year, so this is reasonable. This gives me the space to focus on other goals.

    One thing I really want to get better at is measuring the impact of my work. This is obviously a great thing to do for one’s own career, because putting “I built a feature that made the company one miiiiillion dollars” on your resume or annual review is more impressive than “I closed some tickets.” But I think it’s also a useful skill in general, because knowing impact means I’m working on the right things.

    One lucky break for me is that I won a drawing for an O’Reilly book at Techbash and chose The Product-Minded Engineer. Hopefully this book contains a few tips. I’d be grateful for any other thoughts readers might have on this topic.

    My other goal is to speak at four in-person conferences. I realize this is not fully under my control, but I do have some levers. I can make sure I’m applying for in-person (not just virtual) conferences, continue to iterate on my talks, and network with other successful speakers. And if you are reading this and run an in-person conference, you can invite me to give a talk :)

    That’s it for this year. Thanks for reading. Please, donate to your local mutual aid groups, volunteer somewhere to make the world a little better, and be excellent to each other.

  • Building a Cookbook in Python, for Reasons (Part 2)

    In my last post, I talked about building a cookbook/recipe blog that stores recipes emailed to a special address. I talked about setting up the backend, the service that provides an ‘email received’ webhook, and the library that parses recipe information from a website using the Schema.org standardized schema.

    Where we left off, we had just grabbed all the information about a recipe – name, ingredients, cook time, etc., and dumped them into a Python dict. Now, we can inject them into a Markdown template for use by Jekyll.

    Loyal readers of this blog know that I’m a huge Jekyll fan. It’s so easy to create new static HTML files from a basic template.

    In my case, the template looks something like this:

    ---
    layout: post
    title:  {{title}}
    source_site: {{source_site}}
    source: {{canonical_url}}
    ....and so on
    ---
    
    ### Ingredients
    {{ingredients}}
    
    ### Instructions
    {{instructions}}
    

    Everything between the --- lines is considered “front matter” and can be used as data to be injected into a post, or metadata about a post, etc.

    Our templatizer just needs to read in this file and call a bunch of replace()s. It looks something like this:

    
    RECIPE_TITLE = "{{title}}"
    RECIPE_SOURCE = "{{source_site}}"
    RECIPE_URL = "{{canonical_url}}"
    
    with open(TEMPLATE_FILE, 'r') as template:
        buffer = template.read()
        buffer = (buffer
            .replace(RECIPE_TITLE, recipe.get("title"))
            .replace(RECIPE_SOURCE, recipe.get("site_name"))
            .replace(RECIPE_URL, recipe.get("canonical_url")))
        # and so on
    

    The templatizer also generates a file name based on the slug of the recipe and the date it was shared (not the date it was initially posted on the source site). Anyway, this is all relatively simple.
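The filename logic might look something like this sketch (hypothetical helper, not the actual code; Jekyll expects post filenames shaped like YYYY-MM-DD-slug.md):

```python
import re
from datetime import date

def make_post_filename(title, shared_on=None):
    """Build a Jekyll-style post filename from a recipe title and the
    date the recipe was shared (defaulting to today)."""
    shared_on = shared_on or date.today()
    # Lowercase, collapse runs of non-alphanumerics into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{shared_on.isoformat()}-{slug}.md"
```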

    But now we have to… dun dun dunnnnnn talk to Github.

    Talking to Github

    I wanted a human in the loop here. This is not a high-traffic application and my endpoint is semi-insecure, so if someone were to start spamming it with junk data…there’s not much they could accomplish, but there’s less they can accomplish if I have to manually approve every recipe first. So while it’s fairly easy to force-push directly to main with the Github API, I wanted my app to create a PR instead.

    It could be worse: Github’s documentation is pretty good, and they make it real easy to make a scoped personal access token just for the actions you need to take. I won’t walk you through every line of code that needs to happen here, but the general steps are:

    def do_github_stuff(content, filename):  # i'm good at naming things
        # Get the hash of the tip of main.
        main_sha = get_branch_sha("main")
    
        # Create a new branch off of main -- basically `git checkout -b newbranch`.
        # (I'm generating the branch name inside this function, but if it's easier
        # to understand, just imagine I'm passing in "newbranch".)
        new_branch, new_ref, branch_name = create_new_branch(main_sha)
    
        # Add the new file to the new branch.
        new_sha = create_tree(new_branch.get('object').get('sha'), content, filename)
    
        # Create the commit. Unlike committing via the command line, we explicitly
        # have to tell git who the parents of the commit are. Returns a new hash.
        new_commit = create_commit(new_sha, main_sha)
    
        # Now that we have the new ref (/heads/mybranch/) and the new commit hash,
        # force the tip of the new branch to point to the newly created hash.
        update_ref_pointer(new_ref, new_commit)
    
        # The second argument here is actually the name of the PR, which is just
        # generated as f"adds {filename}".
        create_pull_request(branch_name, filename)
    

    That feels like a lot, and in some ways it is, but in other ways it’s just five POSTs.

    All I can say is thank goodness I watched that presentation about git commit hashes earlier this year or this would have been significantly more difficult.
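Each helper above wraps a single GitHub REST call. As a sketch of one of those five POSTs, here is roughly how the branch-creation request is shaped (hypothetical function and repo names; the endpoint and payload match GitHub’s create-a-ref API, and the real code would send this with an Authorization header carrying the scoped token):

```python
# Sketch: build the request for the "create a ref" step, the API
# equivalent of `git checkout -b`. Names here are hypothetical.
API_ROOT = "https://api.github.com"

def build_create_branch_request(owner, repo, branch_name, main_sha):
    """Return the URL and JSON payload for GitHub's create-ref endpoint.
    The caller would POST these with requests plus an auth header."""
    url = f"{API_ROOT}/repos/{owner}/{repo}/git/refs"
    payload = {"ref": f"refs/heads/{branch_name}", "sha": main_sha}
    return url, payload
```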

    The frontend

    I did the basic Github Pages/Jekyll setup. In doing so, either I missed a step, or the setup is missing some steps. When I clicked the setup button, I got:

    • a repo with a deploy Github action that installed the wrong version of Ruby
    • nothing else?
    
    So then I did jekyll new on my local machine to spin up a new site, but the default settings in _config.yml weren’t appropriate for a site hosted on Github Pages. It turns out that Github does have docs on how to configure Jekyll, but I wish the button did more of this for me.

    Anyway! I did manage to get up and running, finally. However, only a handful of the Github Pages-supported Jekyll themes are set up for blogs. (Minimal, minima, and hacker, for those keeping score.) If you want to use a different theme, you’ll have to override some theme defaults. Which is fine; there’s excellent documentation on Jekyll’s site about doing so.

    So now this thing is hooked up! We just have to subscribe our email robot to our listserv and wait for the recipes to come rolling in.

    A Spongebob-style title card reading "Three days later..."

    Nobody has posted a recipe! This is a disaster! No, actually, it’s just a pretty low-traffic listserv, and it shouldn’t be surprising that three days in we have nothing. But I’d like to seed the site with some examples so it’s not just empty.

    This listserv has been hosted on Google Groups for the last few years (we are currently in the process of de-Googling due to unrelated issues), and say what you will about Google, they at least do let you export your data. As a moderator of the listserv, I have access to the entire group’s message history. So I made a Google Takeout request.

    A Spongebob-style title card reading "Three days later..."

    A few days later, a zip file containing my data was sent to me. All the messages are there…as one giant .mbox file.

    Luckily, this is a solved problem. Using this gist as a reference, I was able to parse every email and look for ones that contained URLs. I pulled about 10 random ones out and fed them to my API, which successfully parsed about half of them. Anyway, that’s how I ended up becoming a contributor to the recipe-parser library.

    In all seriousness, this was a very rewarding project. I love when something comes together in a weekend or two (it’s taken me longer to write up this series of posts than to actually create the project), and I love when tech can be used to make something that doesn’t scale to a billion people but solves a specific problem for ten people. Or maybe even just me; I’m not sure the rest of my potluck group cares. :) But I got to learn about FastAPI and email webhooks, and got more familiar with Jekyll. I count that as a big, delicious win.

  • Building a Cookbook With Python, for Reasons (part 1)

    📖 + 🐍 = ?

    Note: This is a longer writeup of the project I presented at PyLadies 2025. If you want the bite-size version, watch it here.

    I’m part of a monthly potluck that organizes meetups over email, then meets in person to eat delicious vegan food. (I’m not vegan, but I love any excuse to try new recipes and eat more plants.) Occasionally, potluck members will email around a link to the recipe they used or are planning to use.

    A pretty normal, boring email, where the sender says, "I'm thinking of bringing curried lentils with sweet potatoes and hazelnuts."

    Pretty common thing, but I wanted to capture these recipes in a more permanent way than a link to a random blog in a mailing list archive. Link rot is a thing, plus it’s just not very fun to have to search old messages and try to remember when that delicious soup recipe was sent around – was it this year or last?

    I had an idea pop into my head over the summer (I love when these things happen) that I could automatically post recipes to a new blog when they were posted to our listserv. So then I spent the next two weekends making it happen.

    I’ll discuss how I built it in a series of posts (the writeup is far too long for a single post). Today’s post is about the overall stack, as well as the FastAPI backend.

    The stack

    We need an email address that will serve as the “listener” to notice when new recipes are posted and do something with them. This could be anything but I may as well buy a domain and then I can host the blog there as well. So I went to Cloudflare and bought brooklandrecipe.party.

    We also need a backend that can do the “something” when a new email comes in. Based solely on the fact that the first recipe-parsing library I found was written in Python, I chose to use FastAPI, a lightweight Python web framework. This turned out to be an excellent choice.

    We need a way for the email listener to talk to the backend. I found ProxiedMail, which has incoming email webhooks.1 And it’s got a free plan. Fantastic. Now when anyone sends an email to [email protected]2, we can make a POST request to our new API.

    And we need a frontend, preferably one that updates with (minimal) intervention from a human. Jekyll with Github Pages is great for this. Posts are built in Markdown, and upon a successful merge to main the site will automatically build and deploy.

    Basically, we need this: A flow chart showing the following steps, in order: Incoming email->POST recipes->email contains a url?->URL contains a recipe?->Create new post from template

    Those are all the parts! Let’s see how they fit together.

    The email and the backend

    As previously stated, I set up [email protected]3 to post back on receipt of an email. We can inspect the shape of the payload before doing anything, by instead setting the postback destination to a free4 URL on webhook.site. This shows us the shape of the payload, which I have shortened by removing the boring stuff:

    {
      "id": "A48CF945-BD00-0000-00003CC8",
      "payload": {
        "Content-Type": "multipart/alternative;boundary=\"000000000000a4a98d063b045239\"",
        "Date": "Mon, 28 Jul 2025 17:53:20 -0400",
        "Mime-Version": "1.0",
        "Subject": "test",
        "To": "[email protected]",
        "body-html": "<div dir=\"ltr\">hello</div>\r\n",
        "body-plain": "hello\r\n",
        "from": "Rachel Kaufman <my-email>",
        "recipient": "[email protected]",
        "stripped-html": "<div dir=\"ltr\">hello</div>\n",
        "stripped-text": "hello",
        "subject": "test"
      },
      "attachments": []
    }
    

    This is going to post to our backend REST API built with FastAPI. FastAPI uses Pydantic to define types under the hood, so we can design our endpoint’s desired input like this (the model shown is for the inner payload object; a top-level model wraps it along with id and attachments):

    class EmailPayload(BaseModel):
        Date: str
        from_email: str = Field(..., alias="from")
        stripped_text: str = Field(..., alias="stripped-text")
        recipient: str = Field(..., pattern=pattern)
    

    Notice those “alias” fields; this is a cool FastAPI/Pydantic trick for mapping JSON keys that aren’t valid Python names onto ones that are. (stripped-text isn’t a valid variable name in Python, even if it’s valid as a JSON key, and from is a reserved keyword. I aliased from to from_email partly for that reason and partly so I had a clearer picture of what that variable represented. Although now I think I should have called it sender… Oh well.)

    The actual logic is pretty simple. We need to get the text of the email, check if it contains a URL. If it does, we need to check if that URL is for a recipe (and isn’t just a link found in someone’s signature for example). If it is a recipe, we need to scrape the recipe data, create a Markdown file with the recipe data in it, then send that Markdown file to Github in the frontend repo.

    Putting it all together it looks like:

    @app.post("/my-route")
    async def parse_message(message: IncomingEmail):
        message_body = message.payload.stripped_text
        message_sender = message.payload.from_email.split(" ")[0]
        recipe_url = contains_url(message_body) #a pretty simple regex that returns the first match if found or None if not
        if not recipe_url:
            return {"message": "no url found"}
        recipe = parse_recipe(recipe_url) #uses the recipe_scrapers library and returns a dict
        if not recipe:
            return {"message": f"no recipe found at URL {recipe_url}"}
        template, filename = generate_template(recipe, message_sender) #creates a blob from the template and dict
        make_github_call(template, filename) #actually makes quite a few github calls
        return {"message": "ok"}
    

    Let’s look at a few of these methods in more detail. I’ll skip over contains_url as it’s pretty boring.
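For completeness, a minimal contains_url could be as small as this sketch (the actual implementation isn’t shown in the post, and a real regex might be more careful about trailing punctuation):

```python
import re

# Deliberately naive: grab everything from "http(s)://" up to whitespace.
URL_PATTERN = re.compile(r"https?://\S+")

def contains_url(text):
    """Return the first URL in the text, or None if there isn't one."""
    match = URL_PATTERN.search(text)
    return match.group(0) if match else None
```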

    parse_recipe is also pretty simple – it just grabs the URL, resolves it, and uses the recipe_scrapers library to get the recipe data, such as its title, cook time, ingredients, and instructions. Most recipe websites use standardized formats, codified by Schema.org, so this library supports a good number of sites (but not all).

    Once we have our dict of parsed values, we can inject them into a Markdown template for use by Jekyll. Which I’ll discuss at a later date.

    1. SO MANY services that claim to have “email webhooks” only have webhooks for message delivery events, which honestly makes sense as it is probably the much more frequent use case. But I just want to make a POST request when a new email comes in. 

    2. Not the actual email address. 

    3. Still not the actual email address. 

    4. Each unique webhook.site URL can respond to 100 post requests. More than that and you’ll need to pay up. 

Blog archive