Lj Miranda | A collection of notes, projects, and essays.

Hi! I'm Lj Miranda, and welcome to my website!

I'm currently a member of the spaCy team at Explosion. Outside of work, I do a lot of game development, reviews, and retro photography.

Here, I write about my interests in natural language processing, machine learning systems, and games—so grab a cup of coffee and feel free to look around!


Recent Posts

  • Study notes on parameter-efficient finetuning techniques

    Traditional finetuning involves training the parameters of a large language model together with a shallower domain-specific network. However, this approach requires a large compute budget unavailable to most organizations. In this blog post, I'll go through different parameter-efficient finetuning techniques I personally like.

  • Labeling with GPT-3 using annotation guidelines

    As an extension of my previous post on using LLMs to annotate argument mining datasets, I want to explore how we can incorporate annotation guidelines into a prompt so that LLMs can use them as additional context for annotation.

  • How can language models augment the annotation process?

    In this blog post, I want to demonstrate how we can leverage large language models like GPT-3 as a viable affordance to reduce a human annotator's cognitive load. I do this by exploring the application of LLM prompting in annotating argument mining datasets.