Jorge Roldán

I am a machine learning engineer who loves building safe, robust, and awesome AI-powered tools, applications, and services. I have an M.S. in Computer Science with a focus on machine learning from New York University—Courant Institute of Mathematical Sciences. I also studied Computer Science and Mechanical Engineering at The City College of New York—Grove School of Engineering. I am originally from Santa Rosa de Osos, Colombia 🇨🇴, and currently live in New York City 🗽.

My interests lie at the intersection of applied mathematics, machine learning, and software engineering. I believe that truly understanding mathematical and scientific foundations is essential for building great technology. As an engineer, I strive to balance diving deep into theory with getting my hands dirty by building things I love.

Through this blog and newsletter, I want to share what I care about, what I learn, what I build, what I’d like to see more of in the world, and what I’d like to see less of.

🔋 Pilas, the newsletter’s name, is a Spanish slang term commonly used in Colombia to mean ‘watch out,’ ‘be alert,’ or ‘stay sharp.’ However, it also means ‘battery.’ I chose this name because now, more than ever, we need to stay engaged and vigilant to keep up with the rapid pace of change in AI. I hope this content energizes you to do so.

If you’d like to share your thoughts or just say hello, you can reach me at roldanjrg@protonmail.com.

🔋 Pilas: y25-w20

Models/Systems

INTELLECT-2 - Prime Intellect - 05/12/25
Paper: INTELLECT-2: A Reasoning Model Trained Through Globally Decentralized Reinforcement Learning
Blog: INTELLECT-2 Release: The First 32B Parameter Model Trained Through Globally Distributed Reinforcement Learning
Hugging Face model card: INTELLECT-2

SWE-1 - Windsurf - 05/15/25
Announcement: SWE-1: Our First Frontier Models

The Open Molecules 2025 (OMol25) Dataset, Evaluations, and Models - Meta FAIR
Announcement: Sharing new breakthroughs and artifacts supporting molecular property prediction, language processing, and neuroscience
Hugging Face model cards: facebook/OMol25, facebook/UMA
Hugging Face collection: FAIR Chemistry
Papers: UMA: A Family of Universal Models for Atoms; The Open Molecules 2025 (OMol25) Dataset, Evaluations, and Models
...

May 12, 2025 · 2 min · 410 words · Jorge Roldan

🔋 Pilas: y25-w19

Models/Systems

D-FINE: real-time object detector - 05/05/25
Organization: University of Science and Technology of China
Paper: D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement
Hugging Face space

Kevin-32B: Multi-Turn RL for Writing CUDA Kernels - 05/06/25
Organization: Stanford University, Cognition AI
Announcement
Figure 1: Kevin-32B correctness and performance results

Gemini 2.5 Pro (I/O Edition) - 05/06/25
Organization: Google
Announcement
...

May 5, 2025 · 3 min · 454 words · Jorge Roldan

🔋 Pilas: y25-w18

Models/Systems

Qwen3 - Alibaba - 04/29/25
Blog: https://qwenlm.github.io/blog/qwen3/
Article: Alibaba unveils Qwen3, a family of ‘hybrid’ AI reasoning models

Byte Latent Transformer (BLT) - Meta - 04/30/25
Hugging Face model card: facebook/blt
Paper: Byte Latent Transformer: Patches Scale Better Than Tokens
Code: facebookresearch/blt

Phi-4 - Microsoft - 04/30/25
Announcement: One year of Phi: Small language models making big leaps in AI
Article: Microsoft’s most capable new Phi 4 AI model rivals the performance of far larger systems
Paper: Phi-4-reasoning Technical Report

Mellum - JetBrains - 04/30/25
Announcement: Mellum Goes Open Source: A Purpose-Built LLM for Developers, Now on Hugging Face
Hugging Face model card: JetBrains/Mellum-4b-base

OLMo 2 - AllenAI - 05/01/25
Project page: OLMo 2
Hugging Face collection: OLMo 2
Paper: 2 OLMo 2 Furious

Llama-Nemotron: Efficient Reasoning Models - NVIDIA - 05/02/25
Paper
Announcement: NVIDIA Llama Nemotron Ultra Open Model Delivers Groundbreaking Reasoning Accuracy
Hugging Face space

F Lite - Freepik - 04/29/25

Agents

AMIE gains vision: A research AI agent for multimodal diagnostic dialogue

Papers

OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens
The Leaderboard Illusion - 04/29/25
Phi-4-reasoning Technical Report - 04/30/25
Byte Latent Transformer: Patches Scale Better Than Tokens
All Roads Lead to Likelihood: The Value of Reinforcement Learning in Fine-Tuning - 05/03/25
WebThinker: Empowering Large Reasoning Models with Deep Research Capability
Talk Before You Retrieve: Agent-Led Discussions for Better RAG in Medical QA
Practical Efficiency of Muon for Pretraining - 05/04/25

Articles

Why We Think by Lilian Weng

Lectures

Yann LeCun: Models of SSL - 04/29/25

April 28, 2025 · 2 min · 302 words · Jorge Roldan

Hugging Face deep dive: Sequence Classification with BERT

Introduction LLMs (Large Language Models) have revolutionized NLP (Natural Language Processing) and are still transforming the field and its applications as of 2025. These models excel at common NLP tasks such as summarization, question answering, and text generation. A common trend among state-of-the-art LLMs is that they are based on the Transformer architecture 1, and decoder-only models have gained favor over encoder-only or encoder-decoder models 2. In this article, I will discuss how to use the BERT (Bidirectional Encoder Representations from Transformers) model 3 for a sequence classification task with Hugging Face’s transformers library. Note that BERT is technically just a language model, not a large language model, given its relatively small size (~100 to ~350 million parameters, depending on the version), and it is an encoder-only model. Nevertheless, as I will argue next, understanding and knowing how to use this model is essential. ...
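As a preview of what the article walks through, here is a minimal sketch of sequence classification with a BERT checkpoint and the transformers library. The specific fine-tuned checkpoint named below is an assumption for illustration, not necessarily the one used in the article; any BERT model fine-tuned for classification works the same way.

```python
# Minimal sketch: sequence classification with a fine-tuned BERT checkpoint
# via Hugging Face transformers. The checkpoint name is an assumption; swap
# in any BERT-based classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "textattack/bert-base-uncased-SST-2"  # hypothetical checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Tokenize the input and run a forward pass without tracking gradients.
inputs = tokenizer("I loved this movie!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The predicted class is the argmax over the logits; label names depend
# on the checkpoint's config.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```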

March 8, 2025 · 13 min · 2702 words · Jorge Roldan

N-Gram Language Models

This post is based on chapter 3 of Speech and Language Processing by Dan Jurafsky and James H. Martin. N-gram models N-gram models are the simplest type of language model. The term N-gram has two meanings. The first refers to a sequence of n words, so a 2-gram and a 3-gram are sequences of 2 and 3 words, respectively. The second refers to a probabilistic model that estimates the probability of a word given the n-1 previous words. ...
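To make the second meaning concrete, here is a minimal sketch of bigram maximum-likelihood estimation on a toy corpus. The corpus and the omission of sentence-boundary markers are simplifications of my own, not taken from the book.

```python
# Minimal sketch: bigram maximum-likelihood estimates on a toy corpus.
# Sentence-boundary markers (<s>, </s>) are omitted for brevity.
from collections import Counter

corpus = "i am sam sam i am i do not like green eggs and ham".split()
unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def p_bigram(prev_word, word):
    # P(word | prev_word) = count(prev_word word) / count(prev_word)
    return bigram_counts[(prev_word, word)] / unigram_counts[prev_word]

print(p_bigram("i", "am"))  # 2/3: "i" occurs 3 times, "i am" occurs twice
```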

February 27, 2025 · 3 min · 450 words · Jorge Roldan

Setting up a Conda environment

TLDR

conda create --name my_conda_env python=3.11
conda env list
conda activate my_conda_env
pip install <package_name>
or
pip install -r requirements.txt

What’s a conda environment? Knowing how to set up a Conda environment is an essential skill for any data scientist or Python developer. Conda is an open-source package and environment management system widely used in the Python ecosystem. In this post, I will show you how to set up a Conda environment for your project, which will make it easy to install and manage any dependencies you need. ...
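Once an environment is activated, a quick check from inside the interpreter confirms it is the one actually in use. This is a minimal sketch assuming the my_conda_env environment created above.

```python
# Minimal sketch: verify the active Conda environment from inside Python.
# Assumes my_conda_env (created above) has been activated.
import sys

print(sys.executable)  # path should include .../envs/my_conda_env/...
print(sys.version)     # should report Python 3.11.x
```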

February 22, 2025 · 3 min · 493 words · Jorge Roldan