🔋Pilas: y25_w19

Models/Systems

D-FINE: real-time object detector - 05/05/25
Organization: University of Science and Technology of China
Paper: D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement
Hugging Face space
"A real-time object detector much faster and accurate than YOLO with Apache 2.0 license just landed to @huggingface transformers 🔥 D-FINE is the sota real-time object detector that runs on T4 (free Colab) 🤩 Keep reading for the paper explainer, notebooks & demo 👀" — merve (@mervenoyann), May 5, 2025. A minimal loading sketch appears after this list.

Kevin-32B: Multi-Turn RL for Writing CUDA Kernels - 05/06/25
Organization: Stanford University, Cognition AI
Announcement
Figure 1: Kevin-32B correctness and performance results

Gemini 2.5 Pro (I/O Edition) - 05/06/25
Organization: Google
Announcement
"Very excited to share the best coding model we've ever built! Today we're launching Gemini 2.5 Pro Preview 'I/O edition' with massively improved coding capabilities. Ranks no. 1 on LMArena in Coding and no. 1 on the WebDev Arena Leaderboard. It's especially good at building…" ...
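Since the post notes that D-FINE now ships in Hugging Face transformers, a minimal sketch of loading it through the object-detection pipeline might look like the following. The checkpoint id is an assumption for illustration, not a name confirmed by the post; check the Hugging Face Hub for the published D-FINE checkpoints.

```python
# Minimal sketch: loading D-FINE through the transformers object-detection
# pipeline. Requires a transformers release that includes D-FINE support.
# The checkpoint id below is a hypothetical placeholder.
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="ustc-community/dfine-medium-coco",  # hypothetical checkpoint id
)

# The pipeline accepts a local path, a URL, or a PIL image.
for detection in detector("street_scene.jpg"):
    print(f"{detection['label']}: {detection['score']:.2f} at {detection['box']}")
```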

May 12, 2025 · 2 min · 412 words · Jorge Roldan

Hugging Face deep dive: Sequence Classification with BERT

Introduction

Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and are still transforming the field and its applications as of 2025. These models excel at common NLP tasks such as summarization, question answering, and text generation. A common trend among state-of-the-art LLMs is that they are based on the Transformer architecture 1, and decoder-only models have gained favor over encoder-only and encoder-decoder models 2. In this article, I will discuss how to use the BERT (Bidirectional Encoder Representations from Transformers) model 3 for a sequence classification task with Hugging Face's transformers library. Remember that BERT is technically just a language model, not a large one, given its relatively small size (~100 to ~350 million parameters, depending on the version) compared to LLMs with billions of parameters, and that it is an encoder-only model. Nevertheless, as I will argue next, understanding this model and knowing how to use it remains essential. ...
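As a taste of what the full post covers, a minimal sketch of sequence classification with a fine-tuned BERT checkpoint via transformers might look like this. The sentiment-analysis checkpoint is my assumption for illustration; the post itself may use a different model or task.

```python
# Minimal sketch: sequence classification with a fine-tuned BERT checkpoint
# via Hugging Face transformers. The checkpoint below is an assumption used
# for illustration only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "textattack/bert-base-uncased-SST-2"  # assumed sentiment checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("I really enjoyed this article!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```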

March 8, 2025 · 13 min · 2702 words · Jorge Roldan

N-Gram Language Models

This post is based on chapter 3 of Speech and Language Processing by Dan Jurafsky and James H. Martin.

N-gram models

N-gram models are the simplest type of language model. The term n-gram has two meanings. The first refers to a sequence of n words, so a 2-gram and a 3-gram are sequences of 2 and 3 words, respectively. The second refers to a probabilistic model that estimates the probability of a word given the n-1 previous words. ...
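To make the second meaning concrete, a minimal sketch of a bigram (2-gram) model with maximum-likelihood estimates, P(w_n | w_{n-1}) = count(w_{n-1} w_n) / count(w_{n-1}), might look like this. The toy corpus is invented for illustration, not taken from the post.

```python
# Minimal sketch: a bigram (2-gram) model with maximum-likelihood estimates,
# P(w_n | w_{n-1}) = count(w_{n-1} w_n) / count(w_{n-1}).
# The toy corpus below is invented for illustration.
from collections import Counter

corpus = ["i am sam", "sam i am", "i do not like green eggs and ham"]

unigram_counts = Counter()
bigram_counts = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]  # sentence boundary markers
    unigram_counts.update(tokens)
    bigram_counts.update(zip(tokens, tokens[1:]))

def bigram_prob(prev_word: str, word: str) -> float:
    """MLE estimate of P(word | prev_word); 0.0 for unseen contexts."""
    if unigram_counts[prev_word] == 0:
        return 0.0
    return bigram_counts[(prev_word, word)] / unigram_counts[prev_word]

print(bigram_prob("<s>", "i"))  # 2/3: two of the three sentences start with "i"
print(bigram_prob("i", "am"))   # 2/3: "i" occurs 3 times, followed by "am" twice
```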

February 27, 2025 · 3 min · 450 words · Jorge Roldan

Setting up a Conda environment

TLDR

```bash
conda create --name my_conda_env python=3.11
conda env list
conda activate my_conda_env
pip install <package_name>
```

or

```bash
pip install -r requirements.txt
```

What's a conda environment?

Knowing how to set up a Conda environment is an essential skill for any data scientist or Python developer. Conda is an open-source package and environment management system widely used with Python. In this post, I will show you how to set up a Conda environment for your project; doing this will help you easily install and use any dependency you need. ...

February 22, 2025 · 3 min · 493 words · Jorge Roldan