Hardware for AI and ML is offered again in the Spring term

ECE 410/510 “Hardware for AI and ML” is offered again in the spring term!

This course covers the design, simulation, optimization, and evaluation of specialized hardware for artificial intelligence and machine learning workloads. Students begin with the mathematical foundations that connect AI/ML algorithms to their hardware implementations, then apply HW/SW co-design methods to map algorithms such as CNNs, DNNs, and transformer-based LLMs onto architectures including GPUs, TPUs, FPGAs, systolic arrays, and neuromorphic processors. Additional topics include computational profiling with Python and CUDA, benchmarking across hardware platforms, in-memory computing with memristive crossbar arrays, and the use of LLMs as hardware design tools for HDL generation and physical design automation. The course alternates between lectures and hands-on codefest sessions where students tackle open-ended design challenges, from implementing neurons and systolic sorting to running simulations on neuromorphic hardware platforms such as BrainScaleS-2.

More info here.

MCECS Undergraduate Research and Mentoring Program is open for applications!

The Undergraduate Research and Mentoring Program is open for applications, due March 1st! URMP provides opportunities for students to get hands-on experience working in a lab with graduate students and faculty. It funds summer undergraduate research for students majoring in all Maseeh College undergraduate disciplines. Students will receive a stipend of $6000 to work on a full-time, 10-week project with faculty mentors from June through August. Interested students are encouraged to reach out to potential mentors to learn more about their research and the summer research experience.

See the URMP website for additional details on the program: https://www.pdx.edu/engineering/urmp

NSF REU and altREU sites are now accepting applications

The 2026 summer NSF REU and altREU sites are now accepting applications!

Both sites are fully virtual. For a direct comparison of the two programs, see https://teuscher-lab.com/altreu/program-details

We look forward to receiving your application!

NEW PAPER: Winning the Lottery by Preserving Network Training Dynamics with Concrete Ticket Search

T. Arora and C. Teuscher, Winning the Lottery by Preserving Network Training Dynamics with Concrete Ticket Search, 2025, under review. https://arxiv.org/abs/2512.07142

Abstract:

The Lottery Ticket Hypothesis asserts the existence of highly sparse, trainable subnetworks (‘winning tickets’) within dense, randomly initialized neural networks. However, state-of-the-art methods of drawing these tickets, like Lottery Ticket Rewinding (LTR), are computationally prohibitive, while more efficient saliency-based Pruning-at-Initialization (PaI) techniques suffer from a significant accuracy-sparsity trade-off and fail basic sanity checks. In this work, we argue that PaI’s reliance on first-order saliency metrics, which ignore inter-weight dependencies, contributes substantially to this performance gap, especially in the sparse regime. To address this, we introduce Concrete Ticket Search (CTS), an algorithm that frames subnetwork discovery as a holistic combinatorial optimization problem. By leveraging a Concrete relaxation of the discrete search space and a novel gradient balancing scheme (GRADBALANCE) to control sparsity, CTS efficiently identifies high-performing subnetworks near initialization without requiring sensitive hyperparameter tuning. Motivated by recent works on lottery ticket training dynamics, we further propose a knowledge distillation-inspired family of pruning objectives, finding that minimizing the reverse Kullback-Leibler divergence between sparse and dense network outputs (CTSKL) is particularly effective. Experiments on varying image classification tasks show that CTS produces subnetworks that robustly pass sanity checks and achieve accuracy comparable to or exceeding LTR, while requiring only a small fraction of the computation. For example, on ResNet-20 on CIFAR10, CTSKL produces subnetworks of 99.3% sparsity with a top-1 accuracy of 74.0% in just 7.9 minutes, while LTR produces subnetworks of the same sparsity with an accuracy of 68.3% in 95.2 minutes.
However, while CTS outperforms saliency-based methods in the sparsity-accuracy tradeoff across all sparsities, such advantages over LTR emerge most clearly only in the highly sparse regime.
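The Concrete relaxation mentioned in the abstract can be illustrated with a minimal sketch: each weight's keep/prune decision is replaced by a continuous sample from a binary Concrete (Gumbel-sigmoid) distribution, which is differentiable and approaches a hard 0/1 mask as the temperature is lowered. This is an illustrative sampler only, not the paper's implementation; the `logits` values, the GRADBALANCE sparsity control, and the KL-based objectives are all omitted here.

```python
import numpy as np

def concrete_mask(logits, temperature, rng):
    """Sample a relaxed binary mask from the binary Concrete distribution.

    Each entry lies in (0, 1) and is differentiable w.r.t. logits;
    as temperature -> 0, samples approach hard 0/1 keep/prune decisions.
    """
    u = rng.uniform(1e-8, 1 - 1e-8, size=logits.shape)
    noise = np.log(u) - np.log1p(-u)  # logistic noise (Gumbel-sigmoid trick)
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))

rng = np.random.default_rng(0)
logits = np.array([-3.0, 0.0, 3.0])  # hypothetical per-weight keep scores
soft = concrete_mask(logits, temperature=1.0, rng=rng)   # smooth, trainable mask
hard = concrete_mask(logits, temperature=0.01, rng=rng)  # nearly binary mask
```

In a search of this style, the logits are optimized by gradient descent through the soft mask, and the final subnetwork is read off by thresholding; averaged over many low-temperature samples, the keep rate of each weight approaches the sigmoid of its logit.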