Online Deduplicated Data Migration
Abstract: In storage systems, the data migration process periodically remaps files between volumes with the goal of preserving the system's load balance and deduplication efficiency. Previous studies focused on offline selection of files to migrate, a task complicated by the inter-file dependencies introduced by deduplication. However, they did not address the possibility of files entering and leaving the system due to user actions, nor the order between individual file transfers. Our motivational study reveals that naïve ordering may create traffic spikes and leave the system in poorly balanced intermediate states. To address these challenges, we present Slide, a novel online migration approach based on sliding windows. Slide takes advantage of long-term planning to maximize deduplication efficiency while maintaining short-term load balance and adapting to system changes. It achieves better load balancing than alternative approaches while incurring only a minimal increase in overall system size.
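To make the ordering problem concrete, the following toy sketch (illustrative only; it is not the Slide algorithm from the talk, and all names, parameters, and heuristics are assumptions) schedules a precomputed migration plan in fixed-size traffic windows, capping each window's transfer volume and greedily preferring the move that most reduces the current load imbalance:

```python
# Toy sketch only, NOT the Slide algorithm: order a precomputed migration plan
# into traffic windows. Each window's total transfer volume is capped to avoid
# traffic spikes, and within a window we pick the move that most reduces the
# spread between the most- and least-loaded volumes.
from dataclasses import dataclass

@dataclass
class Move:
    file_id: str
    size: int   # bytes transferred when migrating this file
    src: str    # source volume
    dst: str    # destination volume

def schedule_in_windows(plan, volume_load, window_budget):
    pending, windows = list(plan), []
    while pending:
        window, used = [], 0
        while pending:
            # Imbalance (max load - min load) if a given move were applied now.
            def imbalance_after(m):
                load = dict(volume_load)
                load[m.src] -= m.size
                load[m.dst] += m.size
                return max(load.values()) - min(load.values())
            best = min(pending, key=imbalance_after)
            if window and used + best.size > window_budget:
                break  # defer this move to the next window instead of spiking traffic
            pending.remove(best)
            volume_load[best.src] -= best.size
            volume_load[best.dst] += best.size
            window.append(best)
            used += best.size
        windows.append(window)
    return windows
```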
M.Sc. student under the supervision of Prof. Gala Yadgar.
Towards Energy-Efficient AI Hardware: Mixed-Signal In-Memory Computing and Ultra-Dense Die-to-Die Links
Abstract: The rapid advancement of artificial intelligence (AI) is pushing the limits of conventional digital computing architectures. Key performance bottlenecks, namely computational efficiency, memory bandwidth, and interconnect performance, are becoming increasingly critical, especially for edge AI applications that demand low latency and ultra-low power consumption. In this talk, I will present my recent research aimed at overcoming these challenges through a multidisciplinary approach focusing on three main pillars: 1) analog/mixed-signal circuits, 2) in-memory computing (IMC), and 3) high-speed die-to-die (D2D) links. I will show how time-domain IMC using nonvolatile emerging memory technologies, such as ferroelectric FETs, can improve the energy efficiency of AI hardware, as demonstrated by a prototype in 28 nm CMOS. I will also present a mixed-mode fast-locking delay-locked loop for latency-critical parallel links (such as D2D), implemented in a 3 nm FinFET CMOS process. These works span CMOS and emerging device technologies, combining insights from device physics, circuit design, and system architecture to enable the next generation of high-performance, energy-efficient AI hardware.
Bio: Nicolas Wainstein is a Research Fellow at the Faculty of Electrical and Computer Engineering (ECE) at the Technion. From 2021 to 2024, he was a Senior Analog/Mixed-Signal Design Engineer and Technical Lead at Intel, Israel, working on high-speed parallel wireline links, such as DDR and die-to-die (D2D) communication. He earned his PhD in Electrical Engineering from the Technion, supervised by Prof. Shahar Kvatinsky and Prof. Eilam Yalon. Nicolas is the recipient of several awards, including the Hershel Rich Innovation Award, the IEEE Electron Devices Society Ph.D. Student Fellowship (Europe and Middle East region), the Yablonovitch Research Prize, first place in the RBNI Prize for Excellence in Nanoscience and Nanotechnology, and the Jury Award for Outstanding Students.
Yotam Gafni
Weizmann
On 7/5/2025 at 11:30
Meyer 1061 and Zoom
Designing Blockchain Fees
Abstract: Miner fees are a key component of the incentive scheme that keeps a blockchain's decentralized transaction processing running properly. In determining how fees work, we should consider many different goals: efficient allocation, simplicity for users, and robustness to possible manipulation vectors. We consider this problem through the lens of auction theory and characterize the tradeoffs that different mechanisms may offer, in particular with respect to the threat of miner-user collusion.
The talk is based on joint works with Aviv Yaish, Matheus Ferreira, and Max Resnick.
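As a point of reference for the efficiency goal mentioned above, the toy sketch below (illustrative only; it is not a mechanism analyzed in the talk, and the names and numbers are assumed) shows the allocation side of a transaction fee mechanism, in which a miner greedily fills a capacity-limited block with the pending transactions that bid the highest fee per unit of size:

```python
# Toy baseline only: greedy "efficient allocation" of a capacity-limited block
# by fee density. Exact knapsack would be optimal but is NP-hard in general.
def build_block(mempool, capacity):
    """mempool: iterable of (tx_id, size, fee). Returns the chosen tx_ids."""
    chosen, used = [], 0
    for tx_id, size, fee in sorted(mempool, key=lambda t: t[2] / t[1], reverse=True):
        if used + size <= capacity:
            chosen.append(tx_id)
            used += size
    return chosen

print(build_block([("a", 2, 10), ("b", 3, 9), ("c", 1, 4)], capacity=4))
# -> ['a', 'c']  (fee densities 5.0 and 4.0 fit within the capacity of 4)
```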
On 23/4/2025 at 11:30
Blockchain networks like Bitcoin and Ethereum underpin billions in value and promise decentralized trust, yet they face critical challenges: their security depends on fragile economic incentives, their energy consumption raises sustainability concerns, and their limited computational capacity constrains scalability.
When a File Means a File: Proper Huge Pages for Code
Abstract: Despite huge pages dramatically reducing CPU frontend stalls from address translation, their use for executable code remains limited due to operating system constraints and the impracticality of rebuilding system binaries with special alignment. Current solutions that copy code into huge pages break essential system functionality: they prevent memory sharing between processes, disrupt debugging tools, and interfere with memory management operations. In this talk, I will present a practical userspace solution that achieves huge page performance benefits while preserving critical system services. Our approach transforms binaries to align code segments with huge page boundaries post-linkage while maintaining all internal references, and orchestrates page cache operations to ensure proper mapping. PostgreSQL evaluations demonstrate up to 7% performance improvement through a 94% reduction in iTLB misses, while maintaining memory sharing, debugging support, and proper memory management.
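For readers who want to check whether a running binary's code actually ends up on huge pages, the short diagnostic sketch below (assuming Linux; it does not implement the binary transformation described in the talk) sums the huge-page-backed portion of a process's executable mappings from /proc/<pid>/smaps:

```python
# Diagnostic sketch only (assumes Linux): sum how much of a process's executable
# mappings is backed by huge pages, using the FilePmdMapped and AnonHugePages
# fields reported per mapping in /proc/<pid>/smaps.
def code_hugepage_kb(pid="self"):
    huge_kb = total_kb = 0
    in_exec = False
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            if "-" in fields[0] and ":" not in fields[0]:
                # Mapping header: "<start>-<end> <perms> <offset> <dev> <inode> [path]"
                in_exec = "x" in fields[1]
            elif in_exec:
                if fields[0] == "Size:":
                    total_kb += int(fields[1])
                elif fields[0] in ("FilePmdMapped:", "AnonHugePages:"):
                    huge_kb += int(fields[1])  # file-backed / anonymous huge pages
    return huge_kb, total_kb

huge, total = code_hugepage_kb()
print(f"{huge} kB of {total} kB of executable mappings are backed by huge pages")
```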
Oblivious Reconfigurable Datacenter Networks
Tel-Aviv University
On Cryptography and Kolmogorov Complexity
Marwa Mouallem
Technion
Meyer Building 1061 and Zoom
Abstract: A myriad of authentication mechanisms embody a continuous evolution from verbal passwords in ancient times to contemporary multi-factor authentication: Cryptocurrency wallets advanced from a single signing key to using a handful of well-kept credentials, and for online services, the infamous “security questions” were all but abandoned. Nevertheless, digital asset heists and numerous identity theft cases illustrate the urgent need to revisit the fundamentals of user authentication.
Oleg Kolosov
Taub Building 8
Abstract: Edge computing extends cloud capabilities to the proximity of end users, offering the ultra-low latency that is essential for real-time applications. Unlike traditional cloud systems, which suffer from latency and reliability constraints due to distant datacenters, edge computing employs a distributed model, leveraging local edge datacenters to process and store data.
This talk explores key challenges in edge computing across three domains: workloads, storage, and service allocation. The first part focuses on the absence of comprehensive edge workload datasets. Current datasets do not accurately reflect the unique attributes of edge systems. To address this, we propose a workload composition methodology and introduce WoW-IO, an open-source trace generator. The second part examines aspects of edge storage. Edge datacenters are significantly smaller than their cloud counterparts and require dedicated solutions. We analyze the applicability of a promising mathematical model to edge storage systems and highlight inherent gaps between theory and practice. The final part addresses the virtual network embedding problem (VNEP): given a set of requests for deploying virtualized applications, the edge provider has to deploy as many of them as possible onto the underlying physical network, subject to capacity constraints. We propose novel solutions, including a proactive service allocation strategy for mobile users, a flexible service allocation algorithm that adapts to the underlying physical topology, and an algorithm for scalable online service allocation.
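To give a concrete feel for the online service allocation setting, here is a deliberately simplified sketch (single resource per request, no virtual-link embedding, greedy placement; all names and numbers are illustrative assumptions, not results from the talk):

```python
# Toy sketch of online service allocation. Real VNEP also embeds each request's
# virtual links into the physical network, which is omitted here.
def allocate_online(requests, capacities):
    """requests: list of (req_id, demand); capacities: {datacenter: free capacity}.
    Greedily place each arriving request on the datacenter with the most
    remaining capacity; reject it if nothing fits."""
    placement = {}
    for req_id, demand in requests:
        dc = max(capacities, key=capacities.get)
        if capacities[dc] >= demand:
            capacities[dc] -= demand
            placement[req_id] = dc
        else:
            placement[req_id] = None  # rejected
    return placement

print(allocate_online([("r1", 4), ("r2", 3), ("r3", 5)], {"dc_a": 6, "dc_b": 5}))
# -> {'r1': 'dc_a', 'r2': 'dc_b', 'r3': None}
```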
Zisapel Building 506
In the first part of this talk, we will explore the security and privacy concerns of cyber-physical systems. Specifically, we will examine new threats that have emerged with the deployment of technologies like drones and Teslas in real-world environments. Our discussion will highlight methods for detecting intrusive drone filming and securing Teslas against time-domain adversarial attacks. The second part of the talk focuses on the challenges posed by the coexistence of functional devices with limited computational power (that do not adhere to Moore's law) alongside sensors with ever-increasing sampling rates. We will explore how threats such as cryptanalysis and speech eavesdropping, previously accessible only to well-resourced adversaries, can now be executed by ordinary attackers using readily available hardware like photodiodes and video cameras. These attacks leverage optical traces or video footage from a device's power LED to extract sensitive information.
Finally, in the last part of the talk, we will address the emerging need to secure GenAI-powered applications against a new category of threats we call Promptware. This threat highlights the evolving landscape of vulnerabilities introduced by generative AI systems.