Summit Lake of Mt Evans, CO. 12,836 feet (3,912 m).

Mert Hidayetoğlu

  Gates 472, 353 Serra Mall, Stanford, CA, 94305
merth at stanford dot edu

I am a postdoctoral scholar in Alex Aiken's group at Stanford University.

My research interests are parallel computing, fast algorithms, inverse problems, and programming models.

[Extended CV] [LinkedIn] [Google Scholar] [GitHub]

Short Bio

Mert obtained his PhD at the University of Illinois in 2022 under the supervision of Wen-mei Hwu. His dissertation is on sparse communications and memory accesses across hierarchical GPU interconnects. His motivating applications are large-scale inverse problems in computational imaging and sparse deep learning. He was a Givens Fellow at Argonne National Laboratory in 2018, the lead author of the SC20 Best Paper, and a recipient of the ACM/IEEE-CS George Michael Memorial HPC Fellowship in 2021.

Research

  • HiCCL: A Hierarchical Communication Library.
      arXiv, 2024: Paper | Code | Talk
  • CommBench: Micro-benchmarking hierarchical networks with multi-GPU, multi-NIC nodes.
      ICS, 2024: Paper | Code | Talk
  • Hector: An efficient programming and compilation framework for implementing relational graph neural networks in GPU architectures.
      ASPLOS, 2024: Paper
  • Performance modeling and optimization of sparse matrix multiplication on GPUs.
      Unpublished, 2022: Paper | Talk
  • Graph neural network training with data tiering.
      KDD, 2022: Paper
  • Fast numerical integration techniques for 2.5-dimensional inverse problems.
      arXiv, 2022: Paper
  • MemXCT: Design, optimization, scaling, and reproducibility of X-ray tomography imaging.
      TPDS, 2022: Paper | CPU Code | GPU Code
  • Accelerating Fourier and number-theoretic transforms using tensor cores and warp shuffles.
      PACT, 2021: Paper
  • Large graph convolutional network training with GPU-oriented data communication architecture.
      VLDB, 2021: Paper
  • PyTorch-Direct: Enabling GPU-centric data access for very large graph neural network training with irregular accesses.
      arXiv, 2021: Paper
  • Petascale XCT: 3D image reconstruction with hierarchical communications on multi-GPU nodes.
      SC, 2020: Paper | Video
  • At-scale sparse deep neural network inference with efficient GPU implementation.
      HPEC, 2020: Paper
  • MemXCT: Memory-centric X-ray CT reconstruction with massive parallelization.
      SC, 2019: Paper
  • An efficient GPU implementation technique for higher-order 3D stencils.
      HPCC, 2019: Paper
  • A fast and massively parallel inverse multiple scattering solver.
      IPDPS, 2018: Paper | Talk

In his free time, Mert likes to ride his bike and spend time with his family.

Check out my Master's work on solving integral equations with billions of unknowns (it was a world record in 2014) here.

[Illinois Website] [Stanford Website]
