Jie Li

Ph.D. candidate in Computer Science

DISCL @ Texas Tech University

Jie Li is a Ph.D. candidate in Computer Science at Texas Tech University, where he is a member of the Data-Intensive Scalable Computing Laboratory (DISCL) under the guidance of Dr. Yong Chen. Jie's research interests lie in the areas of High-Performance Computing, Advanced Computer Architecture, and Parallel and Distributed Computing. Jie earned his Master of Science degree in Computer Science at Texas Tech University in 2019. Prior to that, he earned a bachelor's degree in architecture.


Data-Intensive Scalable Computing Laboratory (DISCL)
Research Assistant
September 2019 – Present Lubbock, TX

Responsibilities include:

  • Conduct research in the areas of High-Performance Computing, Computer Architecture, and Parallel and Distributed Computing.
  • Attend conferences, workshops, and seminars to stay up-to-date with the latest research developments and technologies.
  • Participate in the development and maintenance of research software and tools.
  • Mentor graduate and undergraduate students on their independent studies.
  • Administer two high-end servers (Hugo and Alita) hosted in the High-Performance Computing Center at Texas Tech University.
NERSC + LBL-Computer Architecture Group
Graduate Student Intern
June 2022 – August 2022 Berkeley, CA (remote)

Responsibilities include:

  • Refactored and consolidated the data-collection code used to access system monitoring data from NERSC’s Perlmutter, simplifying the code structure and producing a more efficient, streamlined codebase.
  • Analyzed system monitoring data at scale to evaluate resource utilization, examining metrics such as CPU and GPU utilization, host DRAM utilization, and GPU HBM2 utilization. Identified trends and patterns in the data to gain insights into system performance.
  • Summarized the analysis and published a system resource analysis paper at ISC 2023.
National Energy Research Scientific Computing Center (NERSC)
Graduate Student Intern
June 2021 – August 2021 Berkeley, CA (remote)

Responsibilities include:

  • Integrated data from multiple sources to analyze system-wide architectural efficiency and workload patterns.
  • Conducted statistical analysis of job-level monitoring data and applied various machine learning models (e.g., Random Forests, Support Vector Classification, LinearSVC) to classify jobs based on extracted time-series features.
  • Developed a novel approach to encoding time-series monitoring data as images and trained a Convolutional Neural Network (CNN) to classify job signatures with high accuracy.
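The image-encoding approach in the last bullet can be illustrated with a common technique for turning a 1-D time series into a 2-D image suitable for a CNN: the Gramian Angular Field. This is a minimal sketch of that idea, not necessarily the exact encoding used in the work described; the function name and the sample utilization values are illustrative only.

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D time series as a 2-D image (Gramian Angular Summation Field)."""
    s = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so arccos is well-defined (assumes a non-constant series).
    s = 2 * (s - s.min()) / (s.max() - s.min()) - 1
    s = np.clip(s, -1.0, 1.0)  # guard against floating-point overshoot
    phi = np.arccos(s)         # angle in polar coordinates
    # Pairwise cos(phi_i + phi_j) yields a symmetric n x n "image".
    return np.cos(phi[:, None] + phi[None, :])

cpu_util = [10, 30, 80, 95, 60, 20]   # hypothetical utilization samples
img = gramian_angular_field(cpu_util)
print(img.shape)  # (6, 6)
```

The resulting fixed-size image preserves temporal correlations as spatial structure, which is what lets an off-the-shelf image CNN learn job signatures from monitoring traces.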
Teaching, Learning and Professional Development Center (TLPDC)
Graduate Student Programmer
August 2018 – August 2019 Lubbock, TX

Responsibilities include:

  • Managed the maintenance and regular updates of the TLPDC websites, ensuring that all content was current, accurate, and accessible to internal and external stakeholders.
  • Developed and implemented comprehensive backup strategies to safeguard critical data assets, reducing the risk of data loss and ensuring business continuity in the event of system failures or other disruptions.
  • Kept current with emerging technologies and best practices in web development, software applications, and data management, applying this knowledge to continuously improve project management and delivery.


Advanced Visualization and Data Analysis of HPC Cluster and User Application Behavior
A project presentation at SC21.


(2023). Analyzing Resource Utilization in an HPC System: A Case Study of NERSC’s Perlmutter. In ISC 2023.


(2022). JobViewer: Graph-based Visualization for Monitoring High-Performance Computing System. In BDCAT.


(2021). The Gap between Visualization Research and Visualization Software in High-Performance Computing Center. In VisGap.


(2021). HAM: Hotspot-Aware Manager for Improving Communications with 3D-Stacked Memory. In TC.


(2020). MonSTer: an out-of-the-box monitoring tool for high performance computing systems. In CLUSTER.
