International Workshop on Resource-Efficient Learning for Knowledge Discovery
Call for Papers
Modern machine learning techniques, especially deep neural networks, have demonstrated excellent performance in a wide range of knowledge discovery and data mining applications. However, many of these techniques still face resource constraints, such as limited labeled data (data-level), small model size requirements on real-world computing platforms (model-level), and efficient mapping of computations to heterogeneous target hardware (system-level). Addressing these constraints is critical for the effective and efficient deployment of the resulting models in real systems, such as large-scale social network analysis, large-scale recommendation systems, and real-time anomaly detection. It is therefore desirable to develop efficient learning techniques that tackle resource limitations from the data, model/algorithm, and/or system/hardware perspectives. The international workshop on "Resource-Efficient Learning for Knowledge Discovery (RelKD 2024)" will provide a venue for academic researchers and industrial practitioners to share the challenges, solutions, and future opportunities of resource-efficient learning.
The goal of this workshop is to create a venue for tackling the challenges that arise when modern machine learning techniques (e.g., deep neural networks) meet resource limitations (e.g., scarce labeled data, constrained computing devices, low power/energy budgets). The workshop will concentrate on machine learning techniques for knowledge discovery and data science applications, approaching efficient learning from three angles: data, algorithm/model, and system/hardware. The topics of interest include:
  • Data-efficient learning: Self-supervised/unsupervised learning, semi-/weakly-supervised learning, few-shot learning, and their applications to various data modalities (e.g., graph, user behavior, text, web, image, time series) and data science problems (e.g., social media, healthcare, recommendation, finance, multimedia).
  • Algorithm/model-efficient learning: Neural network pruning, quantization, acceleration, sparse learning, neural network compression, knowledge distillation, neural architecture search, and their applications to various data science problems.
  • System/hardware-efficient learning: Neural network-hardware co-design, real-time and energy-efficient learning system design, hardware accelerators for machine learning, and their applications to various data science problems.
  • Joint-efficient learning: Joint-efficient learning algorithms and methods of any kind (e.g., data-model joint learning, algorithm-hardware joint learning) and their applications to various data science problems.
The workshop will be a half-day session comprising several invited talks from distinguished researchers in the field; spotlight lightning talks and a poster session, where authors of contributed papers can discuss their work; and a concluding panel discussion on future directions. Attendance is open to all registered participants.
Submitted technical papers should be at least 4 pages long. All papers must be submitted in PDF format (any template is acceptable). Papers will be peer-reviewed and selected for spotlight and/or poster presentation. There will be no formal proceedings for the workshop papers, so we welcome submissions of all kinds, e.g., papers already accepted to or currently under review by other venues, as well as ongoing studies. We will also select a few papers for outstanding paper awards. Submission site:
Important Dates (Barcelona Time)
Paper Submission Deadline: 07/05/2024
Notification of Acceptance: 07/20/2024
Workshop Date: 08/25/2024
Contact us
For any questions, please reach out to the workshop's email address (chuxuzhang@gmail.com) or any organizer's email address.
Accepted Paper List
Data-Centric Approach to Constrained Machine Learning: A Case Study on Conway's Game of Life
Structure-Aware Meta-Learning for Few-Shot Knowledge Graph Link Prediction
Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models
A Bandit Approach With Evolutionary Operators for Model Selection: Application to Neural Architecture Optimization for Image Classification
AdaSelection: Accelerating Deep Learning Training through Adaptive Data Subsampling
Instance-Aware Graph Prompt Learning
UniGLM: Training One Unified Language Model for Text-Attributed Graphs
Faster Neural Net Inference via Forests of Sparse Oblique Decision Trees
An Extremely Data-efficient and Generative LLM-based Reinforcement Learning Agent for Recommenders
MetaOOD: Meta-learning for Automatic Out-of-Distribution Detection Model Selection
Training MLPs on Graphs without Supervision
Agenda
09:00am
Opening remarks
09:00am-09:40am
Invited talk 1: Derek Cheng (Google DeepMind)
Less is More: Model & Data Efficiency Research for Recommenders and LLMs
09:40am-10:20am
Invited talk 2: Jing Gao (Purdue)
Pretrained Language Model Fine-Tuning and Inference with Limited Resources
10:20am-11:00am
Spotlight paper presentations
Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models
AdaSelection: Accelerating Deep Learning Training through Adaptive Data Subsampling
UniGLM: Training One Unified Language Model for Text-Attributed Graphs
MetaOOD: Meta-learning for Automatic Out-of-Distribution Detection Model Selection
11:00am-11:40am
Invited talk 3: Hanghang Tong (UIUC)
Graph Neural Networks Beyond Homophily: A Spectral Perspective
11:40am-12:40pm
Panel talk and discussion with invited guests:
Accelerating Outlier Detection: The Power of Multi-GPU Tensor Operations
Reinforcement Learning in the Real World: A Perspective from Transportation
Data Reduction for Graphs
12:40pm-01:10pm
Poster session/Closing remarks
Keynote Speakers
Derek Cheng
Google DeepMind
Principal Engineer and Research Director
Keynote Talk: Less is More: Model & Data Efficiency Research for Recommenders and LLMs
Hanghang Tong
University of Illinois Urbana-Champaign
Associate Professor of Computer Science
Keynote Talk: Graph Neural Networks Beyond Homophily: A Spectral Perspective
Jing Gao
Purdue University
Associate Professor of Electrical and Computer Engineering
Keynote Talk: Pretrained Language Model Fine-Tuning and Inference with Limited Resources
Panel Talk and Discussion
Assistant Professor
University of Southern California
Panel Talk: Accelerating Outlier Detection: The Power of Multi-GPU Tensor Operations
Assistant Professor
Arizona State University
Panel Talk: Reinforcement Learning in the Real World: A Perspective from Transportation
Assistant Professor
Emory University
Panel Talk: Data Reduction for Graphs
Organizing Chairs
Associate Professor
University of Connecticut
Assistant Professor
North Carolina State University
Assistant Professor
Northwestern University
Chief Science Officer
Hippocratic AI
Senior Researcher
Microsoft Research
Assistant Professor
University of Virginia
Senior Chairs
Professor
University of Notre Dame
Professor
Arizona State University
©2024 International Workshop on Resource-Efficient Learning for Knowledge Discovery. All rights reserved.
(Last update: May 8, 2024)