International Workshop on Resource-Efficient Learning for Knowledge Discovery
Call for Papers
Modern machine learning techniques, especially deep neural networks, have demonstrated excellent performance across a wide range of knowledge discovery and data mining applications. However, developing and deploying these techniques still runs into resource constraints in many scenarios, such as limited labeled data (data-level), small model size requirements on real-world computing platforms (model-level), and efficient mapping of computations onto heterogeneous target hardware (system-level). Addressing these constraints is critical for the effective and efficient use of learned models in a wide variety of real systems, such as large-scale social network analysis, large-scale recommendation systems, and real-time anomaly detection. It is therefore desirable to develop efficient learning techniques that tackle resource limitations from the data, model/algorithm, and/or system/hardware perspectives. The international workshop on "Resource-Efficient Learning for Knowledge Discovery (RelKD 2025)" will provide a venue for academic researchers and industrial practitioners to share challenges, solutions, and future opportunities in resource-efficient learning.
The goal of this workshop is to create a venue for tackling the challenges that arise when modern machine learning techniques (e.g., deep neural networks) encounter resource limitations (e.g., scarce labeled data, constrained computing devices, low power/energy budgets). The workshop centers on machine learning techniques used for knowledge discovery and data science applications, addressing efficient learning from three angles: data, algorithm/model, and system/hardware. Topics of interest include:
  • Data-efficient learning: Self-supervised/unsupervised learning, semi/weakly-supervised learning, few-shot learning, and their applications to various data modalities (e.g., graph, user behavior, text, web, image, time series) and data science problems (e.g., social media, healthcare, recommendation, finance, multimedia)
  • Algorithm/model-efficient learning: Neural network pruning, quantization, acceleration, sparse learning, neural network compression, knowledge distillation, neural architecture search, and their applications to various data science problems.
  • System/hardware-efficient learning: Neural network-hardware co-design, real-time and energy-efficient learning system design, hardware accelerators for machine learning, and their applications to various data science problems.
  • Joint-efficient learning: Jointly efficient learning algorithms/methods of any kind (e.g., data-model joint learning, algorithm-hardware joint learning) and their applications to various data science problems.
The workshop will be a half-day session comprising several invited talks from distinguished researchers in the field, spotlight lightning talks and a poster session where contributing paper presenters can discuss their work, and a concluding panel discussion focusing on future directions. Attendance is open to all registered participants.
Submitted technical papers should be at least 4 pages long. All papers must be submitted in PDF format (any template is acceptable). Papers will be peer-reviewed and selected for spotlight and/or poster presentation. We welcome all kinds of submissions, including papers already accepted to or currently under review at other venues, as well as ongoing studies. A Best Paper award will also be selected. Submission site:
Important Dates
Paper Submission Deadline: 06/15/2025
Notification of Acceptance: 07/01/2025
Workshop Date: 08/04/2025
Contact us
For any questions, please reach out to the workshop email address (chuxuzhang@gmail.com) or any organizer's email address.
Agenda
08:00am
Opening remarks
08:00am-08:40am
Invited talk 1:  TBD
TBD
08:40am-09:20am
Invited talk 2:  TBD
TBD
09:20am-10:20am
Spotlight paper presentations
10:20am - 10:40am
Break/Poster Session
10:40am-11:20am
Invited talk 3:  TBD
TBD
11:20am-12:00pm
Invited talk 4:  TBD
TBD
12:00pm
Closing remarks
Invited Speakers
TBD
Organizing Chairs
Associate Professor
University of Connecticut
Assistant Professor
North Carolina State University
Assistant Professor
Northwestern University
Assistant Professor
University at Albany
Principal Scientist
Google DeepMind
Assistant Professor
University of Virginia
Senior Chair
Professor
Arizona State University
CMT Acknowledgement
The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.