RelKD 2025
International Workshop on Resource-Efficient Learning for Knowledge Discovery
Co-located with the 2025 ACM SIGKDD Conference on Knowledge Discovery and Data Mining
Call for Papers
Modern machine learning techniques, especially deep neural networks, have demonstrated excellent performance for various knowledge discovery and data mining applications. However, the development of many of these techniques still faces resource constraints in many scenarios, such as limited labeled data (data-level), small model size requirements on real-world computing platforms (model-level), and efficient mapping of computations to heterogeneous target hardware (system-level). Addressing all of these constraints is critical for the effective and efficient use of the developed models in a wide variety of real systems, such as large-scale social network analysis, large-scale recommendation systems, and real-time anomaly detection. Therefore, it is desirable to develop efficient learning techniques that tackle resource limitations from the data, model/algorithm, and/or system/hardware perspectives. The international workshop on "Resource-Efficient Learning for Knowledge Discovery (RelKD 2025)" will provide a venue for academic researchers and industrial practitioners to share the challenges, solutions, and future opportunities of resource-efficient learning.
The goal of this workshop is to create a venue to tackle the challenges that arise when modern machine learning techniques (e.g., deep neural networks) encounter resource limitations (e.g., scarce labeled data, constrained computing devices, low power/energy budgets). The workshop will focus on machine learning techniques used for knowledge discovery and data science applications, approaching efficient learning from three angles: data, algorithm/model, and system/hardware. Topics of interest include:
- Data-efficient learning: Self-supervised/unsupervised learning, semi-/weakly-supervised learning, few-shot learning, and their applications to various data modalities (e.g., graph, user behavior, text, web, image, time series) and data science problems (e.g., social media, healthcare, recommendation, finance, multimedia).
- Algorithm/model-efficient learning: Neural network pruning, quantization, acceleration, sparse learning, neural network compression, knowledge distillation, neural architecture search, and their applications to various data science problems.
- System/hardware-efficient learning: Neural network-hardware co-design, real-time and energy-efficient learning system design, hardware accelerators for machine learning, and their applications to various data science problems.
- Joint-efficient learning: Joint-efficient learning algorithms and methods of any kind (e.g., data-model joint learning, algorithm-hardware joint learning) and their applications to various data science problems.
The workshop will be a half-day session comprising several invited talks from distinguished researchers in the field, spotlight lightning talks and a poster session where contributing paper presenters can discuss their work, and a concluding panel discussion focusing on future directions. Attendance is open to all registered participants.
Submitted technical papers should be at least 4 pages long. All papers must be submitted in PDF format (any template is acceptable). Papers will be peer-reviewed and selected for spotlight and/or poster presentation.
We welcome all kinds of submissions, e.g., papers already accepted to or currently under review by other venues, ongoing studies, and so on. We will also select a Best Paper award. Submission site:
Important Dates
Paper Submission Deadline: 06/15/2025
Notification of Acceptance: 07/01/2025
Workshop Date: 08/04/2025
Contact us
For any questions, please reach out to the workshop contact email address:
chuxuzhang@gmail.com
or to any organizer's email address.
Agenda
Invited Speakers
TBD