International Workshop on Resource-Efficient Learning for the Web
Call for Papers
In recent years, deep learning has rapidly advanced in its capacity to model diverse data and tackle a wide range of applications. For instance, Large Language Models (LLMs) and graph neural networks (GNNs) have attracted considerable research attention due to their significant contributions to real-world problem-solving. The methodological advancements in LLMs and GNNs have led to promising results in areas such as social networks, question answering, search engines, recommendations, and content analysis. However, existing deep learning techniques often rely on the assumption of ample data and substantial computing resources during model training. This assumption can be impractical, especially given the high costs of data labeling and the large sizes of foundation models, particularly in resource-constrained settings such as academic labs. It is therefore both challenging and crucial to explore these techniques in resource-constrained environments. Addressing these challenges is essential for the effective and efficient deployment of models in various real-world web systems and applications, and these fundamental issues have attracted increasing research interest in resource-efficient learning. The international workshop on "Resource-Efficient Learning for the Web" (RelWeb 2025) will provide a venue for academic researchers and industrial practitioners to share challenges, solutions, and future opportunities in resource-efficient learning.
The goal of this workshop is to create a venue to tackle the challenges that arise when modern machine learning techniques (e.g., deep learning) encounter resource limitations (e.g., scarce labeled data, constrained computing devices, low power/energy budget). The workshop will center on deep learning techniques utilized in data and web science applications, with a focus on efficient learning from three angles: data, model, and system/hardware. Specifically, the topics of this workshop include:
  • Data-efficient learning: Self-supervised/unsupervised learning, semi/weakly-supervised learning, few-shot learning, and their applications to various data modalities (e.g., graph, user behavior, text, web, image, time series) and web/data science problems (e.g., social media, healthcare, recommendation, finance, multimedia).
  • Model-efficient learning: Neural network pruning, quantization, acceleration, sparse learning, neural network compression, knowledge distillation, neural architecture search, and their applications to various web/data science problems.
  • System-efficient learning: Neural network-hardware co-design, real-time and energy-efficient learning system design, hardware accelerators for machine learning, and their applications to various web/data science problems.
  • Joint-efficient learning: Joint-efficient learning algorithms/methods of any kind (e.g., data-model joint learning, algorithm-hardware joint learning) and their applications to various web/data science problems.
The workshop will be a half-day session comprising several invited talks from distinguished researchers in the field, spotlight lightning talks, and a poster session where authors of contributed papers can discuss their work. Attendance is open to all registered participants.
Workshop papers must be written in English, in double-column format, and must adhere to the ACM template and formatting (the same format as the main conference papers: https://www2025.thewebconf.org/research-tracks); papers must be at least 4 pages in length. Word users may use the Word Interim Template. Papers will be peer-reviewed and selected for spotlight and/or poster presentation. We also welcome submissions of recent and ongoing research studies. We will also present a best paper award. Paper submission site: https://cmt3.research.microsoft.com/RelWeb2025/Submission/Index
Important Dates (AoE)
Paper Submission Deadline: 01/07/2025
Notification of Acceptance: 01/31/2025
Camera-Ready Deadline: 02/19/2025
Workshop Date: 04/29/2025
Main Conference Recycled Paper Submission
We have reopened the submission system for papers recycled from the main conference, and these will undergo a fast-track review process. Please include the original reviews as an appendix; we assure authors that these reviews will remain confidential. The deadline for fast-track submissions is January 26, 2025.
Contact us
For any questions, please reach out to the organizers at chuxuzhang@gmail.com or any organizer's email address.
Accepted Paper List
APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking
RANKFLOW: A Multi-Role Collaborative Reranking Workflow Utilizing Large Language Models
DMSNet: A Lightweight and Efficient Facial Expression Recognition Model for IoT and WoT Applications
Cost-Efficiency Trade-offs for Neural Cascade Rankers in Web Search
Optimize Quantization for Large Language Models via Progressive Training
Enhancing E-commerce Representation Learning via Hypergraph Contrastive Learning and Interpretable LLM-Driven Analysis
SparseNet: Sparse Tweet Network for Classification of Informative Posts using Graph Convolutional Network
Improving Out-of-Vocabulary Hashing in Recommendation Systems
Agenda
09:00am
Opening remarks
09:00am-09:45am
Keynote talk: Ed Chi (Google DeepMind)
Title: TBD
09:45am-10:30am
Keynote talk: James Caverlee (Texas A&M University)
Title: TBD
10:30am-11:00am
Coffee break/Poster session
11:00am-11:30am
Invited talk
Title: TBD
11:30am-12:00pm
Invited talk
Title: TBD
12:00pm-12:30pm
Spotlight paper presentations
APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking
Optimize Quantization for Large Language Models via Progressive Training
Enhancing E-commerce Representation Learning via Hypergraph Contrastive Learning and Interpretable LLM-Driven Analysis
12:30pm
Closing remarks
Speakers
Ed Chi
Vice President
Google DeepMind
Keynote Talk: TBD
James Caverlee
Professor of Computer Science
Texas A&M University
Keynote Talk: TBD
Xiang Wang
Professor of Data Science
University of Science and Technology of China
Invited Talk: TBD
Yuan Fang
Assistant Professor of Computer Science
Singapore Management University
Invited Talk: TBD
Organizing Chairs
Associate Professor
University of Connecticut
Assistant Professor
North Carolina State University
Assistant Professor
Northwestern University
Assistant Professor
University at Albany
Engineering Director
Google DeepMind
Assistant Professor
University of Virginia
Senior Chair
Professor
Arizona State University
©2025 International Workshop on Resource-Efficient Learning for the Web, All rights reserved
(Last update: Dec 1, 2024)