Call for Papers

This workshop focuses on advancing secure, private, and trustworthy large language models (LLMs) for real-world applications. As LLMs become embedded in everyday tools and decision-making pipelines across domains such as healthcare, education, and autonomous systems, it is critical to ensure that they operate reliably, transparently, and with respect for user privacy. The increasing scale and scope of LLM usage raise urgent questions about model robustness, safety under adversarial conditions, and the integrity of generated outputs in dynamic environments.

We aim to bring together researchers and practitioners working on the foundations and applications of trustworthy LLMs. Topics of interest include privacy-preserving techniques, watermarking and integrity verification, explainability in model decision-making, robustness under distribution shift, and evaluation frameworks for safety and trust. We also encourage submissions that explore scalable inference on edge and IoT devices, secure aggregation and compression, and real-world case studies of LLM deployment. This workshop will serve as a platform for sharing new insights, tools, and best practices for building LLMs that are not only powerful but also safe, transparent, and accountable.

Topics

Topics of interest include, but are not limited to:

  • Privacy-preserving techniques for LLMs in distributed environments
  • Federated learning with LLMs: challenges and solutions
  • Differential privacy in decentralized LLM applications
  • Trustworthy and explainable LLM-based decision-making
  • Adversarial attacks and defenses in distributed LLM systems
  • Watermarking and integrity verification for robust LLMs
  • Robustness evaluation of LLMs under distribution shift
  • Cross-domain and cross-institutional data governance for LLMs
  • Scalable LLM inference in edge and IoT-based systems
  • Secure aggregation and compression of LLM outputs
  • Benchmarking privacy, security, and trust in LLM-powered applications
  • Case studies and real-world implementations of secure distributed LLMs

Submission

Workshop papers should follow the same submission guidelines and instructions as the main conference, IEEE TPS 2025, and use the standard IEEE two-column conference paper format; the template can be downloaded from here. Page limits are listed under Proceedings & Paper Types below.

Submit your paper through EasyChair and select the “IEEE DISTILL 2025” track.

Proceedings & Paper Types

All accepted papers will be submitted for inclusion in the IEEE Xplore conference proceedings.
Authors may choose to submit either full-length papers (up to 10 pages) or short papers (up to 5 pages); page limits include references.

For questions, please contact the workshop organizers.

Important Dates

  • Submission Deadline: August 1, 2025
  • Notification of Acceptance: August 28, 2025
  • Camera-Ready Due: September 10, 2025
  • Workshop Date: November 14, 2025