Second Workshop on Customizable NLP (CustomNLP4U)


@ ACL 2026, July 3 (9:00 AM - 12:30 PM PT)
San Diego, California, United States

OpenReview Portal
Contact Us: customnlp4u@gmail.com


Call for Papers


Important Dates

All deadlines are 11:59 PM AoE (Anywhere on Earth) time.

Topics of Interest

Most language modeling research today focuses on building generalist models capable of solving a wide range of tasks through the recipe of large-scale pretraining followed by reinforcement learning-based post-training. The growing capabilities of large language models (LLMs) promise increased productivity and innovation [1] and have seen broad adoption by an ever-widening audience for personal and commercial use [2]. However, users' expectations, values, and workflows can vary significantly across domains, applications, organizations, geographies, social groups, cultures, and individuals, and these factors are often missing or under-considered in existing language modeling pipelines. As a result, generalist models deliver non-uniform performance: for example, for users working in sensitive and specialized domains such as law, finance, or health [3, 4], or for individuals, demographics, and cultures less represented online, such as speakers of different language varieties [5].

For language models to deliver on their promise of productivity and innovation, particularly in emerging scenarios with widely varying use cases, we need models that can be tailored to different consumers (individuals, groups, or organizations), easily controlled by them, and able to learn over time [6, 7]; models that can reason about their users' private knowledge and context to provide personalized responses [8, 9]. Alongside methodological questions, model customization raises ethical and security questions related to learning from copyrighted data, protecting user privacy, and addressing pernicious biases. These concerns are especially relevant in sensitive domains, where customization can yield large benefits for stakeholders but also carries higher risks.

The topics of this workshop include (but are not limited to):

Guidelines

Organizers

Sheshera Mysore
Office of Applied Research, Microsoft

Sachin Kumar
The Ohio State University, Allen Institute for AI

Vidhisha Balachandran
Microsoft Research

Shirley Anugrah Hayati
University of Minnesota

Faeze Brahman
Allen Institute for AI

Hanane Nour Moussa
The Ohio State University

Alireza Salemi
University of Massachusetts Amherst


Steering Committee

Hamed Zamani
University of Massachusetts Amherst

Dongyeop Kang
University of Minnesota

Yulia Tsvetkov
University of Washington

References

[1] Singla, A., Sukharevsky, A., Berteletti, E., Yee, L., & Chui, M. (2025). The next innovation revolution—powered by AI. McKinsey & Company.

[2] Liang, W., Zhang, Y., Codreanu, M., Wang, J., Cao, H., & Zou, J. (2025). The Widespread Adoption of Large Language Model-Assisted Writing Across Society. arXiv preprint arXiv:2502.09747.

[3] Mahari, R., Stammbach, D., Ash, E., & Pentland, A. (2023). The Law and NLP: Bridging Disciplinary Disconnects. In Findings of the Association for Computational Linguistics: EMNLP 2023. Association for Computational Linguistics.

[4] Tam, T. Y. C., Sivarajkumar, S., Kapoor, S., Stolyar, A. V., Polanska, K., McCarthy, K. R., Osterhoudt, H., Wu, X., Visweswaran, S., Fu, S., et al. (2024). A Framework for Human Evaluation of Large Language Models in Healthcare Derived from Literature Review. npj Digital Medicine, 7(1), 258.

[5] Rystrøm, J., Kirk, H. R., & Hale, S. (2025). Multilingual != Multicultural: Evaluating Gaps Between Multilingual Capabilities and Cultural Alignment in LLMs. arXiv preprint arXiv:2502.16534.

[6] Narayanan, A., & Kapoor, S. (2025). AI as Normal Technology. Knight First Amendment Institute.

[7] Challapally, A., Pease, C., Raskar, R., & Chari, P. (2025). The GenAI Divide: State of AI in Business 2025.

[8] Mireshghallah, N., Kim, H., Zhou, X., Tsvetkov, Y., Sap, M., Shokri, R., & Choi, Y. (2024). Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory. In The Twelfth International Conference on Learning Representations.

[9] Sorensen, T., Moore, J., Fisher, J., Gordon, M. L., Mireshghallah, N., Rytting, C. M., Ye, A., Jiang, L., Lu, X., Dziri, N., Althoff, T., & Choi, Y. (2024). Position: A Roadmap to Pluralistic Alignment. In Forty-first International Conference on Machine Learning.