Tuhin Chakrabarty

Assistant Professor, Computer Science, SUNY at Stony Brook

I am currently a Research Scientist at Salesforce AI Research and an Assistant Professor in the Computer Science Department at Stony Brook University (SUNY). My work has been published at top peer-reviewed venues such as ACL, EMNLP, NAACL, TACL, CHI, ICML, and CogSci.

My research interests are broadly in AI, NLP, and Human-AI Interaction. My goal is to design and build reliable AI systems that can handle implicature and ambiguity, understand human behavior, and are aligned with what humans require from technology. I often rely on knowledge, methods, and perspectives from multiple disciplines to address complex problems or questions that cannot be fully understood or solved within the boundaries of Computer Science. Some of the research directions I am most excited about:

1) Long-Form Text Generation Evaluation: How can we design better ways to evaluate long-form text generation by drawing on technical skills from computer science and design, in combination with other disciplines such as the humanities, to broaden the communities involved in evaluation?

2) Evaluation for Reasoning and Explainability: How can we build good evaluations that facilitate both understanding and explainability of complex reasoning patterns in language ([1], [2]) and vision?

3) Better Design and Improvement of Human-AI Alignment: Today's powerful AI systems are supported by RLHF, which converts human feedback/preferences into meaningful training signals. For complex tasks this is a fundamental bottleneck, as feedback can be inherently noisy. How can we design better ways to elicit human feedback that improve alignment?

4) Human-AI Collaboration: AI technologies, created by humans and for humans, will increasingly shape the future of the workforce. How can we design better collaboration strategies and human-computer interaction interfaces that understand users' intentions and preferences and help them solve tasks efficiently?

5) Impact of Generative AI on the Labor Market: Today's AI models give no consideration to fair and responsible data use, and their deployment and safety strategies are often myopic, ignoring longitudinal harms. The most powerful models are trained on huge amounts of data from professionals without consent, disrupting their livelihoods. My research aims to evaluate and understand which aspects of skilled labor are being automated and how we can empirically measure the resulting labor market dilution.

I am open to recruiting visiting researchers (pre-doc or PhD students). You should be able to dedicate at least 6 months.

PhD students



Media



Recent News

  • Paper on AI Safety and the Future of Work accepted to ICML 2025 as an oral presentation (Top 1%)
  • Paper on Creativity Evaluation accepted to CogSci 2025
  • New paper on Learning better rewards for improving AI writing
  • Grateful to receive Best Paper Honorable mention at CHI 2025
  • CHI 2025 paper accepted on quantifying and mitigating idiosyncrasies in AI writing
  • Recognized as an outstanding AC at EMNLP 2024
  • Paper on LLM and abstract reasoning accepted to EMNLP
  • First author paper on Creativity Evaluation accepted to CHI 2024, Honolulu
  • First author paper on Creativity Support with LLM accepted to Creativity and Cognition 2024, Chicago

Education

2017-now
Columbia University

Ph.D. in Computer Science

2017-2019
Columbia University

M.S. in Computer Science

2010-2014
Jadavpur University

Bachelor of Engineering in Computer Science

Professional Experience

2023
Google Deepmind

Research Intern

Hosts: David Reitter and Hannah Rashkin

2023
Salesforce AI Research

PhD Research Intern

Hosts: Philippe Laban, Jason Wu, Divyansh Agarwal

2021
NYTimes R&D

NLP Research Fellow

2021
Mosaic (Machine Commonsense Team)

Research Intern (PhD)

Hosts: Yejin Choi and Vered Shwartz

2018
Amazon Alexa

Applied Scientist Intern

2016-2017
UBER (Revenue Team)

Machine Learning Engineer




For more details, please see my full CV (PDF).


See my Full List of Publications here.

Selected Publications




Future of Work


AI Safety should prioritize the Future of Work

Sanchaita Hazra, Bodhisattwa Majumder, Tuhin Chakrabarty
Accepted to ICML 2025
Oral (Top 1%)
[PDF]
Tags: Generative AI, AI Safety, Economics

AI and Human Behavior


AI-Slop to AI-Polish? Aligning Language Models through Edit-Based Writing Rewards and Test-time Computation

Tuhin Chakrabarty*, Philippe Laban*, Chien-Sheng Wu
* denotes Co-First Authors
Tags: LLM and Writing, Reward Modeling, Text Edits, Human AI Alignment, Test-Time Computation, Calibration

Can AI writing be salvaged? Mitigating Idiosyncrasies and Improving Human-AI Alignment in the Writing Process through Edits

Tuhin Chakrabarty, Philippe Laban, Chien-Sheng Wu
🏆 Best Paper Honorable Mention
Tags: LLM and Writing, Text Edits, Human AI Alignment, Behavioral Science

Art or Artifice? Large Language Models and the False Promise of Creativity

Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, Chien-Sheng Wu
Tags: Creativity Evaluation, Divergent Thinking, Story Generation, HCI, Generative AI

Human AI Interaction


Creativity Support in the Age of Large Language Models: An Empirical Study Involving Emerging Writers

Tuhin Chakrabarty*, Vishakh Padmakumar*, Faeze Brahman, Smaranda Muresan
* denotes Co-First Authors
Tags: Co-Creative Generation, Natural Language Instructions, HCI, Generative AI

I Spy a Metaphor: Large Language Models and Diffusion Models Co-Create Visual Metaphors

Tuhin Chakrabarty*, Arkadiy Saakyan*, Olivia Winn*, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, Smaranda Muresan
* denotes Co-First Authors
Accepted to ACL 2023 Findings
Tags: Co-Creative Generation, Natural Language Instructions, Vision and Language Models, AI Art, Generative AI


Machine Learning for NLP


Fine-tuned Language Models are Continual Learners

Thomas Scialom*, Tuhin Chakrabarty*, and Smaranda Muresan
* denotes Co-First Authors
Tags: Continual Learning, Instruction Tuning

Natural Language and Ambiguity


FLUTE: Figurative Language Understanding through Textual Explanations

Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan
Tags: Figurative Language, Natural Language Inference, Free Text Explanation

It’s not Rocket Science: Interpreting Figurative Language in Narratives

Tuhin Chakrabarty, Yejin Choi, Vered Shwartz
Tags: Figurative Language, Multiword Expression, Commonsense


Contact


E-mail: <x>@cs.columbia.edu, where x=tuhin.chakr.
The Interchurch Center, 61 Claremont Avenue, Data Science Institute, 3rd floor (map).