Accepted with Minor Revisions: Value of AI-Assisted Scientific Writing
Sanchaita Hazra, Doeun Lee, Bodhisattwa P. Majumder, Sachin Kumar
ACM Conference on Intelligent User Interfaces (ACM IUI), 2026
[Abstract]

In an incentivized randomized controlled trial with a hypothetical conference setup, we divide participants with relevant expertise into an author pool and a reviewer pool. Our 2×2 between-subject design varies the implicit source of the provided abstract and the disclosure of that source. We find that authors make the most edits when revising human-written abstracts without source attribution. Upon disclosure, authors make significantly more edits when abstracts are known to be AI-generated. Reviewer decisions remain unaffected by source, but careful stylistic edits improve acceptance chances. LLM-generated abstracts can reach acceptability comparable to human-written ones with minimal revision.

[paper]
Position: AI Safety should Prioritize the Future of Work
Sanchaita Hazra, Bodhisattwa P. Majumder, Tuhin Chakrabarty
International Conference on Machine Learning (ICML, Oral), 2025
🏆 Outstanding Paper Award — top 8 of 12,000 submissions
[Abstract]

Current efforts in AI safety prioritize filtering harmful content, preventing manipulation of human behavior, and eliminating existential risks in cybersecurity or biosecurity. While pressing, this narrow focus overlooks critical human-centric considerations. We identify the risks of overlooking the impact of AI on the future of work and recommend comprehensive transition support toward the evolution of meaningful labor with human agency. Through the lens of economic theories, we highlight the intertemporal impacts of AI on human livelihood and structural changes in labor markets that exacerbate income inequality. We strongly recommend a pro-worker framework of global AI governance to enhance shared prosperity and economic justice.

[paper]
Data-driven Discovery with Large Generative Models
Bodhisattwa P. Majumder, Harshit Surana, Dhruv Agarwal, Sanchaita Hazra, Ashish Sabharwal, Peter Clark
International Conference on Machine Learning (ICML), 2024
[Abstract]

We present a practical first step toward end-to-end automation of scientific discovery. We posit that Large Generative Models (LGMs) present incredible potential for automating hypothesis discovery; however, LGMs alone are not enough to fully realize this potential.

[paper]
To Tell The Truth: Language of Deception and Language Models
Sanchaita Hazra, Bodhisattwa P. Majumder
North American Chapter of the ACL (NAACL, Oral), 2024
[Abstract]

We analyze a novel TV game show dataset where high-stakes conversations between individuals with conflicting objectives result in lies in the presence of an objective truth. We show that a class of detectors achieves truth-detection performance similar to humans, even when accessing only language cues. Our model detects novel but accurate language cues in many cases where humans fail, opening up the possibility of human-algorithm collaboration to improve lie detection.

[paper]
Experience, Learning and the Detection of Deception
Priyodorshi Banerjee, Sanmitra Ghosh, Sanchaita Hazra
Journal of Economic Criminology, 2023
[Abstract]

This article pursues the hypothesis that repeated exposure and experience cause learning and hence better detection of deception. Using data from a TV game show, we find that increased experience reduces average detection error; further analysis confirms that this relationship is significant and driven by learning.

[paper]
The Good, the Bad, and the Ugly: The Role of AI Quality Disclosure in Deception Detection
Haimanti Bhattacharya, Subhasish Dugar, Sanchaita Hazra, Bodhisattwa P. Majumder
Job Market Paper; Minor Revisions at JBEE
[Abstract]

We investigate how low-quality AI advisors, lacking quality disclosures, can help spread text-based lies while seeming to help people detect them. Participants discern truth from lies by evaluating transcripts from a game show that mimicked deceptive social media exchanges. We find that when relying on low-quality advisors without disclosures, participants' truth-detection rates fall below their own unaided abilities, which recover once the AI's true effectiveness is revealed. Conversely, a high-quality advisor enhances truth detection regardless of disclosure status.

[paper]
Uneven Trust in LLMs: Beliefs About Accuracy Vary Across 11 Countries
Sanchaita Hazra, Marta Serra-Garcia
CESifo Working Paper
[Abstract]

LLMs are emerging as information sources that influence organizational knowledge, though trust in them varies considerably. This paper combines a large-scale experiment with World Values Survey data to examine determinants of trust in LLMs among over 2,900 participants across 11 countries. Trust is significantly lower in high-income countries — especially among individuals with right-leaning political views and lower educational attainment. Patterns of trust in LLMs differ from generalized trust but closely align with trust in traditional information sources.

[paper]
Groups at a Disadvantage in Detecting Lies from Text
Sanchaita Hazra
[Abstract]

Social media groups play a key role in detecting lies by facilitating fact-checking and collective scrutiny. In an online experiment, groups jointly identify the veracity of text-based lies, with the group guess determined by majority vote after members independently submit their binary guesses. We systematically vary the incentives of group members — homogeneous or heterogeneous material payoffs — to mimic real-world social media groups, and measure whether heterogeneous incentives affect lie detection performance.

[paper on request]
Funding Fanny — Microfinance and Empowerment of Women in India
Sanchaita Hazra, Sanchaita Sen

More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
Sanchaita Hazra
Feminist Economics, 2026