|
Sanchaita Hazra
sanchaita[dot]hazra[at]utah[dot]edu
Department of Economics
University of Utah
I am a PhD candidate in Economics at the University of Utah, co-advised by Haimanti Bhattacharya and Subhasish Dugar. I also actively collaborate with the Allen Institute for Artificial Intelligence.
My research interests are behavioral economics, experimental economics (lab/field), applied microeconomics, and artificial intelligence (AI). My work applies the methodologies of experimental economics to gain deeper insights into human decision-making. I study human perception of, and trust in, AI assistance in decision-making tasks such as information gathering, lie detection, and scientific writing.
My PhD committee includes my advisors, Gabriel A. Lozada (UUtah), Daniel Martin (UCSB), and Chris Callison-Burch (UPenn).
In the past, I worked as a statistician at DeepFlux (now acquired by CSA India) and as a research assistant at the Indian Statistical Institute, Kolkata.
In 2021, I was also a Lecturer of Economics at Women's Christian College, University of Calcutta. I founded Alankar, a women-run online jewelry brand fostering positive social impact on employability.
|
CV |
LinkedIn
|
|
Published Works
|
|
Position: AI Safety should prioritize the Future of Work
with Tuhin Chakrabarty and Bodhisattwa P. Majumder
Published, July 2025
🏆 Outstanding Paper Award (one of 8 out of 12,000 submissions)
International Conference on Machine Learning (ICML, Oral), 2025
[paper]
Current efforts in AI safety prioritize filtering harmful content, preventing manipulation of human behavior, and eliminating existential risks in cybersecurity or biosecurity. While pressing, this narrow focus overlooks critical human-centric considerations that shape the long-term trajectory of a society. We identify the risks of overlooking the impact of AI on the future of work and recommend comprehensive transition support towards the evolution of meaningful labor with human agency. Through the lens of economic theories, we highlight the intertemporal impacts of AI on human livelihood and the structural changes in labor markets that exacerbate income inequality. To address this, we strongly recommend a pro-worker framework of global AI governance to enhance shared prosperity and economic justice while reducing technical debt.
|
|
Position: Data-driven Discovery with Large Generative Models
with Bodhisattwa P. Majumder, Harshit Surana, Dhruv Agarwal, Ashish Sabharwal, Peter Clark
Published, July 2024
International Conference on Machine Learning (ICML), 2024
[paper]
A practical first step toward end-to-end automation of scientific discovery. We posit that Large Generative Models (LGMs) present incredible potential for automating hypothesis discovery; however, LGMs alone are not enough.
|
|
To Tell The Truth: Language of Deception and Language Models
with Bodhisattwa P. Majumder
Published, June 2024
North American Chapter of the Association for Computational Linguistics (NAACL, Oral), 2024
[paper]
We analyze novel TV game show data in which conversations in a high-stakes environment between individuals with conflicting objectives result in lies in the presence of an objective truth, a distinguishing feature absent in previous text-based deception datasets. We show that there exists a class of detectors with truth-detection performance similar to that of humans, even though the former accesses only language cues while the latter detects lies using both language and audio-visual cues. Our model detects novel but accurate language cues in many cases where humans failed to detect deception, opening up the possibility of humans collaborating with algorithms to augment their ability to detect the truth.
|
|
Experience, Learning and the Detection of Deception
with Priyodarshi Banerjee and Sanmitra Ghosh
Published, July 2023
Journal of Economic Criminology
[paper]
Deceptive communication or behavior can inflict loss, making it important to be able to distinguish these from trustworthy ones. This article pursues the hypothesis that repeated exposure or experience can cause learning and hence better detection of deception. We investigate using data culled from events in a TV game show. Decision-makers in the show repeatedly faced situations where they had to correctly identify an individual from within a group all claiming to be that individual. Increased experience reduced average detection error in the sample. Analysis of the data suggested this relationship was significant and driven by learning.
|
Working Papers
|
|
The Good, the Bad, and the Ugly: The Role of AI Quality Disclosure in Lie Detection
with Bodhisattwa P. Majumder, Haimanti Bhattacharya, and Subhasish Dugar
Job Market Paper; Under Review, JBEE
[paper]
We investigate how low-quality AI advisors, lacking quality disclosures, can help spread text-based lies while seeming to help people detect lies. Participants in our experiment discern truth from lies by evaluating transcripts from a game show that mimicked deceptive social media exchanges on topics with objective truths. We find that when relying on low-quality advisors without disclosures, participants' truth-detection rates fall below their own unaided abilities, recovering only once the AI's true effectiveness is revealed. Conversely, a high-quality advisor enhances truth detection regardless of disclosure. We discover that participants' expectations about AI capabilities contribute to their undue reliance on opaque, low-quality advisors.
|
|
Uneven Trust in LLMs: Beliefs About Accuracy Vary Across 11 Countries
with Marta Serra-Garcia
[paper]
LLMs are emerging as information sources that influence organizational knowledge, though trust in them varies. This paper combines data from a large-scale experiment and the World Values Survey (WVS) to examine the determinants of trust in LLMs. The experiment measures trust in LLM-generated answers to policy-relevant questions among over 2,900 participants across 11 countries. Trust in the LLM is significantly lower in high-income countries—especially among individuals with right-leaning political views and lower educational attainment—compared to low- and middle-income countries. Using large-scale data on trust from the WVS, we show that patterns of trust in the LLM differ from those in generalized trust but closely align with trust in traditional information sources. These findings highlight that comparing trust in LLMs to other forms of societal trust can deepen our understanding of the potential societal impacts of AI.
|
|
The Value of AI-Assisted Scientific Writing
with Doeun Lee, Bodhisattwa P. Majumder, and Sachin Kumar
Accepted with Minor Revisions, October 2025
[paper]
Utilizing incentive-compatible randomized experiments, we evaluate the potential of LLMs to support domain experts in writing scientific text, with a focus on abstract composition. We design a peer-review setting where participants with relevant experience are grouped into authors and reviewers. Authors edit either original (control) or AI-generated (treatment) abstracts of published research from top-tier conferences. We find that authors make the most edits when provided with original abstracts, especially when the source remains undisclosed. With disclosure, we see the opposite trend. When reviewers evaluate whether the edited abstract does adequate justice to the research presented in the published work, extensively edited versions of either original or AI-generated abstracts do not necessarily improve acceptance.
|
|
Groups at a Disadvantage in Detecting Lies from Text
Paper on Request, November 2025
Social media groups (e.g., WhatsApp groups) play a key role in detecting lies by facilitating fact-checking, collective scrutiny, and the spreading or debunking of misinformation. At the same time, group members often act with varying incentives, ranging from genuine truth-seeking to reinforcing biases or commercial interests. In an online experiment, groups jointly judge the veracity of text-based statements, where the group guess is determined by majority vote after members independently submit their binary guesses. We systematically vary the incentives of the group members, using homogeneous or heterogeneous material payoffs to mimic real-world social media groups, and measure whether groups with heterogeneous incentives perform differently at detecting lies than homogeneous groups.
|
|
Funding Fanny - Microfinance and Empowerment of Women in India
with Sanchita Sen
Bachelors Thesis
[paper]
Oral presentation at International Conference on Sustainable Development and Education, 2020
Oral presentation at Research Scholar's Workshop 2020, Visva-Bharati
Women make up a substantial majority of India's poor and are among the most vulnerable members of society. Organizing women through Self Help Groups and equipping them to undertake income-generating activities through the formation of microenterprises have created an economic revolution in the country. The paper focuses on the scope and rationale of microfinance in India and how the Self Help Group-Bank Linkage Programme by NABARD has played its part in financially empowering rural women. We find a steady increase in loan disbursements, but a sharp increase in loans outstanding over a ten-year period.
|
Awards
- [2025] Featured Job Market Candidate for Lightning Talk & Poster, NABE Tech Conference
- [2025] Outstanding Paper Award, International Conference on Machine Learning
- [2025] J.J. Rasmussen Research Award of $3,000 for Thesis Dissertation, The University of Utah
- [2024] CSBS Graduate Travel Awards, The University of Utah
- [2024] Haskell Graduate Student Research Award of $1,500, Department of Economics, The University of Utah
- [2023] Research Award of $3,000, Global Change and Sustainability Center and the Wilkes Center for Climate Science & Policy, The University of Utah
- [2023] Graduate Student Travel Assistance Award, The University of Utah
- [2019-2020] Swami Vivekananda Merit-cum-Means Scholarship for Research in Economics, University of Calcutta
|
Talks
- [2025] AI, Misperceptions, and Economic Loss at HAIC, Kellogg School of Management, Northwestern University.
- [2024] The Good, the Bad and the Ugly: Effects of AI Quality Information on Detecting Text-Based Lies at NABETech, Seattle.
- [2024] The Good, the Bad and the Ugly: Effects of AI Quality Information on Detecting Text-Based Lies at Summer School, Soleto, Italy.
- [2024] To Tell The Truth: Language of Deception and Language Models at NAACL, Mexico (recorded talk starts at 1:02:00).
- [2024] Humans, Artificial Intelligence, and (Text-based) Misinformation at WEAI, Seattle.
- [2023] Humans, Artificial Intelligence, and (Text-based) Misinformation at ESA, Charlotte.
- [2023] Experience, Learning and the Detection of Deception at WEAI, San Diego.
- [2021] Experience, Learning and the Detection of Deception at Behavioral Econ Workshop, UofU.
- [2020] Funding Fanny--Microfinance and Empowerment of Women in India at Intl Conf on Sustainable Dev & Edu.
|
Teaching
- Associate Instructor, Probability and Statistics, Econ 3640, Summer 2025, UofU
- Associate Instructor, Principles of Macroeconomics, Econ 2020, Summer 2024, UofU
- Associate Instructor, Principles of Microeconomics, Econ 2010, Fall 2024, Spring 2024, Fall 2023, UofU
- Associate Instructor, Intermediate Microeconomics, Econ 4010, 6010; Summer 2023, UofU
- Teaching Instructor, Q-Pod Tutoring; Spring 2025, Spring 2024, Spring 2023, Fall 2022
- Visiting Lecturer, Spring Semester 2021, Women's Christian College, University of Calcutta
|
© Sanchaita Hazra
Thanks to Jon Barron for this nice template
Vibrant Kolkata skyline art is from here
|
|