By Beril Borali
How does educational background shape user experience with conversational AI?
Our study, inspired by the work of Skjuve et al., set out to explore how users from varied educational backgrounds engage with ChatGPT (version 3.5), focusing on their behavior and satisfaction levels. The participant pool consisted of 9 university students from a range of disciplines: Computer Science and Engineering (CSE), as well as non-CSE fields including Social Sciences, Health, and Professional Studies.
Two User Groups, One Tool
In this study, participants completed different variations of writing and editing activities using ChatGPT, along with a collaborative writing task. Our method blends structured surveys with live, interactive sessions, providing a comprehensive view of how people from various academic backgrounds use and respond to ChatGPT's capabilities across a series of writing-focused exercises.
Both CSE and non-CSE participants reported, on average, intermediate proficiency in using ChatGPT.
ChatGPT as a Writing and Editing Assistant
• Satisfaction levels increased after refining initial drafts, reaching similar levels for both groups.
• CSE participants showed a notable increase in satisfaction with final essay versions.
• Non-CSE participants rated final essays higher in quality and trustworthiness.
• Interaction ease with ChatGPT rated similarly across tasks and groups.
• Non-CSE users more satisfied with ChatGPT responses; error frequency varied between groups.
Collaborative Writing with ChatGPT
• High engagement across all groups, slightly more in non-CSE disciplines.
• Co-writing dynamics: CSE participants found ChatGPT more complementary.
• Creativity ratings higher among non-CSE participants.
• Story development satisfaction varied slightly, with overall experience rated higher by non-CSE groups.
• Future interest in collaborative writing with ChatGPT was mixed, generally higher in non-CSE groups.
Addressing ChatGPT's Content Policy Challenges in Creative and Professional Settings
In our recent exploration of ChatGPT's usability, we uncovered a notable challenge that significantly affects both creative and professional users. The issue centers on the AI's content policy, which can unintentionally hinder the creative process and disrupt professional tasks.
Creative Limitations for Writers
An example of this issue emerged during a co-writing session (shown in the photo below), where a participant's narrative involving mature themes triggered ChatGPT's content filters.
The AI responded with error messages and halted the creative flow, leading to user frustration and disengagement.
ChatGPT's censorship and flagging could pose a real problem for fiction writers. Much adult fiction deals with mature subject matter such as profane language, violence, or sex. Because ChatGPT flags or censors this type of content, its creative use is heavily limited to mostly children's books. If ChatGPT is to be seen as a tool for writers targeting adult audiences, it will need a model that can handle classic literary tropes such as a detective solving a murder or a cowboy shooting down enemies.
Professional Implications in Legal Contexts
The censorship issue is not limited to creative work; it can also be problematic in professional fields such as law. Take the area of law dealing with domestic abuse: according to the Canadian Centre for Justice and Community Safety Statistics, there were 358,244 victims of police-reported violence in Canada in 2019. Assuming that a large number of these domestic abuse incidents resulted in court cases, lawyers working on any of the potentially hundreds of thousands of such cases that year cannot use ChatGPT to help them, since the case material will most likely include triggering content that would be flagged or censored.
Tailoring ChatGPT for Diverse Content Sensitivities
It is important to note that there are use cases where a ChatGPT that avoids mature content is appropriate, such as a help-desk chatbot or an elementary school tutor chatbot. However, as explained above, there are many instances where a ChatGPT that can handle potentially triggering material is necessary. Hence, OpenAI should offer variants of ChatGPT with different sensitivities to mature themes, depending on the use case and audience.
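The tiered-sensitivity idea could be pictured as a simple policy table mapping audience profiles to moderation thresholds. The sketch below is entirely hypothetical: the audience names, maturity scores, and `is_allowed` helper are invented for illustration and do not correspond to any real OpenAI interface.

```python
# Hypothetical sketch of per-audience content sensitivity.
# Scores, names, and thresholds are invented for illustration only;
# this is NOT OpenAI's actual moderation API or policy.

# Higher threshold = more mature content tolerated before flagging.
AUDIENCE_THRESHOLDS = {
    "elementary_tutor": 0.1,  # very strict: children's audience
    "help_desk": 0.3,         # strict: general-purpose support
    "adult_fiction": 0.8,     # permissive: violence/profanity in fiction
    "legal_casework": 0.9,    # most permissive: court material
}

def is_allowed(mature_content_score: float, audience: str) -> bool:
    """Return True if text with the given maturity score may pass
    for the chosen audience profile."""
    return mature_content_score <= AUDIENCE_THRESHOLDS[audience]

# The same passage (score 0.7, e.g. a detective novel's murder scene)
# is blocked for a children's tutor but allowed for adult fiction.
print(is_allowed(0.7, "elementary_tutor"))  # False
print(is_allowed(0.7, "adult_fiction"))     # True
```

The point of the sketch is only that a single global filter forces one threshold on every user, whereas a per-audience table lets the detective novelist and the elementary tutor each get an appropriate level of filtering.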