<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN"
  "https://jats.nlm.nih.gov/publishing/1.3/JATS-journalpublishing1-3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="1.3" article-type="research-article">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">IJAR</journal-id>
      <journal-title-group>
        <journal-title>Indonesian Journal of Advanced Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">2986-0768</issn>
      <publisher>
        <publisher-name>Formosa Publisher</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.55927/ijar.v4i7.14989</article-id>
      <title-group>
        <article-title>An Existentialist Philosophical Perspective on the Ethics of ChatGPT Use</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author" corresp="yes">
          <name>
            <surname>Andreas</surname>
            <given-names>Otto Mart</given-names>
          </name>
          <aff>Universitas Pelita Harapan, Indonesia</aff>
          <email>ottomart31@gmail.com</email>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Samosir</surname>
            <given-names>Elisabet Marthawati</given-names>
          </name>
          <aff>Universitas Pelita Harapan, Indonesia</aff>
        </contrib>
      </contrib-group>
      <pub-date pub-type="epub">
        <day>27</day>
        <month>07</month>
        <year>2025</year>
      </pub-date>
      <history>
        <date date-type="received">
          <day>11</day>
          <month>05</month>
          <year>2025</year>
        </date>
        <date date-type="rev-recd">
          <day>25</day>
          <month>06</month>
          <year>2025</year>
        </date>
        <date date-type="accepted">
          <day>27</day>
          <month>07</month>
          <year>2025</year>
        </date>
      </history>
      <volume>4</volume>
      <issue>7</issue>
      <fpage>1411</fpage>
      <lpage>1426</lpage>
      <abstract>
        <p>The advancement of artificial intelligence (AI), particularly through ChatGPT, offers significant convenience but also raises ethical concerns related to autonomy, responsibility, and authenticity. This study critically examines AI usage through the lens of existentialist philosophy. Using qualitative methods and literature review, it explores the views of Jean-Paul Sartre, Martin Heidegger, and Søren Kierkegaard to analyze the ethical implications of AI for human existence. The findings suggest that AI use must align with personal freedom and responsibility. ChatGPT is not merely a tool but a medium that can shape human authenticity and may lead to existential alienation if uncritically used. Thus, ethical awareness and philosophical reflection are essential in navigating AI’s role in modern life.</p>
      </abstract>
      <kwd-group>
        <kwd>Ethics</kwd>
        <kwd>AI ChatGPT</kwd>
        <kwd>Existentialism</kwd>
      </kwd-group>
      <permissions>
        <license>
          <ali:license_ref xmlns:ali="http://www.niso.org/schemas/ali/1.0/">http://creativecommons.org/licenses/by/4.0/</ali:license_ref>
          <license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License.</license-p>
        </license>
      </permissions>
    </article-meta>
  </front>

  <body>

<sec>
  <title>INTRODUCTION</title>
  <p>The advancement of artificial intelligence (AI) technology over the
  past decade has transformed various aspects of human life, including
  education, healthcare, business, and communication. One of the most
  prominent innovations is the emergence of ChatGPT, a natural
  language-based AI system capable of answering questions, generating
  text, and engaging in conversation in a human-like manner. Despite its
  many practical benefits, the use of this technology also raises
  profound ethical dilemmas.</p>
  <p>A study conducted by Niyu, Dwihadiah, Gerungan, and Purba (2024) on
  the use of ChatGPT among university students and lecturers in
  Indonesia indicates that this AI has become an integral part of
  everyday academic practices. Most students use ChatGPT to complete
  assignments, summarize course materials, and draft papers, while
  lecturers employ it to assist in preparing teaching materials and
  references. However, the study also reveals significant concerns
  regarding the decline in critical thinking, originality, and the
  growing tendency toward passivity in the learning process. Several
  respondents even admitted to accepting ChatGPT’s responses without
  further validation or reflection (Niyu et al., 2024).</p>
  <p>In line with these findings, research by Arochma, Purnaningsih,
  Anggreani, and Faroqi (2023) highlights unethical practices in
  students’ use of ChatGPT, such as plagiarism, overreliance on AI
  without sufficient understanding, and a lack of ethical awareness in
  utilizing information technology. This study emphasizes that low
  levels of digital ethics literacy and weak academic integrity are the
  primary factors contributing to the misuse of AI-based technology.
  These findings suggest that beyond evaluating technological
  effectiveness, attention must also be given to the moral dimensions
  and personal responsibility in its use.</p>
  <p>On the other hand, a systematic literature review conducted by
  Putri, Pradana, Yustraini, and Efansyah (2024) found that ChatGPT also
  holds positive potential for enhancing students' skills,
  collaboration, and creativity. This AI can support idea exploration,
  accelerate the learning process, and facilitate interdisciplinary
  digital collaboration. Nonetheless, they also stress the importance of
  lecturers’ roles in guiding the wise use of ChatGPT to ensure it
  supports the development of students’ thinking capacities rather than
  becoming a substitute for thinking itself. Their study advocates a
  balanced approach—leveraging technology while instilling values of
  responsibility and critical reflection.</p>
  <p>These findings underscore the urgent need to examine more deeply:
  Are users still responsible for the content generated by AI? Does
  dependence on AI weaken human autonomy in thought? Does the use of
  such technology represent an authentic decision or merely a form of
  escape from existential responsibility? In this context,
  existentialist philosophy offers a sharp and critical perspective.
  Existentialism views the human being as a free agent who must choose
  and take responsibility for their own existence. Thinkers such as
  Jean-Paul Sartre, Martin Heidegger, and Søren Kierkegaard emphasize
  the importance of authenticity, personal responsibility, and awareness
  of existential meaning in a world that is increasingly complex and
  filled with distractions—an outlook highly relevant to evaluating the
  use of modern technology, including AI.</p>
  <p>This study focuses on an ethical analysis of ChatGPT use through
  the lens of existentialist philosophy. The central research questions
  posed include: How does existentialist philosophy understand the
  relationship between humans and technology, particularly AI? What are
  the ethical implications of ChatGPT use for human freedom and
  existential responsibility? How can human authenticity be preserved in
  interactions with AI? The aim of this study is to offer a
  philosophical reflection as a foundation for ethical consideration in
  the use of AI, and to promote awareness that technology, regardless of
  its sophistication, must remain under the control of a responsible
  human consciousness.</p>
</sec>

<sec>
  <title>THEORETICAL REVIEW</title>
  <sec id="existentialist-philosophy">
    <title>Existentialist Philosophy</title>
    <p>Existentialist philosophy is a school of thought that emphasizes
    freedom, individual responsibility, the search for meaning, and
    existential awareness in the face of life’s absurdities (Camus,
    1942). Jean-Paul Sartre underscores that human beings are
    fundamentally free—&quot;condemned to be free&quot;—and must be held
    accountable for every choice they make (Sartre, 2007). When an
    individual denies their own freedom by pretending they have no
    choice, they fall into a state of bad faith (Sartre, 1947). Bad
    faith occurs when one relinquishes their agency to social norms,
    roles, or external systems in an effort to escape the anxiety that
    comes with existential freedom.</p>
    <p>By pretending not to be free, individuals avoid the
    responsibility of defining the meaning of their own lives. In Being
    and Nothingness, Sartre (1956) warns that humans must not abdicate
    their responsibility to external entities—including technology. In
    The Question Concerning Technology, Heidegger (1977) argues that
    modern technology operates through a mode of enframing (Ge-stell), a
    way of thinking that reduces reality to a mere resource to be
    manipulated. This instrumental mindset risks alienating human beings
    from an authentic relationship with the world, especially if
    everything—including the self—is treated merely as a “technical
    object.”</p>
    <p>Representing religious existentialism, Søren Kierkegaard (1980)
    emphasizes the significance of personal choice and the awareness of
    “despair” as a path to authenticity. He asserts that avoiding
    existential responsibility leads only to alienation. From the
    existentialist perspective, the use of AI systems such as ChatGPT
    must be interrogated not only practically, but also ontologically
    and morally: does this technology guide humans toward existential
    authenticity, or does it distance them from themselves?</p>
  </sec>
  <sec id="morality-between-freedom-and-responsibility">
    <title>Morality: Between Freedom and Responsibility</title>
    <p>Morality is essentially a set of principles, values, or norms
    used to judge whether an action is ethically right or wrong (Gert,
    2004). It is not merely an external guideline but also reflects an
    individual's inner commitment to living with
    value-consciousness.</p>
    <p>Freedom is a fundamental prerequisite for morality. Humans are
    born with the capacity to choose—even the choice not to act is
    itself a form of choice. This freedom is not an unlimited liberty,
    but an existential freedom that demands reflection and value
    judgment. Morality becomes meaningful only when exercised in the
    context of freedom, as only consciously chosen actions can be
    ethically significant.</p>
    <p>In both modern and existentialist ethics, individual freedom is
    invariably accompanied by responsibility. Emmanuel Levinas (1961),
    for instance, emphasizes that moral responsibility arises when one
    becomes aware of the existence of the Other. To act morally is to
    accept responsibility for the consequences of one’s choices. Humans
    are responsible not only for themselves, but also for all of
    humanity, as their actions represent values to others.</p>
    <p>In Kantian moral philosophy, freedom is the foundation of moral
    autonomy. Only a free agent is capable of acting according to the
    categorical imperative—a universal moral principle that can be
    applied without contradiction (Kant, 1993). Responsibility in this
    framework means submitting to a moral law determined by one's own
    practical reason, rather than by external pressures or subjective
    inclinations.</p>
    <p>The relationship among the three—morality, freedom, and
    responsibility—is dialectical: morality cannot be understood
    without freedom, and freedom is hollow without responsibility.
    Without morality, freedom turns into anarchy; without freedom,
    morality becomes coercion; without responsibility, freedom becomes
    unethical (MacIntyre, 1981). Therefore, in many modern and
    contemporary ethical theories, these three elements are seen as an
    inseparable unity in human life as moral agents. The relationship
    between morality, freedom, and responsibility is illustrated in
    Figure 1.</p>
    <fig id="fig1">
      <label>Figure 1</label>
      <caption>
        <p>The relationship between morality, freedom, and responsibility</p>
      </caption>
      <graphic mimetype="image" mime-subtype="jpeg" xlink:href="vertopal_1f5f061512e24aa0ac11dc922867cd38/media/image3.jpeg" />
    </fig>
  </sec>
</sec>
<sec id="section">
  <title></title>
  <sec id="the-ethics-of-technology-and-artificial-intelligence">
    <title>The Ethics of Technology and Artificial Intelligence</title>
    <p>Technology ethics is a branch of applied ethics that examines the
    moral implications of technological development and its use in human
    life. According to Hans Jonas (1984) in The Imperative of
    Responsibility, modern technology possesses a profound
    transformative power, thereby necessitating a new form of ethics—one
    that considers the long-term consequences of technology for the
    future of humanity.</p>
    <p>In the context of AI, several prominent ethical issues emerge.
    First is the question of moral responsibility: who is accountable
    when AI causes harm? Second is the tension between autonomy and
    technological determinism: to what extent do humans retain agency in
    their interactions with AI? Third is the concern for privacy and
    information manipulation, as AI can be used to collect personal data
    and influence public opinion on a massive scale. According to
    Bostrom and Yudkowsky (2014), AI is an exponential technology with
    the potential to be either beneficial or harmful, depending on the
    ethical frameworks and regulatory mechanisms that accompany its
    development.</p>
    <p>ChatGPT represents a form of generative AI based on large
    language models (LLMs), developed by OpenAI (OpenAI, 2023). This
    model is trained on vast amounts of textual data and is designed to
    produce responses that are relevant, natural, and contextually
    appropriate to user input (OpenAI, 2024).</p>
    <p>The advantages of ChatGPT include its ability to generate
    coherent text quickly, access broad knowledge across various
    domains, and engage in human-like interactive dialogue (McKinsey
    Global Institute, 2023). Nevertheless, several challenges remain.
    These include the potential spread of inaccurate information and
    users’ growing cognitive dependence on AI (Boddington, 2017).
    Furthermore, users often fall into the illusion of intelligence,
    treating ChatGPT as if it &quot;understands&quot; and &quot;intends
    well,&quot; even though AI systems lack consciousness (Floridi &amp;
    Cowls, 2019). This phenomenon underscores the need for philosophical
    reflection on the role of AI in reshaping how humans think and act
    (Binns, 2018).</p>
    <p>Generative artificial intelligence, such as ChatGPT and similar
    tools, has become a significant aid in academic writing. It enables
    users to produce coherent, fast, and contextually relevant text
    based on specific prompts (OpenAI, 2023). However, the use of
    generative AI in scholarly writing still faces several limitations
    that require critical evaluation.</p>
    <p>One of the primary limitations is AI’s inability to autonomously
    verify the truth or accuracy of data. Large language models like
    ChatGPT generate text based on statistical patterns from their
    training data—not through critical reasoning or actual research
    (Bender et al., 2021). As a result, AI may produce information that
    appears valid but is in fact inaccurate, biased, or even entirely
    fabricated (hallucinations).</p>
    <p>Another limitation lies in its dependence on training data.
    Generative AI can only generate content based on information
    available up to its training cutoff. It cannot automatically access
    the most recent findings or literature unless integrated with
    real-time data retrieval systems or external databases (Bommasani et
    al., 2021). This limits its capacity to accurately reference or cite
    up-to-date research.</p>
    <p>Academic writing is not merely about presenting information—it
    also requires critical argumentation, synthesis of ideas, and
    original contributions to a specific field of knowledge. Due to its
    predictive-statistical nature, AI lacks contextual awareness and
    deep understanding of the complexities within scholarly discourse
    (Zhuo et al., 2023). Therefore, AI cannot replace the human role in
    crafting philosophically or methodologically meaningful academic
    arguments.</p>
    <p>The use of generative AI also raises ethical questions,
    particularly regarding plagiarism and the authenticity of academic
    work. Although AI does not directly copy content, the generated text
    may closely resemble existing works due to linguistic pattern
    similarity. This creates a risk of academic misconduct if not
    accompanied by proper clarification and attribution (Nature
    Editorial, 2023).</p>
    <p>Generative AI’s limitations are also evident in the references
    and citations it produces. These are often incomplete, invalid, or
    even fabricated. While AI may simulate academic structures (such as
    APA or MLA styles), it does not inherently access authoritative
    academic databases like Scopus, JSTOR, or PubMed—unless integrated
    with external tools (Thorp, 2023). The limitations of generative AI
    in academic journal writing are illustrated in Figure 2.</p>
    <fig id="fig2">
      <label>Figure 2</label>
      <caption>
        <p>The limitations of generative AI in academic journal writing</p>
      </caption>
      <graphic mimetype="image" mime-subtype="jpeg" xlink:href="vertopal_1f5f061512e24aa0ac11dc922867cd38/media/image4.jpeg" />
    </fig>
  </sec>
</sec>

<sec>
  <title>METHODOLOGY</title>
  <p>This study adopts a qualitative approach using the method of
  library research, with a philosophical-interpretive orientation. This
  method is commonly employed in philosophical studies to explore deeper
  meanings and conduct critical analyses of moral and ontological
  concepts (Baggini &amp; Fosl, 2010). The aim of this research is not
  to produce quantitative or statistical data, but to explore,
  understand, and interpret philosophical meanings related to the ethics
  of technology use—particularly ChatGPT—within the framework of
  existentialist philosophy (Guignon, 2004).</p>
  <p>The type of research conducted is normative philosophical inquiry,
  which seeks to critically analyze ideas, concepts, and moral
  principles underlying the phenomenon of AI usage, especially from an
  existentialist standpoint. The existential approach is used to
  understand how human beings, as existential subjects, relate to
  intelligent technologies. As Sartre (1956) asserts in Being and
  Nothingness, humans are free agents who must take responsibility for
  their choices and actions.</p>
  <p>The data sources in this research consist of both primary and
  secondary materials. Primary sources include original works by
  existentialist philosophers, such as Being and Nothingness (Jean-Paul
  Sartre), The Question Concerning Technology (Martin Heidegger), and
  The Sickness Unto Death (Søren Kierkegaard). These texts provide a
  philosophical foundation for understanding modern phenomena through an
  existential lens, including the human-technology relationship and the
  moral responsibility therein (Heidegger, 1977; Kierkegaard, 1980).</p>
  <p>Secondary sources encompass books, scholarly journal articles, AI
  ethics reports, and technology publications, including works such as
  those by Floridi and Cowls (2019) and Boddington (2017), which offer
  contemporary ethical frameworks related to AI. Additionally, popular
  references and empirical research findings are utilized to enrich the
  context and bridge philosophical thought with actual phenomena.</p>
  <p>Data is collected through literature review, examining both
  philosophical and contemporary references in print and digital
  formats. The researcher also analyzes several case studies of ChatGPT
  use in contexts such as education, work, and everyday life as concrete
  examples to be interpreted philosophically (Turkle, 2011; Carr,
  2011).</p>
  <p>Data analysis is conducted using philosophical hermeneutics, a
  process of interpreting philosophical texts and applying them to
  contemporary realities (Palmer, 1969). The main steps in the analysis
  include: Textual interpretation – understanding the meaning of
  existential concepts such as freedom, responsibility, authenticity,
  and alienation (May, 1950; Fromm, 1955); Conceptual application –
  linking these concepts to the phenomenon of ChatGPT use; Critical
  evaluation – assessing the ethical impact of AI use on human existence
  based on the framework of existentialist philosophy (Frankl, 1946;
  Camus, 1942).</p>
  <p>To ensure the validity and credibility of the analysis, the
  researcher applies source triangulation, comparing philosophical
  perspectives from different thinkers with contemporary sources from
  the fields of technology and ethics. Furthermore, the researcher
  maintains consistency in argumentation within the existentialist
  philosophical framework used as the basis for interpretation.</p>
</sec>

<sec>
  <title>RESEARCH RESULTS</title>
  <p>Based on philosophical inquiry and interpretive literature
  analysis, this study finds that the use of ChatGPT as a form of
  generative AI carries complex existential implications for human
  beings. The primary findings can be summarized across three
  dimensions: the subject-tool relationship, existential responsibility,
  and authenticity in the digital context.</p>
  <sec id="chatgpt-as-a-tool-that-blurs-the-boundary-between-subject-and-technology">
    <title>ChatGPT as a Tool That Blurs the Boundary Between Subject and
    Technology</title>
    <p>The interpretation reveals that ChatGPT has surpassed its role as
    a mere technical tool. It has become an artificial cognitive entity
    that influences how humans think, learn, and interact. In this
    context, there is a growing tendency toward cognitive dependency, in
    which users relinquish critical thinking processes to AI. This
    supports the analyses of Nicholas Carr (2011) and Sherry Turkle
    (2011), who highlight the transformation of cognitive structures and
    human relationships due to technological reliance.</p>
    <p>Existentially, this phenomenon signifies a shift in the human
    position—from that of an active subject to a passive entity that
    merely consumes answers. Heidegger (1977) refers to this condition
    as enframing, where humans begin to treat both themselves and the
    world as mere technical objects. Thus, the relationship between
    humans and ChatGPT is no longer a functional subject-tool
    interaction, but an ontological relationship that shapes how humans
    experience both the world and themselves.</p>
  </sec>
  <sec id="escape-from-freedom-and-existential-responsibility">
    <title>Escape from Freedom and Existential Responsibility</title>
    <p>An analysis of user behavior, especially in educational and
    decision-making contexts, reveals symptoms of bad faith as
    conceptualized by Sartre (1956). When individuals delegate
    decision-making processes to AI and cease to exercise reflective
    freedom of thought, they are denying their existence as free and
    responsible beings.</p>
    <p>In this regard, AI such as ChatGPT is not merely a tool, but a
    medium of escape from the existential anxiety associated with
    freedom of choice. The use of technology without existential
    awareness undermines human autonomy and moral responsibility,
    placing individuals at risk of falling into algorithmic dependency
    that dulls the will to choose freely.</p>
  </sec>
  <sec id="authenticity-undermined-by-artificial-comfort">
    <title>Authenticity Undermined by Artificial Comfort</title>
    <p>This study also finds that the excessive use of AI in idea
    generation, task completion, or the pursuit of meaning potentially
    erodes the authenticity of human thought. According to Heidegger and
    Kierkegaard, authenticity can only be achieved through conscious and
    reflective existential engagement. When technology is employed to
    avoid anxiety, difficult choices, or deep intellectual labor, it
    becomes a space of escape from the self’s authenticity.</p>
    <p>Nevertheless, the findings also suggest that AI like ChatGPT does
    not inherently produce negative consequences. When used consciously
    and ethically, technology can serve as a space of
    liberation—enabling humans to expand their thinking capacities,
    strengthen autonomy, and access new forms of meaning in the digital
    realm. In this sense, AI becomes a reflective instrument employed by
    an autonomous subject, rather than a replacement for human existence
    itself.</p>
  </sec>
</sec>


<sec>
  <title>DISCUSSION</title>
  <sec id="chatgpt-in-the-existential-relationship-between-subject-and-instrument">
    <title>ChatGPT in the Existential Relationship: Between Subject and
    Instrument</title>
    <p>In an increasingly automated world, technologies like ChatGPT are
    no longer merely technical tools; they have become entities that
    actively reshape the way humans think. AI models such as ChatGPT
    facilitate instant access to information and idea generation, which
    previously required contemplative and reflective processes. This
    shift fosters cognitive dependency, a tendency to rely on AI for
    tasks that demand critical thinking and deep understanding.</p>
    <p>Nicholas Carr (2011) warns that the ease of accessing information
    through digital technology has altered human cognitive
    structures—promoting shallow, fast, and reactive thinking over deep
    and reflective thought. Similarly, Sherry Turkle (2011) emphasizes
    that dependence on communication technologies and intellectual
    automation, such as ChatGPT, can erode human capacities for
    contemplation, critical thinking, and meaningful connection.</p>
    <p>If not used consciously, AI may diminish one's abilities for
    reflection, analysis, and evaluation—particularly in education,
    decision-making, and opinion formation. Over time, this alters not
    only individual cognitive patterns but also contributes to a
    socially and existentially shallow landscape.</p>
    <p>Social interaction is also undergoing transformation. ChatGPT and
    similar systems are increasingly replacing human roles in basic
    communication contexts, such as customer service or tutoring. While
    efficient, this raises concerns about dehumanization in
    communication and the decline of human social skills (Turkle, 2011).
    On the other hand, AI can also foster global collaboration and
    digital interaction, forming new social ecosystems that transcend
    geographical and cultural boundaries (Floridi, 2014).</p>
    <p>Human existential freedom, which is rooted in conscious
    decision-making, is significantly affected. AI like ChatGPT
    accelerates the decision-making process by instantly providing
    relevant information. While this enhances efficiency, decisions made
    with AI assistance are vulnerable to biases embedded in training
    data (Binns, 2018; McKinsey Global Institute, 2023). When users rely
    on ChatGPT without acknowledging its limitations, the resulting
    decisions can be partial, unethical, or misleading. Consequently,
    the human as an existential subject risks treating both self and
    others as mere objects to be optimized—echoing Heidegger’s (1977)
    critique of technological enframing.</p>
    <p>Moreover, using ChatGPT for writing, answering questions, or
    engaging in dialogue may lead to existential laziness—a condition in
    which individuals surrender their cognitive efforts to machines
    (Carr, 2011). This is not simply a matter of practicality; it
    touches on a deeper question: Does the individual remain a
    reflective subject or become a mere operator of algorithms?
    Existential laziness is not physical sloth but a crisis of meaning
    and inner emptiness. This condition resonates with nihilism,
    despair, and self-alienation. Viktor Frankl (1946) argues that
    humans are driven by the will to find meaning (logotherapy). When
    meaning is absent, one may fall into existential emptiness expressed
    through apathy, anxiety, and withdrawal.</p>
    <p>Modern life and consumerist culture pressure individuals to live
    in routines devoid of reflection (Fromm, 1955; May, 1950). Combined
    with social media pressures and identity crises, existential
    laziness becomes
    widespread, giving rise to absurdity of existence (Camus, 1942). In
    such a state, it is essential for individuals to create their own
    meaning in a world that offers no definitive answers. Existential
    laziness can be seen as a passive response to that absurdity.</p>
  </sec>
  <sec id="chatgpt-and-existential-responsibility">
    <title>ChatGPT and Existential Responsibility</title>
    <p>Jean-Paul Sartre (1956) asserts that human beings are
    fundamentally free and cannot offload responsibility onto others,
    including technology. In this context, using ChatGPT is a conscious
    choice that demands full accountability. When someone uses ChatGPT
    to complete academic tasks without engaging in critical thought,
    they are essentially denying their own freedom by fleeing from the
    responsibility of thinking. Sartre terms this as bad faith (mauvaise
    foi), the condition in which a person pretends not to be a free and
    responsible agent (Sartre, 1956).</p>
    <p>As a generative AI, ChatGPT can serve as a mechanism of escape
    when users replace critical thought and decision-making with
    automated answers. When a person turns to ChatGPT to
    &quot;decide,&quot; &quot;determine life direction,&quot; or
    &quot;assign meaning,&quot; there is a danger of outsourcing
    existential freedom to a non-human entity. This is a contemporary
    example of bad faith, where individuals avoid the anxiety of freedom
    by clinging to instant answers, refuse to reflect, and become
    passive toward their moral and existential responsibilities.</p>
    <p>Sartre's philosophy emphasizes that although external conditions
    may influence individuals, existential responsibility is
    non-transferable. Therefore, in the context of using ChatGPT,
    responsibility remains in the hands of the human user. AI possesses
    no consciousness, no values, and no existential freedom; it merely
    provides information, not meaning or moral judgment. As such, tools
    like ChatGPT should support—not replace—human freedom. An authentic
    existential act is one in which technology is used consciously and
    reflectively, maintaining the freedom to think without outsourcing
    existential decisions to external systems (Sartre, 2007).</p>
    <p>AI bears no moral accountability—only humans can be held morally
    responsible. Thus, the decision to use, exploit, or even ignore AI
    reflects the user's autonomy and ethical orientation. In Sartrean
    terms, uncritical use of ChatGPT risks trapping humans in bad faith,
    where they deny their own freedom and responsibility. This
    highlights the urgent need to cultivate awareness that AI is not a
    substitute for human autonomy but a tool that must be used with full
    moral responsibility and conscious intent. Only in this way can
    humans remain authentic subjects in their lives rather than passive
    objects of algorithms.</p>
  </sec>
  <sec id="self-authenticity-in-interaction-with-chatgpt">
    <title>Self-Authenticity in Interaction with ChatGPT</title>
    <p>Existentialist philosophy regards authenticity as one of the
    highest human values. To live authentically is to live in accordance
    with one's own consciousness and personal choices—not under external
    pressures or artificial comfort (Sartre, 1956; Kierkegaard, 1980).
    Overreliance on AI in idea generation or problem-solving may erode
    original thought, as it displaces the contemplative processes
    central to human existence.</p>
    <p>Heidegger (1977) warns that when humans overly submit to
    technology, they risk losing their authentic being-in-the-world, as
    reality is reduced to a technical resource under the logic of
    enframing. This is the threshold of dehumanization: where humans
    cease to relate fully with the world and instead become entities
    substituted by data and algorithmic patterns. In education, for
    example, learners who rely on ChatGPT without reflection may grow
    into individuals lacking critical thinking, creativity, and personal
    responsibility—all of which define the authentic subject in
    existential thought.</p>
    <p>Existentialism also emphasizes the importance of anxiety (angst)
    as a trigger for existential awareness. In the context of AI,
    anxiety arises from profound questions: Will humans be replaced? Is
    the meaning of work and thought still relevant in an automated age?
    According to Kierkegaard (1980), anxiety must not be avoided but
    faced—as a path toward authentic selfhood. Amidst AI proliferation,
    anxiety can serve as a reflective moment for rediscovering what it
    means to be human—whether we are simply efficient beings or subjects
    in pursuit of meaning, authenticity, and freedom.</p>
    <p>Dependence on AI in thought and action also raises critical
    ethical concerns, including data privacy, algorithmic transparency,
    and responsibility for errors in decisions. Hence, robust regulatory
    frameworks and digital literacy are essential to ensure that AI use
    supports holistic and responsible human development (Floridi &amp;
    Cowls, 2019; Boddington, 2017).</p>
    <p>Understanding ChatGPT not merely as a tool but as an existential
    challenge means its use must involve ethical and reflective
    decisions. Technology can serve as a space of liberation or
    escape—depending entirely on the human response and worldview. This
    is where the role of philosophical ethics education becomes crucial:
    to ensure humans are not merely technology users, but ethical
    subjects responsible for their existence (Frankl, 1946; Fromm,
    1955).</p>
    <p>Technology—especially in the digital and AI era—has great
    potential as a medium for individual and collective empowerment. It
    can open access to knowledge, broaden intellectual horizons,
    accelerate innovation, and facilitate self-expression and global
    connection. In this sense, technology serves as a space of
    liberation by enabling free expression. Social media, blogs, and
    digital platforms allow individuals to voice their ideas,
    experiences, and identities (Turkle, 2011).</p>
    <p>Another implication of technology as a space of liberation is the
    democratization of knowledge access. Tools like ChatGPT, search
    engines, and online courses remove geographic boundaries to
    learning. This enhances individual autonomy, enabling people to
    manage time, emotions, finances, and mental health independently—a
    reflection of authentic action in existentialism (May, 1950; Camus,
    1942).</p>
    <p>However, technology can also be a means of self-escape, as
    described in Sartre’s concept of bad faith. It becomes a realm where
    individuals flee from existential responsibility, offloading
    self-reflection and decision-making onto algorithms (Sartre, 1956).
    This also occurs in the creation of curated digital identities that
    alienate individuals from their authentic selves (Fromm, 1955).</p>
    <p>Technology is ontologically neutral—what determines whether it
    becomes a tool of liberation or escape is how humans use it. When
    used consciously, reflectively, and responsibly, it becomes a medium
    for existential expansion. When used to evade life’s choices and
    responsibilities, it becomes a trap that limits personal growth. In
    other words, technology is a mirror of human freedom: it can express
    the courage to be authentic or the fear of being oneself (Sartre,
    2007).</p>
  </sec>
</sec>
<sec>
  <title>CONCLUSION AND RECOMMENDATIONS</title>
  <p>The use of ChatGPT, as a form of artificial intelligence, has
  significantly transformed the way humans interact with information,
  think, and make decisions. However, viewed through the lens of
  existentialist philosophy—particularly the thought of Sartre,
  Heidegger, and Kierkegaard—it becomes clear that AI cannot be
  separated from the moral responsibility of its users.</p>
  <p>From an existentialist perspective, technologies like ChatGPT are
  not merely neutral tools. They may serve as instruments of liberation,
  yet they also carry the potential for alienation and evasion of
  personal responsibility. Humans are still required to maintain
  intellectual autonomy, act with awareness, and remain accountable for
  the choices they make, including those involving the use of AI.</p>
  <p>Unreflective reliance on AI risks eroding human authenticity,
  dulling critical faculties, and reducing individuals to mere
  technological operators. Therefore, the use of ChatGPT must be
  situated within an existential ethical framework—one that recognizes
  human freedom, embraces responsibility, and preserves authentic
  existence amid technological convenience.</p>
  <p>To address the ethical challenges posed by the use of AI
  technologies such as ChatGPT, several key measures are recommended.
  First, fostering ethical awareness among technology users—including
  students, academics, and the general public—is essential. This
  awareness should go beyond normative ethics and engage users in
  reflective and philosophical thinking, with existentialist philosophy
  offering a robust framework for such engagement.</p>
  <p>Second, AI tools like ChatGPT should be used wisely and
  responsibly, serving as aids rather than substitutes for human
  cognition. Users must remain actively involved in their thinking and
  decision-making processes. Third, educational institutions should
  integrate philosophical perspectives into technology and education
  curricula to help future generations grasp the existential
  implications of everyday technologies.</p>
  <p>Finally, AI developers are encouraged to design systems rooted in
  humanistic values, ensuring that AI not only serves functional
  purposes but also respects the moral and existential dimensions of
  human life.</p>
</sec>
<sec>
  <title>FURTHER RESEARCH</title>
  <p>Based on the findings and limitations of this study, several
  avenues for future research are suggested. First, this research adopts
  a conceptual and philosophical approach through qualitative library
  research. Therefore, subsequent studies could employ empirical
  methodologies, such as qualitative fieldwork or quantitative surveys,
  to assess directly the impact of ChatGPT usage on users’ existential
  awareness, moral responsibility, and patterns of thinking in
  educational, professional, and everyday contexts.</p>
  <p>Second, this study centers primarily on the existentialist
  perspectives of Sartre, Heidegger, and Kierkegaard. Future research
  may expand this scope by integrating other existentialist thinkers,
  such as Simone de Beauvoir—particularly in relation to existentialist
  feminist ethics—or by contrasting existentialist views with those of
  post-humanism and transhumanism to explore alternative visions of
  human existence in the age of advanced technology.</p>
  <p>Third, interdisciplinary research that bridges philosophy, computer
  science, psychology, and education is essential for constructing a
  more comprehensive ethical framework for AI use. Such integration is
  necessary to ensure that philosophical reflections remain grounded in
  the social and technological realities that continue to evolve.</p>
  <p>Finally, future studies may focus on developing critical
  pedagogical models rooted in existentialist philosophy to educate
  younger generations to be not only technologically literate but also
  meaning-aware and morally responsible as autonomous human beings in
  the era of artificial intelligence.</p>
</sec>
<sec>
  <title>ACKNOWLEDGEMENTS</title>
  <p>The authors would like to express sincere gratitude to all
  individuals and institutions who have contributed to the completion of
  this research. Special thanks are extended to the academic mentors and
  colleagues who provided insightful feedback, critical suggestions, and
  philosophical perspectives that enriched the depth of this study.</p>
  <p>The authors also appreciate the support of the university's library
  and access to digital academic databases, which were instrumental in
  conducting the literature review. Lastly, the authors are grateful to
  the broader scholarly community whose works and ideas continue to
  inspire critical reflection on the ethical and existential dimensions
  of artificial intelligence.</p>
</sec>
<sec>
<title>REFERENCES</title>
<ref-list>

<ref id="ref1">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Arochma</surname><given-names>N. M.</given-names></name>
      <name><surname>Purnaningsih</surname><given-names>E. G.</given-names></name>
      <name><surname>Anggreani</surname><given-names>N. K.</given-names></name>
      <name><surname>Faroqi</surname><given-names>A.</given-names></name>
    </person-group>
    <article-title>Ethical analysis of information technology use regarding unethical ChatGPT usage by students</article-title>
    <source>Prosiding Seminar Nasional Teknologi dan Sistem Informasi</source>
    <year>2023</year>
    <volume>3</volume>
    <issue>1</issue>
    <fpage>508</fpage>
    <lpage>515</lpage>
    <pub-id pub-id-type="doi">10.33005/sitasi.v3i1.404</pub-id>
  </element-citation>
</ref>

<ref id="ref2">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Baggini</surname><given-names>J.</given-names></name>
      <name><surname>Fosl</surname><given-names>P. S.</given-names></name>
    </person-group>
    <source>The philosopher's toolkit: A compendium of philosophical concepts and methods</source>
    <edition>2nd</edition>
    <publisher-name>Wiley-Blackwell</publisher-name>
    <year>2010</year>
  </element-citation>
</ref>

<ref id="ref3">
  <element-citation publication-type="confproc">
    <person-group person-group-type="author">
      <name><surname>Bender</surname><given-names>E. M.</given-names></name>
      <name><surname>Gebru</surname><given-names>T.</given-names></name>
      <name><surname>McMillan-Major</surname><given-names>A.</given-names></name>
      <name><surname>Shmitchell</surname><given-names>S.</given-names></name>
    </person-group>
    <article-title>On the dangers of stochastic parrots: Can language models be too big?</article-title>
    <source>Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21)</source>
    <year>2021</year>
    <fpage>610</fpage>
    <lpage>623</lpage>
    <publisher-name>Association for Computing Machinery</publisher-name>
    <pub-id pub-id-type="doi">10.1145/3442188.3445922</pub-id>
  </element-citation>
</ref>

<ref id="ref4">
  <element-citation publication-type="confproc">
    <person-group person-group-type="author">
      <name><surname>Binns</surname><given-names>R.</given-names></name>
    </person-group>
    <article-title>Fairness in machine learning: Lessons from political philosophy</article-title>
    <source>Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency</source>
    <year>2018</year>
    <fpage>149</fpage>
    <lpage>159</lpage>
    <pub-id pub-id-type="doi">10.1145/3287560.3287598</pub-id>
  </element-citation>
</ref>

<ref id="ref5">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Boddington</surname><given-names>P.</given-names></name>
    </person-group>
    <source>Towards a code of ethics for artificial intelligence</source>
    <publisher-name>Springer</publisher-name>
    <year>2017</year>
  </element-citation>
</ref>

<ref id="ref6">
  <element-citation publication-type="report">
    <person-group person-group-type="author">
      <name><surname>Bommasani</surname><given-names>R.</given-names></name>
      <name><surname>Hudson</surname><given-names>D. A.</given-names></name>
      <etal/>
    </person-group>
    <source>On the opportunities and risks of foundation models</source>
    <publisher-name>Stanford University</publisher-name>
    <year>2021</year>
  </element-citation>
</ref>

<ref id="ref7">
  <element-citation publication-type="chapter">
    <person-group person-group-type="author">
      <name><surname>Bostrom</surname><given-names>N.</given-names></name>
      <name><surname>Yudkowsky</surname><given-names>E.</given-names></name>
    </person-group>
    <article-title>The ethics of artificial intelligence</article-title>
    <source>The Cambridge handbook of artificial intelligence</source>
    <person-group person-group-type="editor">
      <name><surname>Frankish</surname><given-names>K.</given-names></name>
      <name><surname>Ramsey</surname><given-names>W. M.</given-names></name>
    </person-group>
    <publisher-name>Cambridge University Press</publisher-name>
    <year>2014</year>
    <fpage>316</fpage>
    <lpage>334</lpage>
  </element-citation>
</ref>

<ref id="ref8">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Camus</surname><given-names>A.</given-names></name>
    </person-group>
    <source>Le mythe de Sisyphe [The myth of Sisyphus]</source>
    <publisher-name>Gallimard</publisher-name>
    <year>1942</year>
  </element-citation>
</ref>

<ref id="ref9">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Carr</surname><given-names>N.</given-names></name>
    </person-group>
    <source>The shallows: What the Internet is doing to our brains</source>
    <publisher-name>W. W. Norton &amp; Company</publisher-name>
    <year>2011</year>
  </element-citation>
</ref>

<ref id="ref10">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Floridi</surname><given-names>L.</given-names></name>
    </person-group>
    <source>The fourth revolution: How the infosphere is reshaping human reality</source>
    <publisher-name>Oxford University Press</publisher-name>
    <year>2014</year>
  </element-citation>
</ref>

<ref id="ref11">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Floridi</surname><given-names>L.</given-names></name>
      <name><surname>Cowls</surname><given-names>J.</given-names></name>
    </person-group>
    <article-title>A unified framework of five principles for AI in society</article-title>
    <source>Harvard Data Science Review</source>
    <year>2019</year>
    <volume>1</volume>
    <issue>1</issue>
    <pub-id pub-id-type="doi">10.1162/99608f92.8cd550d1</pub-id>
  </element-citation>
</ref>

<ref id="ref12">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Frankl</surname><given-names>V. E.</given-names></name>
    </person-group>
    <source>Man’s search for meaning</source>
    <publisher-name>Beacon Press</publisher-name>
    <year>1946</year>
  </element-citation>
</ref>

<ref id="ref13">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Fromm</surname><given-names>E.</given-names></name>
    </person-group>
    <source>The sane society</source>
    <publisher-name>Rinehart</publisher-name>
    <year>1955</year>
  </element-citation>
</ref>

<ref id="ref14">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Gert</surname><given-names>B.</given-names></name>
    </person-group>
    <source>Morality: Its nature and justification</source>
    <edition>Revised</edition>
    <publisher-name>Oxford University Press</publisher-name>
    <year>2004</year>
  </element-citation>
</ref>

<ref id="ref15">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Guignon</surname><given-names>C. B.</given-names></name>
    </person-group>
    <source>On being authentic</source>
    <publisher-name>Routledge</publisher-name>
    <year>2004</year>
  </element-citation>
</ref>

<ref id="ref16">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Heidegger</surname><given-names>M.</given-names></name>
    </person-group>
    <source>The question concerning technology and other essays</source>
    <person-group person-group-type="translator">
      <name><surname>Lovitt</surname><given-names>W.</given-names></name>
    </person-group>
    <publisher-name>Harper &amp; Row</publisher-name>
    <year>1977</year>
    <comment>Original work published 1954</comment>
  </element-citation>
</ref>

<ref id="ref17">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Jonas</surname><given-names>H.</given-names></name>
    </person-group>
    <source>The imperative of responsibility: In search of an ethics for the technological age</source>
    <publisher-name>University of Chicago Press</publisher-name>
    <year>1984</year>
  </element-citation>
</ref>

<ref id="ref18">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Kant</surname><given-names>I.</given-names></name>
    </person-group>
    <source>Grounding for the metaphysics of morals</source>
    <person-group person-group-type="translator">
      <name><surname>Ellington</surname><given-names>J. W.</given-names></name>
    </person-group>
    <publisher-name>Hackett Publishing</publisher-name>
    <year>1993</year>
    <comment>Original work published 1785</comment>
  </element-citation>
</ref>

<ref id="ref19">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Kierkegaard</surname><given-names>S.</given-names></name>
    </person-group>
    <source>The sickness unto death</source>
    <person-group person-group-type="translator">
      <name><surname>Hong</surname><given-names>H. V.</given-names></name>
      <name><surname>Hong</surname><given-names>E. H.</given-names></name>
    </person-group>
    <publisher-name>Princeton University Press</publisher-name>
    <year>1980</year>
    <comment>Original work published 1849</comment>
  </element-citation>
</ref>

<ref id="ref20">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Levinas</surname><given-names>E.</given-names></name>
    </person-group>
    <source>Totality and infinity: An essay on exteriority</source>
    <publisher-name>Duquesne University Press</publisher-name>
    <year>1961</year>
  </element-citation>
</ref>

<ref id="ref21">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>MacIntyre</surname><given-names>A.</given-names></name>
    </person-group>
    <source>After virtue: A study in moral theory</source>
    <publisher-name>University of Notre Dame Press</publisher-name>
    <year>1981</year>
  </element-citation>
</ref>

<ref id="ref22">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>May</surname><given-names>R.</given-names></name>
    </person-group>
    <source>The meaning of anxiety</source>
    <publisher-name>Ronald Press</publisher-name>
    <year>1950</year>
  </element-citation>
</ref>

<ref id="ref23">
  <element-citation publication-type="report">
    <person-group person-group-type="author">
      <collab>McKinsey Global Institute</collab>
    </person-group>
    <source>The economic potential of generative AI: The next productivity frontier</source>
    <publisher-name>McKinsey &amp; Company</publisher-name>
    <year>2023</year>
    <pub-id pub-id-type="uri">https://www.mckinsey.com/mgi/overview/2023/the-economic-potential-of-generative-ai</pub-id>
  </element-citation>
</ref>

<ref id="ref24">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <collab>Nature Editorial</collab>
    </person-group>
    <article-title>Tools such as ChatGPT threaten transparent science; here are our ground rules for their use</article-title>
    <source>Nature</source>
    <year>2023</year>
    <volume>613</volume>
    <fpage>612</fpage>
    <pub-id pub-id-type="doi">10.1038/d41586-023-00057-6</pub-id>
  </element-citation>
</ref>

<ref id="ref25">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Niyu</surname><given-names>N.</given-names></name>
      <name><surname>Dwihadiah</surname><given-names>D.</given-names></name>
      <name><surname>Gerungan</surname><given-names>A.</given-names></name>
      <name><surname>Purba</surname><given-names>H.</given-names></name>
    </person-group>
    <article-title>The use of ChatGPT among students and lecturers at Indonesian universities</article-title>
    <source>CoverAge: Journal of Strategic Communication</source>
    <year>2024</year>
    <volume>14</volume>
    <issue>1</issue>
    <fpage>130</fpage>
    <lpage>145</lpage>
    <pub-id pub-id-type="doi">10.35814/coverage.v14i2.6058</pub-id>
  </element-citation>
</ref>

<ref id="ref26">
  <element-citation publication-type="report">
    <person-group person-group-type="author">
      <collab>OpenAI</collab>
    </person-group>
    <source>GPT-4 technical report</source>
    <publisher-name>OpenAI</publisher-name>
    <year>2023</year>
    <pub-id pub-id-type="uri">https://openai.com/research/gpt-4</pub-id>
  </element-citation>
</ref>

<ref id="ref27">
  <element-citation publication-type="report">
    <person-group person-group-type="author">
      <collab>OpenAI</collab>
    </person-group>
    <source>ChatGPT and GPT-4 usage overview</source>
    <publisher-name>OpenAI</publisher-name>
    <year>2024</year>
    <pub-id pub-id-type="uri">https://openai.com</pub-id>
  </element-citation>
</ref>

<ref id="ref28">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Palmer</surname><given-names>R. E.</given-names></name>
    </person-group>
    <source>Hermeneutics: Interpretation theory in Schleiermacher, Dilthey, Heidegger, and Gadamer</source>
    <publisher-name>Northwestern University Press</publisher-name>
    <year>1969</year>
  </element-citation>
</ref>

<ref id="ref29">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Putri</surname><given-names>Z. H. A.</given-names></name>
      <name><surname>Pradana</surname><given-names>N. R.</given-names></name>
      <name><surname>Yustraini</surname><given-names>Y. A.</given-names></name>
      <name><surname>Efansyah</surname><given-names>A. D.</given-names></name>
    </person-group>
    <article-title>An analysis of ChatGPT’s impact on students’ skills, collaboration, and creativity: A systematic literature review approach</article-title>
    <source>Jurnal Ilmu Komputer dan Teknologi Informasi</source>
    <year>2024</year>
    <volume>12</volume>
    <issue>1</issue>
    <fpage>22</fpage>
    <lpage>35</lpage>
  </element-citation>
</ref>

<ref id="ref30">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Sartre</surname><given-names>J.-P.</given-names></name>
    </person-group>
    <source>Being and nothingness: An essay on phenomenological ontology</source>
    <person-group person-group-type="translator">
      <name><surname>Barnes</surname><given-names>H. E.</given-names></name>
    </person-group>
    <publisher-name>Philosophical Library</publisher-name>
    <year>1956</year>
  </element-citation>
</ref>

<ref id="ref31">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Sartre</surname><given-names>J.-P.</given-names></name>
    </person-group>
    <source>Existentialism is a humanism</source>
    <person-group person-group-type="translator">
      <name><surname>Macomber</surname><given-names>C.</given-names></name>
    </person-group>
    <publisher-name>Yale University Press</publisher-name>
    <year>2007</year>
  </element-citation>
</ref>

<ref id="ref32">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Thorp</surname><given-names>H. H.</given-names></name>
    </person-group>
    <article-title>ChatGPT is fun, but not an author</article-title>
    <source>Science</source>
    <year>2023</year>
    <volume>379</volume>
    <issue>6630</issue>
    <fpage>313</fpage>
    <lpage>314</lpage>
    <pub-id pub-id-type="doi">10.1126/science.adg8754</pub-id>
  </element-citation>
</ref>

<ref id="ref33">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Turkle</surname><given-names>S.</given-names></name>
    </person-group>
    <source>Alone together: Why we expect more from technology and less from each other</source>
    <publisher-name>Basic Books</publisher-name>
    <year>2011</year>
  </element-citation>
</ref>

<ref id="ref34">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Zhuo</surname><given-names>J.</given-names></name>
      <name><surname>Wang</surname><given-names>J.</given-names></name>
      <etal/>
    </person-group>
    <article-title>Limitations and risks of generative AI in academic writing</article-title>
    <source>AI &amp; Ethics</source>
    <year>2023</year>
    <volume>3</volume>
    <issue>1</issue>
    <fpage>45</fpage>
    <lpage>59</lpage>
    <pub-id pub-id-type="doi">10.1007/s43681-023-00255-0</pub-id>
  </element-citation>
</ref>

</ref-list>
</sec>
</body>
</article>
