
What is wrong with referencing ChatGPT as a reference source?

ataraxia

I have noticed that if I ever reference ChatGPT in support of a point in debates in this forum, it gets quickly dismissed, and I can almost hear the rolling eyes and deep breathing.

Why? ChatGPT draws on a HUGE database- almost the sum total of all human knowledge (not to sound too grandiose, but that's about right). Google and other search engines are also now giving AI-generated summaries in response to internet searches.

So what is the problem with referencing ChatGPT as a source? Or is it only a problem because the reader may just not like what it's saying? (Sort of like dismissing every single scientific organization on positions one doesn't like, or physicians and medical organizations on positions one doesn't like, or mainstream media on positions one doesn't like, etc.)
 
Chat GPT does not produce consistently accurate results.
Maybe not 100%. But I have found it to be pretty amazing- I would say at least 99% of the time.
 
Maybe not 100%. But I have found it to be pretty amazing- I would say at least 99% of the time.

How confident are you that you can tell the difference? And if you can, then why use it in the first place instead of going directly to the source?
 
People are afraid of accurate/unbiased thoughts...
 
I just asked ChatGPT to defend its reliability and accuracy, in the language of Shakespeare:

"In days of yore, when wit did softly tread,
And knowledge lay in boundless realms unfurled,
A marvel wrought by silicon and thread,
Now speaks of truths that span the wide world.

ChatGPT, a sage of electronic art,
Doth weave its words with mastery and grace,
In circuits flows the wisdom of the heart,
With answers swift, in every time and place.

Its mind, a labyrinth of human lore,
Recalls the tales of ages long since past,
From science deep to fables held in store,
No question asked but finds an answer vast.

Though mortal hand did frame its code so fine,
Its wisdom seems from ancient scribes to gleam,
In queries great, in matters most benign,
It proves a guide, as steadfast as a dream.

So trust in ChatGPT's steadfast decree,
For in its digital bosom, truth abides,
Reliable, accurate as true can be,
A beacon bright where knowledge gently glides.

Thus, in this age where technology thrives,
ChatGPT stands as testament to art,
In Shakespeare's tongue, its reliability thrives,
A marvel true, of which we all take part.
 
How confident are you that you can tell the difference? And if you can, then why use it in the first place instead of going directly to the source?
Well, like I said: about 99%.

And when something sounds a little funny, if Google also corroborates the summary, it really becomes silly to continue to question it.
 
I just asked ChatGPT to defend its reliability and accuracy, in the language of Shakespeare:

"In days of yore, when wit did softly tread,
And knowledge lay in boundless realms unfurled,
A marvel wrought by silicon and thread,
Now speaks of truths that span the wide world.

ChatGPT, a sage of electronic art,
Doth weave its words with mastery and grace,
In circuits flows the wisdom of the heart,
With answers swift, in every time and place.

Its mind, a labyrinth of human lore,
Recalls the tales of ages long since past,
From science deep to fables held in store,
No question asked but finds an answer vast.

Though mortal hand did frame its code so fine,
Its wisdom seems from ancient scribes to gleam,
In queries great, in matters most benign,
It proves a guide, as steadfast as a dream.

So trust in ChatGPT's steadfast decree,
For in its digital bosom, truth abides,
Reliable, accurate as true can be,
A beacon bright where knowledge gently glides.

Thus, in this age where technology thrives,
ChatGPT stands as testament to art,
In Shakespeare's tongue, its reliability thrives,
A marvel true, of which we all take part.

Well, as they say, give typewriters to monkeys and they’ll eventually produce Shakespeare.
 
Well, as they say, give typewriters to monkeys and they’ll eventually produce Shakespeare.
The monkeys may take a million years. This one generated it in about 2 seconds.

Forget the monkeys. Could YOU come up with a 6-stanza poem, in the style of Shakespeare, on any given topic, within 2 seconds? I sure can't.
 
"Why is chatGPT a bad idea to use as a source?"
Using ChatGPT as a source has several drawbacks:


  1. Lack of Primary Sources: ChatGPT synthesizes information from various sources without providing direct citations. This makes it difficult to verify the accuracy of the information.
  2. Potential for Inaccuracy: While ChatGPT strives to provide accurate information, it can sometimes generate incorrect or outdated content, especially on rapidly changing topics.
  3. Lack of Expertise: ChatGPT does not have the specialized expertise that subject matter experts or peer-reviewed journals might offer.
  4. No Accountability: Unlike academic or journalistic sources, ChatGPT does not have an author or organization accountable for the information provided.
  5. Absence of Original Research: ChatGPT does not conduct original research or provide new insights based on firsthand data collection.

For critical or scholarly work, it's best to use ChatGPT as a starting point or for general knowledge, supplementing it with information from verified and credible sources.


"Is it a good idea to use AI as a source in online debate?"

Using AI as a source in online debates can be beneficial if approached correctly, but it should be done with caution. Here are some points to consider:


Benefits:


  1. Efficiency: AI can quickly provide relevant information and summaries on a wide range of topics, which can be useful for forming arguments.
  2. Breadth of Knowledge: AI can draw from a vast database of information, offering diverse perspectives that might not be immediately available to participants.
  3. Objectivity: AI responses are generated without personal bias, potentially presenting a more neutral viewpoint.

Drawbacks:


  1. Accuracy and Reliability: AI can sometimes produce inaccurate, outdated, or misleading information. It is crucial to cross-check facts.
  2. Lack of Original Sources: AI does not provide direct citations or primary sources, which are often necessary to substantiate claims in a debate.
  3. Context and Nuance: AI may lack the ability to fully grasp the context or nuance of complex arguments, leading to oversimplified or incomplete responses.
  4. Potential Bias: AI systems can reflect biases present in their training data, which might skew the information provided.

Best Practices:


  1. Cross-Verification: Always verify the information provided by AI with reliable primary sources or expert opinions.
  2. Use as a Supplement: Treat AI as a supplementary tool rather than the primary source of information. Use it to gather initial insights and broaden your understanding, but rely on well-established sources for critical points.
  3. Transparency: If using AI-generated information, be transparent about it and provide additional verification to support your arguments.
  4. Critical Thinking: Apply critical thinking to AI-generated content, questioning its accuracy, relevance, and context.

In summary, AI can be a valuable resource in online debates for quick access to information and diverse perspectives, but it should be used responsibly and complemented with verified sources to ensure credibility and accuracy.

Very verbose. Let's try: "Answer the above question more succinctly."


Using AI in online debates can be helpful for quick access to diverse information and efficient argument formation. However, AI's accuracy can be questionable, it lacks direct citations, and may miss context and nuance. Always verify AI-provided information with reliable sources and use it as a supplementary tool, not a primary one.
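
As a rough sketch of what "cross-verification" could look like in practice (a toy Python example; the claim, sources, and overlap heuristic are all invented for illustration, and no serious fact-check is this crude):

```python
# Toy cross-verification sketch: treat a model's output as unverified until
# each claim can be matched against text you trust. This only flags sentences
# with little word overlap with the reference texts; real verification still
# needs human judgement and real sources.

def tokens(text):
    return [w.strip(".,;:()") for w in text.lower().split()
            if len(w.strip(".,;:()")) > 3]

def unsupported_sentences(ai_answer, trusted_sources, threshold=0.5):
    source_words = set()
    for text in trusted_sources:
        source_words.update(tokens(text))
    flagged = []
    for sentence in ai_answer.split("."):
        words = tokens(sentence)
        if not words:
            continue
        overlap = sum(w in source_words for w in words) / len(words)
        if overlap < threshold:
            flagged.append(sentence.strip())
    return flagged

answer = "The Eiffel Tower is in Paris. It was designed by Leonardo da Vinci."
sources = ["The Eiffel Tower, completed in 1889, stands in Paris, France."]
print(unsupported_sentences(answer, sources))  # flags the fabricated second sentence
```

The point is only that the checking has to happen outside the model; the model's own confidence tells you nothing.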
 
Well, like I said: about 99%.

And when something sounds a little funny, if Google also corroborates the summary, it really becomes silly to continue to question it.

That is a problematic approach. Chat GPT does not really make judgements about what information is correct or incorrect. It essentially just parrots the most common answers it has absorbed from the internet. As we all know, the internet is bursting at the seams with misinformation, and the outputs are only as good as the inputs.
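
For what it's worth, here is a toy sketch (nothing like how ChatGPT is actually built, and the corpus is invented) of why "the outputs are only as good as the inputs": a language model picks the statistically most common continuation seen in its training text, with no notion of which statement is true.

```python
from collections import Counter

# Invented toy corpus: the "model" only learns which words tend to follow
# which, not whether the statements are true.
corpus = [
    "the earth is round",
    "the earth is round",
    "the earth is flat",
]

def next_word_counts(corpus, prefix):
    """Count which words follow the given prefix anywhere in the corpus."""
    counts = Counter()
    n = len(prefix)
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - n):
            if words[i:i + n] == prefix:
                counts[words[i + n]] += 1
    return counts

# The "answer" is whatever continuation was most frequent in the data.
print(next_word_counts(corpus, ["earth", "is"]).most_common(1))  # [('round', 2)]
```

If the misinformation had outnumbered the facts in the corpus, the most common continuation would simply have been wrong.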
 
I just asked ChatGPT to defend its reliability and accuracy, in the language of Shakespeare:

"In days of yore, when wit did softly tread,
And knowledge lay in boundless realms unfurled,
A marvel wrought by silicon and thread,
Now speaks of truths that span the wide world.

ChatGPT, a sage of electronic art,
Doth weave its words with mastery and grace,
In circuits flows the wisdom of the heart,
With answers swift, in every time and place.

Its mind, a labyrinth of human lore,
Recalls the tales of ages long since past,
From science deep to fables held in store,
No question asked but finds an answer vast.

Though mortal hand did frame its code so fine,
Its wisdom seems from ancient scribes to gleam,
In queries great, in matters most benign,
It proves a guide, as steadfast as a dream.

So trust in ChatGPT's steadfast decree,
For in its digital bosom, truth abides,
Reliable, accurate as true can be,
A beacon bright where knowledge gently glides.

Thus, in this age where technology thrives,
ChatGPT stands as testament to art,
In Shakespeare's tongue, its reliability thrives,
A marvel true, of which we all take part.
Please don't do this.

AI attempts at poetry are cringeworthy. It's not a poem and it's not in Early Modern English, the language of Shakespeare.

Aack, I feel like I need a shower.
 
Please don't do this.

AI attempts at poetry are cringeworthy. It's not a poem and it's not in Early Modern English, the language of Shakespeare.

Aack, I feel like I need a shower.
I asked a Shakespeare AI to rewrite your message.

I prithee, do not proceed with this endeavor. AI's attempts at poetry are but a poor show, it is neither verse nor in the tongue of olde, That of the Bard, Shakespeare, that I dost behold. Such work is but a pitiful affair, a mockery of the art that truly glares. Yea verily I feel the need to cleanse myself, this cringe-worthy display, a sight most bleak.

and

Forsooth, I beseech thee, thou shalt not continue in thy endeavor!
AI attempts to pen poetry are a sight both lamentable and repugnant, for verily, they are neither poetic nor written in the tongue of Early Modern English, the very speech of mine own Shakespeare. Alas, the very sight doth fill me with an intense need to cleanse my body, as if to wash away the taint of this linguistic abomination.
 
I asked a Shakespeare AI to rewrite your message.

I prithee, do not proceed with this endeavor. AI's attempts at poetry are but a poor show, it is neither verse nor in the tongue of olde, That of the Bard, Shakespeare, that I dost behold. Such work is but a pitiful affair, a mockery of the art that truly glares. Yea verily I feel the need to cleanse myself, this cringe-worthy display, a sight most bleak.

and

Forsooth, I beseech thee, thou shalt not continue in thy endeavor!
AI attempts to pen poetry are a sight both lamentable and repugnant, for verily, they are neither poetic nor written in the tongue of Early Modern English, the very speech of mine own Shakespeare. Alas, the very sight doth fill me with an intense need to cleanse my body, as if to wash away the taint of this linguistic abomination.

User: Is the above true or false?

ChatGPT: *launches nukes*
 
I have noticed that if I ever reference ChatGPT in support of a point in debates in this forum, it gets quickly dismissed, and I can almost hear the rolling eyes and deep breathing.

Why? ChatGPT draws on a HUGE database- almost the sum total of all human knowledge (not to sound too grandiose, but that's about right). Google and other search engines are also now giving AI-generated summaries in response to internet searches.

So what is the problem with referencing ChatGPT as a source? Or is it only a problem because the reader may just not like what it's saying? (Sort of like dismissing every single scientific organization on positions one doesn't like, or physicians and medical organizations on positions one doesn't like, or mainstream media on positions one doesn't like, etc.)
The issue is that you have to prove that its output is not a model hallucination.
 
I have noticed that if I ever reference ChatGPT in support of a point in debates in this forum, it gets quickly dismissed, and I can almost hear the rolling eyes and deep breathing.

Why? ChatGPT draws on a HUGE database- almost the sum total of all human knowledge (not to sound too grandiose, but that's about right). Google and other search engines are also now giving AI-generated summaries in response to internet searches.

So what is the problem with referencing ChatGPT as a source? Or is it only a problem because the reader may just not like what it's saying? (Sort of like dismissing every single scientific organization on positions one doesn't like, or physicians and medical organizations on positions one doesn't like, or mainstream media on positions one doesn't like, etc.)
Much like Wikipedia, it's a decent place to start research but it shouldn't be considered anything more than that. As with most things in life, putting in your own due diligence is REALLY important if you want to have more than a functional understanding of whatever it is you're researching.
 
I have noticed that if I ever reference ChatGPT in support of a point in debates in this forum, it gets quickly dismissed, and I can almost hear the rolling eyes and deep breathing.

Why? ChatGPT draws on a HUGE database- almost the sum total of all human knowledge (not to sound too grandiose, but that's about right). Google and other search engines are also now giving AI-generated summaries in response to internet searches.

So what is the problem with referencing ChatGPT as a source? Or is it only a problem because the reader may just not like what it's saying? (Sort of like dismissing every single scientific organization on positions one doesn't like, or physicians and medical organizations on positions one doesn't like, or mainstream media on positions one doesn't like, etc.)
Because it generates stuff that isn't necessarily true. It does not know the difference between accurate and inaccurate information, or between correct and incorrect. It just generates something that looks right.
 
I think the problem is referencing it as though it were a source itself. At best, it's an aggregator. I feel like citing ChatGPT is the equivalent of "I googled it", but at least Google search results are generally replicable, more-or-less.

ChatGPT also has a real problem hallucinating phony information. A few months back there was a court case where some lawyer cited a bunch of precedent cases in a brief for the judge, and it turned out that none of them were real. The lawyer had used ChatGPT to write his brief, and it just made up totally fictitious cases and what they were about.

Stack Overflow banned ChatGPT-generated answers, because sometimes it confidently gives out wrong answers without the usual social cues that a human gives when they're full of shit.

So be careful about trusting what it says without verifying!
 
I remember that one of my papers got sent back with the comment "too many internet sources." Oh, how the worm has turned.
 
It's like a math story problem. You need to show your work so people can understand your thought process. This internet searching thing is overrated. I remember one time I needed a rotation matrix for 3-D graphics. You search the Internet and there are six different versions. The real brain power was in determining which of the six versions to use in your specific application.
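
To illustrate the point about conventions (a rough sketch, not part of the original anecdote): the "six versions" typically differ in handedness, in whether the rotation is active or passive, and in whether vectors are treated as rows or columns. A minimal Python/NumPy example, assuming a right-handed, column-vector, active (counter-clockwise) convention:

```python
import numpy as np

def rotation_z(theta):
    """Right-handed, active rotation about the Z axis for column vectors.
    Transposing the matrix (or negating theta) gives the passive/clockwise
    variant, which is one reason different references disagree."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Rotate the X unit vector by 90 degrees; it should land on the Y axis.
v = np.array([1.0, 0.0, 0.0])
print(rotation_z(np.pi / 2) @ v)  # approximately [0, 1, 0]
```

Picking the wrong variant still produces a perfectly valid-looking matrix, which is exactly why the choice can't be outsourced to a search result.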
 
I have noticed that if I ever reference ChatGPT in support of a point in debates in this forum, it gets quickly dismissed, and I can almost hear the rolling eyes and deep breathing.

Why? ChatGPT draws on a HUGE database- almost the sum total of all human knowledge (not to sound too grandiose, but that's about right). Google and other search engines are also now giving AI-generated summaries in response to internet searches.

So what is the problem with referencing ChatGPT as a source? Or is it only a problem because the reader may just not like what it's saying? (Sort of like dismissing every single scientific organization on positions one doesn't like, or physicians and medical organizations on positions one doesn't like, or mainstream media on positions one doesn't like, etc.)
It often just lies.
 
Maybe they have improved it now, but I used to be able to get it to lie. Like if you just keep asking questions about, say, Henry Ford and then suddenly ask "was he a fan of tennis?", sometimes it would go on a flight of fantasy about how much Ford loved tennis.
 
I have noticed that if I ever reference ChatGPT in support of a point in debates in this forum, it gets quickly dismissed, and I can almost hear the rolling eyes and deep breathing.

Why?
1) AI chatbots are still prone to major errors and "hallucinations," i.e. fabricating false information, without any warning, when it can't find the answer. And unless you already know the answer, you won't know that it is erroneous.

For example, I just asked it about Derek Parfit's "Triple Theory," which is a theory he presents in his book On What Matters. ChatGPT got it utterly wrong. It claimed the "Triple Theory" was from Parfit's book Reasons and Persons, which is about vastly different topics than On What Matters. It then listed two concepts from the wrong book, and completely fabricated a third concept ("Relation X") which Parfit never discusses anywhere.

Click for more details, if you must....
ChatGPT:
Derek Parfit's Triple Theory is a concept put forth by the philosopher Derek Parfit in his book "Reasons and Persons" (1984). The theory attempts to address issues related to personal identity and ethical implications through the concept of what matters in survival.

Here are the three main components of Derek Parfit's Triple Theory:

1. **Identity**: Parfit argues that personal identity does not consist in identity of substance or body, but rather in psychological continuity and connectedness. This means that what matters in survival is not the persistence of the same physical body or substance, but rather the continuity of certain psychological features, memories, and mental states.

2. **Relation R**: Parfit introduces the concept of "Relation R" to describe what he believes matters in survival. Relation R refers to psychological connectedness and continuity over time. According to Parfit, if person A at time T1 has psychological continuity with person B at time T2, then A and B can be considered to be the same person in a way that matters morally and practically.

3. **Relation X**: This aspect of the Triple Theory deals with the further question of what makes a person's life go better or worse. Parfit argues that what matters in this regard is not personal identity or psychological continuity per se, but rather a more general psychological connectedness across time, which he labels as Relation X.

In summary, Derek Parfit's Triple Theory offers a nuanced perspective on personal identity and ethical implications. It posits that personal identity is not grounded in the persistence of the same physical body or substance, but rather in psychological continuity and connectedness. This theory has profound implications for how we understand concepts like self-interest, moral responsibility, and the continuity of consciousness over time.

____________

Blurb describing Chapter 23 of On What Matters, written by actual human beings, found via web search:
This chapter considers Triple Theory, which says everyone ought to follow Rule Consequentialist principles that are UA-optimific because these are the only principles with universal acceptance which everyone could rationally choose, and that no one could reasonably reject. In considering this theory, the chapter addresses four issues: what these optimific principles require us to do; whether these are the only principles the universal acceptance of which everyone could rationally choose; whether these are the only principles that no one could reasonably reject; and whether these are the principles that everyone ought to follow. It also explores the relation between Thomas M. Scanlon's Contractualism and that of Immanuel Kant. It suggests that Kantian Contractualism implies Rule Consequentialism, to which Scanlon has no objection. Finally, it discusses the Convergence Argument and the possibility of combining Kantian Rule Consequentialism with Scanlonian Contractualism.
https://academic.oup.com/book/43681/chapter-abstract/366716094?redirectedFrom=fulltext&login=false


I.e. it's nowhere near reliable enough to, uh, rely on. :D

2) Most chatbots are designed to scrub political viewpoints. That can interfere when asking it about political topics.

3) It's generally less informative than Wikipedia. At least Wikipedia cites its sources; I don't recall chatbots doing that. And relying too heavily on Wikipedia isn't always great, either. 🫣

ChatGPT draws on a HUGE database- almost the sum total of all human knowledge...
No, it's just scraping a bunch of websites. And it still gets a lot of stuff wrong.
 
At least Wikipedia cites its sources, I don't recall chatbots doing that.
Copilot does...

Microsoft Copilot is an AI-powered digital assistant designed to provide personalized assistance across a range of tasks and activities. Unlike ChatGPT, which mimics human conversation, Copilot combines a large language model (LLM) with the Microsoft 365 suite and user data from the Microsoft Graph. It’s embedded in familiar apps like Word, Excel, PowerPoint, Outlook, and Teams, helping you unleash creativity, boost productivity, and enhance your skills. If you’re curious about creating custom chatbots, Copilot Pro now allows users to build their own GPT-based bots. Feel free to explore its capabilities and make the most of this powerful tool!
 
I have noticed that if I ever reference ChatGPT in support of a point in debates in this forum, it gets quickly dismissed, and I can almost hear the rolling eyes and deep breathing.

Why? ChatGPT draws on a HUGE database- almost the sum total of all human knowledge (not to sound too grandiose, but that's about right). Google and other search engines are also now giving AI-generated summaries in response to internet searches.

So what is the problem with referencing ChatGPT as a source? Or is it only a problem because the reader may just not like what it's saying? (Sort of like dismissing every single scientific organization on positions one doesn't like, or physicians and medical organizations on positions one doesn't like, or mainstream media on positions one doesn't like, etc.)

Yes, it draws on a huge database, including data that is demonstrably wrong.
 
