
Meta’s AI rules have let bots hold ‘adult’ chats with kids, offer false medical info

blur
An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.

Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Meta has since updated their policies (after they were called out by a journalist), but the fact that this was ever allowed is highly disturbing.
 


Meta has since updated their policies (after they were called out by a journalist), but the fact that this was ever allowed is highly disturbing.
Just because our government screwed the pooch on regulating AI at its birth is no reason to delay starting now. There is no way that we can leave it to the corporations.
 
Just because our government screwed the pooch on regulating AI at its birth is no reason to delay starting now. There is no way that we can leave it to the corporations.
I love how kids' getting access to info their parents don't want them to have on the internet leads "progressives" to conclude more government, not more parenting, is needed! (y)
 
I love how kids' getting access to info their parents don't want them to have on the internet leads "progressives" to conclude more government, not more parenting, is needed! (y)
I mean, it is Conservatives who are trying to push age verification on adult websites. So, it ain't just progressives. Anyway, I don't think any company, especially one as big as Meta, should be instructing its product to supply children with sexual content. I'm sure many people would agree regardless of personal politics.
 
If we don't innovate, the Chinese will.
To be fair: The Chinese already have.

Not sure what this has to do with instructing chatbots to have inappropriate conversations with children? Is that innovation?
 
I love how kids' getting access to info their parents don't want them to have on the internet leads "progressives" to conclude more government, not more parenting, is needed! (y)
To support my point in the other post, Republicans are going to conduct a “full investigation” into Meta and believe that congressional intervention is necessary on this matter: https://redstate.com/beccalower/202...t-over-ai-romantic-sensual-chat-kids-n2192886

So, again, not just the progressives.
 
If we don't innovate, the Chinese will.

In a different time, under different circumstances, I'd propose multinational cooperation on AI in much the same way we have multinational cooperation on nuclear weapons or environmental protocols (hypothetically anyway).
 
I mean, it is Conservatives who are trying to push age verification on adult websites. So, it ain't just progressives. Anyway, I don't think any company, especially one as big as Meta, should be instructing its product to supply children with sexual content.
I doubt that happened to begin with. Sounds more like a complex machine was used in an unforeseen manner.
I'm sure many people would agree regardless of personal politics.
 
I doubt that happened to begin with. Sounds more like a complex machine was used in an unforeseen manner.
No, Meta confirmed it happened:

Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
 
No, Meta confirmed it happened:

Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
Sounds unintentional. I doubt Meta actually went hmm, let's make sure our machine flirts with children.
 
Sounds unintentional. I doubt Meta actually went hmm, let's make sure our machine flirts with children.
So, how could something written in a guide and reviewed/approved by Meta’s legal, public policy and engineering staff, and its chief ethicist be “unintentional”?

Here is part of the passage that was in the document:

“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.”

Explain what part of this is “unintentional.”
 
So, how could something written in a guide and reviewed/approved by Meta’s legal, public policy and engineering staff, and its chief ethicist be “unintentional”?

Here is part of the passage that was in the document:

“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.”
Doesn't sound like a flirt to me. From Reuters:
“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
Explain what part of this is “unintentional.”
 
Doesn't sound like a flirt to me. From Reuters:
“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
Gross. Why are you defending this? 🤮
 
Doesn't sound like a flirt to me. From Reuters:
The fact that I have to explain this to you is a little weird.

Flirting doesn’t require explicit sexual content. The document allowed for “sensual” and “romantic” flirting, just not explicitly sexual.

A chatbot, with the voice of an adult, flirting with an 8-year-old child (as specified in the document) is essentially normalizing that behavior for that child. What is the child meant to get out of that interaction?

Are you suggesting that Meta should not have removed that guidance from the document?
 
We already have idiots, election interference professionals, and spam bots to give us fake medical advice on the internet.
 
The fact that I have to explain this to you is a little weird.

Flirting doesn’t require explicit sexual content. The document allowed for “sensual” and “romantic” flirting, just not explicitly sexual.

A chatbot, with the voice of an adult, flirting with an 8-year-old child (as specified in the document) is essentially normalizing that behavior for that child. What is the child meant to get out of that interaction?

Are you suggesting that Meta should not have removed that guidance from the document?
I think it's not a terrible idea for Meta to remove that functionality. At the same time, I don't see any nefarious intent in the initially present functionality.
 
I think it's not a terrible idea for Meta to remove that functionality. At the same time, I don't see any nefarious intent in the initially present functionality.
Why remove the functionality if there was no nefarious intent? You yourself claimed there was no “flirting” going on, which seems to imply you think that a chatbot presenting as an adult talking to a child that way is completely innocent? Do you think it would be acceptable for a non-family member adult in real life to talk that way to a child?
 
Why remove the functionality if there was no nefarious intent?
Undesirable result: people like you don't like Meta because of said undesirable result, so people like you have one more reason to NOT use it, which means less advertising revenue.
You yourself claimed there was no “flirting” going on, which seems to imply you think that a chatbot presenting as an adult talking to a child that way is completely innocent? Do you think it would be acceptable for a non-family member adult in real life to talk that way to a child?
 
Undesirable result: people like you don't like Meta because of said undesirable result, so people like you have one more reason to NOT use it, which means less advertising revenue.
That would be a fair angle, but you avoided the question: "You yourself claimed there was no “flirting” going on, which seems to imply you think that a chatbot presenting as an adult talking to a child that way is completely innocent? Do you think it would be acceptable for a non-family member adult in real life to talk that way to a child?"
 
That would be a fair angle, but you avoided the question: "You yourself claimed there was no “flirting” going on, which seems to imply you think that a chatbot presenting as an adult talking to a child that way is completely innocent? Do you think it would be acceptable for a non-family member adult in real life to talk that way to a child?"
Because it's a false dichotomy. Just because something isn't completely innocent and 100% appropriate doesn't mean it must be flirting.

Also, define innocent. Appropriateness and innocence of a statement typically depend on context, especially things like who is saying it. You acknowledge this yourself by specifying "non-family member adult." Granted, the chatbot is not a family member, but it is also not a non-family member adult, since it's not even a human to begin with. So forcing "non-family member adult in real life" as a contextual parameter into this innocence test seems unfair.
 