I played with Bard, the AI from Google I think. I was not impressed.
I asked it to write an essay summarizing the AFC Championship game between KC and Baltimore in 2024. I had watched the game, and there were countless accounts of the game on the Internet.
It made numerous errors of fact, including a made-up claim that the game went into OT. (It did not.) When I pointed out the errors and asked it to rewrite the essay, it apologized, then continued to make mistakes.
Then, when I pointed out that if I had asked about something important, like a life-or-death medical concern, such errors could be devastating, it said "the thought fills me with dread," as if it were capable of an emotional reaction. What a crock. It's like it's pretending to be human.
Most AI is still about 3 years behind present day. Try again and ask it to write about the 2019 Super Bowl. I bet it turns out way better.
Did you read about the lawyer who submitted a brief to a court that was written by AI? The AI just made up court decisions that never existed. Needless to say, the judge was none too pleased.

Yeah, no one is suggesting that you ask "AI" a life or death question yet.
This is the worst it will ever be.
It will be better than humans at almost every task very soon. It doesn't have to be perfect to be better than humans.
It only needs to be able to replace the average worker to turn the entire economy upside down.
My point is AI is not capable of performing at an acceptable level - yet.

Yeah, I read that. It doesn't change anything I said.
It's pretty obvious it is not real.

Wow, shit, that's amazing actually. A bit scary as well. We need a Universal Income, so I can love this! lol
Or to recognize the danger inherent. Students are able to cheat in ways that the education system cannot keep up with...

Dismissing AI like Google's Bard over a few errors misses the big picture. This technology is revolutionizing how we handle information. Sure, it flubbed some details about a football game, but does that overshadow its vast capabilities? We're talking about machines that can process and analyze data faster than any human ever could. And about the emotional responses—it’s designed to make tech approachable, not to trick you into thinking it’s human. Let's not nitpick the shortcomings when the advancements are staring us in the face. AI is a game-changer, and its potential is just beginning to unfold. It’s time to recognize its value and push forward, not backpedal over imperfections.
Sure, AI opens up new avenues for cheating, but is that a fault of the technology or a challenge for our education system to evolve? Blaming AI for enabling cheating is like blaming the internet for plagiarism. It’s not about the tool, but how we use it. This is an opportunity to redefine educational approaches, emphasizing critical thinking and problem-solving over rote memorization. Instead of trying to “keep up” with AI, we should harness it to create more innovative and interactive learning environments.
Who blamed AI?
Fair point—maybe “blame” was too strong a word. But the concern seemed to suggest that AI's potential for misuse in academics is a significant downside. My argument is that instead of viewing AI merely as a problem to be managed, we should see it as a tool to be leveraged for more dynamic and engaging education. It’s about shifting our perspective to make the most of what AI can offer, not just guarding against its challenges.
Without a way to check whether something was written by AI, AI is a huge negative for education...
Absolutely, ensuring academic integrity in the age of AI is crucial. But rather than viewing AI as a “huge negative,” we should look at it as a catalyst for developing new tools and strategies. For instance, there are already AI detection software programs being developed to help educators identify AI-generated content. This challenge also gives us a chance to innovate how we teach, specifically emphasizing creativity and analysis—skills that AI can't replicate. All I’m advocating is that we explore how to turn potential negatives into powerful positives.
Remember 2001: A Space Odyssey? Yeah, we're headed there with AI.
Soon we will have AI pilots on commercial flights and in combat zones. And if they make a mistake and people die, the wrong people die, oh well, back to the drawing board.
At what point can AI be trusted?
If I'm on a commercial airliner, I want an experienced human pilot who is highly motivated to land the plane safely, not some software that will simply be rebooted after everyone is dead.

At what point can humans be trusted?
If AI makes mistakes less often than humans, doesn't that mean it is preferable to a human, even if it still makes mistakes?
If AI-driven cars crash 50% less often than human-driven ones, doesn't that mean AI is better, even if it still crashes thousands of times a year?
Makes mistakes? Like the software that just made up court citations out of thin air? That kind of mistake? When the human lawyers are all replaced with AI, who's going to correct that?
Let's not get carried away with Hollywood sci-fi doom scenarios. AI isn't HAL from 2001. We're talking about real-world technology, governed by rigorous safety protocols, not a rogue AI with a mind of its own. Suggesting that AI applications in critical areas like aviation or defense are just reckless experiments is a gross misrepresentation. These systems undergo extensive validation before deployment. And trust? That's earned through proven reliability and effectiveness, not through fear. If we're going to debate AI, let's stick to facts and practical applications, not fictional fears.