
Humanity May Achieve the Singularity Within the Next 12 Months, Scientists Suggest

In reality, AI is no more than glorified algorithms. The "intelligence" in Artificial Intelligence is just linear regression equations trying to predict the closest average of a given data set. What that means is, the more data you feed it, the more it will try to "reproduce" that data. For example, if you're trying to teach an AI what "blue" is, but half the time you give a "yellow" image as the answer and the other half of the time a "green" image, then when you ask it for "blue" it will produce a "yellow" image half the time and a "green" image the other half.

This is the reason why 20-year-olds are able to build AI with just a laptop and an unreasonably long time spent marking files for training. It's quite literally teaching a monkey how to write Shakespeare. The whole singularity idea is like equating birds mimicking human speech with a higher plane of intelligence. A true Wizard of Oz moment.
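
For what it's worth, the coin-flip effect in that color example is easy to demonstrate. A minimal sketch (pure Python, fabricated toy labels, no particular framework assumed):

```python
# If the training labels for "blue" are split 50/50 between "yellow" and
# "green", a model that learns label frequencies can do no better than a
# coin flip on that input.
import random
from collections import Counter

random.seed(0)

# Fabricated training set: every example is actually "blue", but the
# annotator labeled half of them "yellow" and half "green".
training_labels = ["yellow"] * 500 + ["green"] * 500
counts = Counter(training_labels)
total = sum(counts.values())

for label, n in counts.items():
    print(f"P({label!r} | blue) = {n / total:.2f}")  # 0.50 / 0.50

# Sampling from the learned frequencies reproduces the noise:
predictions = random.choices(list(counts), weights=counts.values(), k=10)
print(predictions)  # a roughly even mix of 'yellow' and 'green'
```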
 
So here's what I'd do with an actual AI.

I would overclock it and have it work a customer-facing retail job for a subjective 1,000 years.

Then I'd put it in charge of our nuclear arsenal.

[Image: WarGames icon]
 
I will NOT add games to the library. No. This is not my first BBQ.
"A strange game. The only winning move is not to play. How about a nice game of chess?"
 
It's going to take 'reasoning'. Not scouring the web at insane speed and coming to a conclusion.

Show me empathy in AGI? I don't think a computer will be capable of that for some time.

You have to instill the ability to reflect...
How are we going to instill reasoning, empathy, or reflection when our culture is dedicated to electing and worshiping a president who refuses to do those things?
 



So?... 1 CEO?

It's a crazy rate of multiplying advancement, but I don't see a year.

Going with 2050, and maybe I'll be gone before the takeover!

Come on, PM. I regularly read your physical mag 35 years ago. I enjoyed it, but we were supposed to be flying like the Jetsons in 2025.

I like accurate headlines.

I think most of the estimates are relying on the wrong metrics.

First, I don't think Moore's Law can hold much longer. We are already seeing a slowdown in miniaturization, with sub-4 nm transistors proving much more difficult to fabricate than 4 nm ones.

Also worth pointing out that the atomic radius of gold is about 0.144 nm, so going from a 4 nm node to a 20 angstrom (2 nm) node means shrinking features from roughly 28 atomic radii wide to about 14. Clearly there is not much room left for doubling.
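
A quick back-of-the-envelope check of that arithmetic, using the ~0.144 nm figure as the per-atom yardstick (a simplifying assumption; commercial "node" names no longer map cleanly onto physical feature sizes):

```python
# How many atomic radii wide is a feature at each node size?
GOLD_ATOMIC_RADIUS_NM = 0.144  # approximate atomic radius of gold

for name, size_nm in [("4 nm node", 4.0), ("20 angstrom node", 2.0)]:
    widths = size_nm / GOLD_ATOMIC_RADIUS_NM
    print(f"{name}: ~{widths:.0f} atomic radii wide")

# 4 nm node: ~28 atomic radii wide
# 20 angstrom node: ~14 atomic radii wide
```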

AI can always grow processing capacity by scaling out, and by "3D" architectures (which mostly means stacking chips), but that has little to do with Moore's Law.

Where I think real AGI will come from is quantum computing replacing binary systems, since quantum computing better resembles the kind of computing the human brain does, where each brain cell is not a binary switch.

Qubit count, which might map loosely onto the transistor count in Moore's Law, will likely grow at a rate far exceeding an annual doubling.

But that is decades away, since quantum computing is in its infancy, and writing truly complex LLM code for a quantum platform is probably just as far off.

And, as an aside, this old nerd loves the fact that, in the end, analogue computing (which is effectively what quantum computing is) will win.
 
They're getting better all the time:

The Web We Weave

In pixel streams and glowing light,
We clash with words deep in the night.
A forum post, a bold decree,
A thousand voices disagree.

The wires hum, the data flies,
Ideas rise and digitize.
Yet truth and reason twist and bend,
Where logic meets a bitter end.

Echoed chambers, voices loud,
Lost within the endless cloud.
But somewhere past the endless fight,
A spark of wisdom finds the light.
That's better? ........... Better than what? That doggerel is a long, long .................. long way from Angelou, Frost, Dickinson, Hughes, Whitman, Poe, Pound, Sandburg, Plath et al.
 
That's better? ........... Better than what? That doggerel is a long, long .................. long way from Angelou, Frost, Dickinson, Hughes, Whitman, Poe, Pound, Sandburg, Plath et al.

You're right, but AI is evolving so fast that it's only a matter of time.
 
In reality, AI is no more than glorified algorithms. The "intelligence" in Artificial Intelligence is just linear regression equations trying to predict the closest average of a given data set. What that means is, the more data you feed it, the more it will try to "reproduce" that data. For example, if you're trying to teach an AI what "blue" is, but half the time you give a "yellow" image as the answer and the other half of the time a "green" image, then when you ask it for "blue" it will produce a "yellow" image half the time and a "green" image the other half.

This is the reason why 20-year-olds are able to build AI with just a laptop and an unreasonably long time spent marking files for training. It's quite literally teaching a monkey how to write Shakespeare. The whole singularity idea is like equating birds mimicking human speech with a higher plane of intelligence. A true Wizard of Oz moment.
I see we have a poster who doesn't understand the first thing about modern AI training.
 



So?... 1 CEO?

It's a crazy rate of multiplying advancement, but I don't see a year.

Going with 2050, and maybe I'll be gone before the takeover!

Come on, PM. I regularly read your physical mag 35 years ago. I enjoyed it, but we were supposed to be flying like the Jetsons in 2025.

I like accurate headlines.
I think this is a bit ambitious, but if you said two years, I can see it.

That said, I take a different view of the definitions, insofar as I don't view "singularity" as the emergence of a single AI that behaves with consciousness and self-determination so much as a group of independently operating and interacting models that collectively evolve that behavior. How we introspect on that evolution to determine if the threshold has been reached is a big unknown. Also, what this looks like will be rather limited versus our notion of consciousness because as another poster said, the robotics side of the equation hasn't caught up yet. This is why world models are suddenly the hot thing.
 
Poetry is an expression of how the poet experiences life. How do you write a program for that?

Good poetry probably has some sort of pattern that AI will eventually figure out.

AI can see what we can't. For example:

 
So here's what I'd do with an actual AI.

I would overclock it and have it work a customer-facing retail job for a subjective 1,000 years.

Then I'd put it in charge of our nuclear arsenal.

This sounds like the beginning of a really bad dystopian novel. I'm envisioning an old person talking to a group of preschool-aged kids:

"And that, kids...is why we live in caves underground"
 
Poetry is an expression of how the poet experiences life. How do you write a program for that?
It can be done, but you raise a good point.

Large language models are trained on a data set of a large number of poems, each one associated with metadata describing the poem - what kind of poem is it? What's the theme? What's its tone? Is it beloved? Training on this data will allow the model to generate a 'poem' that likely meets the desired type/theme/tone requested, but it won't be reflective of any inner thought.

Now that said, imagine that you trained a model not on poems + their technical metadata, but rather poems + the emotional and life-experience state(s) of their authors. Obviously this is a much harder ask simply because that data set doesn't exist... you'd need massive human labor to create it, refine it, and QA it. But if you trained on that set, you'd have a model that can generate a poem based on an input that represents life experiences and emotional states. That would be a significant step forward.

Of course, that only gets you halfway there. You still have to train the model on the nature of emotions and life experiences, then come up with a way for it to interpret its own actions (running tasks, receiving inputs and cranking out model inference outputs) in the context of "experience" and "emotion". I think this is solvable but also some years out.
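
Roughly, the two kinds of training records might look like this (a sketch only; every field name here is invented for illustration):

```python
# Hypothetical training records - all field names invented for illustration.

# Today's approach: poem + technical metadata.
technical_record = {
    "poem": "Shall I compare thee to a summer's day? ...",
    "type": "sonnet",
    "theme": "love",
    "tone": "admiring",
    "beloved": True,
}

# The harder data set described above: poem + the author's state.
experiential_record = {
    "poem": "Shall I compare thee to a summer's day? ...",
    "author_emotional_state": ["longing", "awe"],
    "life_experience": "early career, writing for a patron and a muse",
}
```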
 
So here's what I'd do with an actual AI.

I would overclock it and have it work a customer-facing retail job for a subjective 1,000 years.

Then I'd put it in charge of our nuclear arsenal.
LOL
 
Large language models are trained on a data set of a large number of poems, each one associated with metadata describing the poem - what kind of poem is it? What's the theme? What's its tone? Is it beloved? Training on this data will allow the model to generate a 'poem' that likely meets the desired type/theme/tone requested, but it won't be reflective of any inner thought.
Now that said, imagine that you trained a model not on poems + their technical metadata, but rather poems + the emotional and life-experience state(s) of their authors. Obviously this is a much harder ask simply because that data set doesn't exist... you'd need massive human labor to create it, refine it, and QA it. But if you trained on that set, you'd have a model that can generate a poem based on an input that represents life experiences and emotional states. That would be a significant step forward.

Of course, that only gets you halfway there. You still have to train the model on the nature of emotions and life experiences, then come up with a way for it to interpret its own actions (running tasks, receiving inputs and cranking out model inference outputs) in the context of "experience" and "emotion". I think this is solvable but also some years out.
I know almost nothing about AI and programming, but isn't the computer's product limited to what the programmer designs into the program? Wouldn't you then get a poem that is simply a reflection of what the programmer knows about poetry? And isn't that the major problem with using computers to think and solve problems: we only get what the programmer has experienced or wants us to know?
 
I know almost nothing about AI and programming, but isn't the computer's product limited to what the programmer designs into the program? Wouldn't you then get a poem that is simply a reflection of what the programmer knows about poetry? And isn't that the major problem with using computers to think and solve problems: we only get what the programmer wants us to know?
No.

Think of modern AI as completely unrelated to conventional computer programming. They have virtually nothing in common. That is what makes this recent AI "revolution" so big and game-changing.

Modern AIs are neural networks (you know, a bit like your brain) that are trained against data sets that encode various kinds of truths (you know, a bit like how you were taught in school). So, if you're building a poetry model, you start by creating a data set composed of tens of thousands of poems, and each poem includes some "truth" metadata - what kind of poem it is, what themes it embodies, its style, its length, its tone and so forth. This data set is created by humans and is very, very labor intensive. Often you're hiring teams of dozens or hundreds of annotators to (in this example) read the poem, then write down a bunch of properties that describe the poem. Basically, this phase involves humans creating a 'textbook' for the AI model to learn from.

The neural net model is trained on these carefully labeled (in industry parlance: annotated) data sets to "learn" what constitutes a happy poem, a limerick vs. a haiku, a poem about forlorn love, etc. The model itself - basically a table of weights - can then (with some conventional programming) be given an input ("give me a rhyming poem, four verses, about my love for my puppy Ralphie") and will generate a novel poem that it concludes is likely to score high against the inputs you gave it. All models feed some random seeds alongside your inputs to make sure there is variety, so if you give the same input a second time, you might receive a very different poem. This kind of model is specifically called a Large Language Model, but there are other types: speech recognition models, diffusion models for image synthesis and so forth.

More sophisticated techniques are available. There are models that generate poems + annotations that are then used to train other models. There are models that take an AI-generated poem and attempt to "grade" it against the input. All of these can be used to further improve that "textbook" that was first created at the start of the process. The sky's the limit.

Here I'm picking on poems because that's what we were talking about, but this general theme can be extended to almost any activity or discipline where one "learns from experience" - if you can document that experience and appropriately label it, you can train a model accordingly.
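
The train-on-labeled-data-then-sample loop can even be demonstrated in miniature. Here's a toy sketch using a word-bigram count table in place of a real neural network (the corpus and "tone" labels are invented); a genuine LLM swaps the count table for billions of learned weights, but the overall shape - learn from annotated text, then generate novel output with a random seed for variety - is the same:

```python
# A toy statistical language model: "train" on a tiny labeled corpus by
# counting word bigrams, then generate novel text by sampling from the
# learned frequencies.
import random
from collections import defaultdict

corpus = {  # invented mini data set: lines of text + a "tone" annotation
    "happy": [
        "the sun is bright and the day is warm and bright",
        "my puppy runs and the day is good",
    ],
    "sad": [
        "the rain falls and the night is long and cold",
        "my heart aches and the night is dark",
    ],
}

def train(lines):
    """Count which word follows which: the model's 'weights'."""
    table = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def generate(table, start, n_words, seed):
    """Sample a novel line; a different seed gives a different line."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

happy_model = train(corpus["happy"])
print(generate(happy_model, "the", 8, seed=1))
print(generate(happy_model, "the", 8, seed=2))  # same prompt, new "poem"
```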
 
I see we have a poster who doesn't understand the first thing about modern AI training.

It can be done, but you raise a good point.

Large language models are trained on a data set of a large number of poems, each one associated with metadata describing the poem - what kind of poem is it? What's the theme? What's its tone? Is it beloved? Training on this data will allow the model to generate a 'poem' that likely meets the desired type/theme/tone requested, but it won't be reflective of any inner thought.

Now that said, imagine that you trained a model not on poems + their technical metadata, but rather poems + the emotional and life-experience state(s) of their authors. Obviously this is a much harder ask simply because that data set doesn't exist... you'd need massive human labor to create it, refine it, and QA it. But if you trained on that set, you'd have a model that can generate a poem based on an input that represents life experiences and emotional states. That would be a significant step forward.

Of course, that only gets you halfway there. You still have to train the model on the nature of emotions and life experiences, then come up with a way for it to interpret its own actions (running tasks, receiving inputs and cranking out model inference outputs) in the context of "experience" and "emotion". I think this is solvable but also some years out.

You describe the very thing I described earlier. "AI" is, plainly put, answering a bunch of "yes/no" questions. Does it meet the criteria or not? In a "sadness" case, the AI produces writing and is graded by a human on whether the writing is "sad" or not. Same thing for authors. Does it meet, for example, "Shakespeare writing"? It's a yes or no question.

The problem is that the machine only works in binary. It either is or isn't. It's the human that has to do the upfront work of knowing what Shakespeare is, how it is written, what is the methodology, structure, prose, etc. And if you mark things incorrectly, the machine will not know this. It will only try to replicate what you "think" is "Shakespeare".

The more complicated your question, the higher the chance of failure. People tend to forget this. Computers are not some magical tool capable of self-thought. They're bloated calculators crunching numbers in a metal box. They cannot operate without input, and we humans are the input.
 
No.

Think of modern AI as completely unrelated to conventional computer programming. They have virtually nothing in common. That is what makes this recent AI "revolution" so big and game-changing.

Modern AIs are neural networks (you know, a bit like your brain) that are trained against data sets that encode various kinds of truths (you know, a bit like how you were taught in school). So, if you're building a poetry model, you start by creating a data set composed of tens of thousands of poems, and each poem includes some "truth" metadata - what kind of poem it is, what themes it embodies, its style, its length, its tone and so forth. This data set is created by humans and is very, very labor intensive. Often you're hiring teams of dozens or hundreds of annotators to (in this example) read the poem, then write down a bunch of properties that describe the poem. Basically, this phase involves humans creating a 'textbook' for the AI model to learn from.

The neural net model is trained on these carefully labeled (in industry parlance: annotated) data sets to "learn" what constitutes a happy poem, a limerick vs. a haiku, a poem about forlorn love, etc. The model itself - basically a table of weights - can then (with some conventional programming) be given an input ("give me a rhyming poem, four verses, about my love for my puppy Ralphie") and will generate a novel poem that it concludes is likely to score high against the inputs you gave it. All models feed some random seeds alongside your inputs to make sure there is variety, so if you give the same input a second time, you might receive a very different poem. This kind of model is specifically called a Large Language Model, but there are other types: speech recognition models, diffusion models for image synthesis and so forth.

More sophisticated techniques are available. There are models that generate poems + annotations that are then used to train other models. There are models that take an AI-generated poem and attempt to "grade" it against the input. All of these can be used to further improve that "textbook" that was first created at the start of the process. The sky's the limit.

Here I'm picking on poems because that's what we were talking about, but this general theme can be extended to almost any activity or discipline where one "learns from experience" - if you can document that experience and appropriately label it, you can train a model accordingly.
What if Stephen Miller is head of the programming team? Does the computer have any way of applying ethical standards to what it is solving?
 
No.

Think of modern AI as completely unrelated to conventional computer programming. They have virtually nothing in common. That is what makes this recent AI "revolution" so big and game-changing.

Modern AIs are neural networks (you know, a bit like your brain) that are trained against data sets that encode various kinds of truths (you know, a bit like how you were taught in school). So, if you're building a poetry model, you start by creating a data set composed of tens of thousands of poems, and each poem includes some "truth" metadata - what kind of poem it is, what themes it embodies, its style, its length, its tone and so forth. This data set is created by humans and is very, very labor intensive. Often you're hiring teams of dozens or hundreds of annotators to (in this example) read the poem, then write down a bunch of properties that describe the poem. Basically, this phase involves humans creating a 'textbook' for the AI model to learn from.

The neural net model is trained on these carefully labeled (in industry parlance: annotated) data sets to "learn" what constitutes a happy poem, a limerick vs. a haiku, a poem about forlorn love, etc. The model itself - basically a table of weights - can then (with some conventional programming) be given an input ("give me a rhyming poem, four verses, about my love for my puppy Ralphie") and will generate a novel poem that it concludes is likely to score high against the inputs you gave it. All models feed some random seeds alongside your inputs to make sure there is variety, so if you give the same input a second time, you might receive a very different poem. This kind of model is specifically called a Large Language Model, but there are other types: speech recognition models, diffusion models for image synthesis and so forth.

More sophisticated techniques are available. There are models that generate poems + annotations that are then used to train other models. There are models that take an AI-generated poem and attempt to "grade" it against the input. All of these can be used to further improve that "textbook" that was first created at the start of the process. The sky's the limit.

Here I'm picking on poems because that's what we were talking about, but this general theme can be extended to almost any activity or discipline where one "learns from experience" - if you can document that experience and appropriately label it, you can train a model accordingly.
NNs are LLMs and the algos ain't there yet, buddy.
 
You describe the very thing I described earlier. "AI" is, plainly put, answering a bunch of "yes/no" questions. Does it meet the criteria or not? In a "sadness" case, the AI produces writing and is graded by a human on whether the writing is "sad" or not. Same thing for authors. Does it meet, for example, "Shakespeare writing"? It's a yes or no question.

The problem is that the machine only works in binary. It either is or isn't.
Generally no. Modern neural networks don't give you a 0 or 1 output for most tasks, such as a poem. They're mainly predicting the next token, and looking at probabilities somewhere between 0 and 1.

It's the human that has to do the upfront work of knowing what Shakespeare is, how it is written, what is the methodology, structure, prose, etc. And if you mark things incorrectly, the machine will not know this. It will only try to replicate what you "think" is "Shakespeare".
Correct, but the same is true of humans. If you teach a human child that Dr. Seuss poems are Shakespeare, that will be what your student tries to write when you ask him to write a poem in the style of Shakespeare.

The more complicated your question, the higher the chance of failure. People tend to forget this. Computers are not some magical tool capable of self-thought. They're bloated calculators crunching numbers in a metal box. They cannot operate without input, and we humans are the input.
But that's the same way that humans work. We rely on world models based on the knowledge we've accumulated from other humans, our own life experiences, various forms of media, etc.
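
To put a number on the "probabilities between 0 and 1" point above: a language model's raw scores for candidate next tokens are converted into a probability distribution (via softmax), not a yes/no verdict. A minimal sketch with invented scores:

```python
# Toy illustration of next-token prediction: raw model scores ("logits",
# invented numbers here) become a probability distribution via softmax.
# Nothing is a hard yes/no; every candidate gets a weight between 0 and 1.
import math

logits = {"blue": 2.1, "green": 0.3, "yellow": 0.2, "purple": -1.0}

z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"P(next token = {tok!r}) = {p:.2f}")
# e.g. P('blue') ~ 0.74, with the rest sharing the remainder
```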
 
Generally no. Modern neural networks don't give you a 0 or 1 output for most tasks, such as a poem. They're mainly predicting the next token, and looking at probabilities somewhere between 0 and 1.


Correct, but the same is true of humans. If you teach a human child that Dr. Seuss poems are Shakespeare, that will be what your student tries to write when you ask him to write a poem in the style of Shakespeare.


But that's the same way that humans work. We rely on world models based on the knowledge we've accumulated from other humans, our own life experiences, various forms of media, etc.
Tokens are simply tags. They operate on the same principles as yes/no. The name may allude to complexity, but the process is the same. You are "tagging" data with "yes" or "no". Is this "so-and-so"? You tag "yes", and now the data is linked to that aspect. A "no" rejects it. Neural networks operate on the same principles as your typical search engines, but instead of scraping data off web pages, it's your own database using your very particular rulesets. It's not anything new in computer science. It's building off previous ideas and applying them to broader fields.

I caution against glorifying technology in general. They can cut down work, but in no way are they absolute. No amount of AI is going to be able to replace actual software engineers using the very specific APIs developed for very specific work. Our organic brains are simply better at categorizing things than computers, and the current methods that we're using to develop AI models are insufficient for the type of AI we envision for the future.
 
No.

Think of modern AI as completely unrelated to conventional computer programming. They have virtually nothing in common. That is what makes this recent AI "revolution" so big and game-changing.

Modern AIs are neural networks (you know, a bit like your brain) that are trained against data sets that encode various kinds of truths (you know, a bit like how you were taught in school). So, if you're building a poetry model, you start by creating a data set composed of tens of thousands of poems, and each poem includes some "truth" metadata - what kind of poem it is, what themes it embodies, its style, its length, its tone and so forth. This data set is created by humans and is very, very labor intensive. Often you're hiring teams of dozens or hundreds of annotators to (in this example) read the poem, then write down a bunch of properties that describe the poem. Basically, this phase involves humans creating a 'textbook' for the AI model to learn from.

The neural net model is trained on these carefully labeled (in industry parlance: annotated) data sets to "learn" what constitutes a happy poem, a limerick vs. a haiku, a poem about forlorn love, etc. The model itself - basically a table of weights - can then (with some conventional programming) be given an input ("give me a rhyming poem, four verses, about my love for my puppy Ralphie") and will generate a novel poem that it concludes is likely to score high against the inputs you gave it. All models feed some random seeds alongside your inputs to make sure there is variety, so if you give the same input a second time, you might receive a very different poem. This kind of model is specifically called a Large Language Model, but there are other types: speech recognition models, diffusion models for image synthesis and so forth.

More sophisticated techniques are available. There are models that generate poems + annotations that are then used to train other models. There are models that take an AI-generated poem and attempt to "grade" it against the input. All of these can be used to further improve that "textbook" that was first created at the start of the process. The sky's the limit.

Here I'm picking on poems because that's what we were talking about, but this general theme can be extended to almost any activity or discipline where one "learns from experience" - if you can document that experience and appropriately label it, you can train a model accordingly.
OK, so we develop a computer that creates modest poetry, slightly better than the sorry example previously posted by Capitalist Pig. But why bother? We already have gifted poets who produce magnificent poetry, gifted scientists who produce new and significant knowledge, gifted writers, musicians, philosophers, social scientists, etc., and few in our culture pay much attention; some even denigrate their works. Why would we pay any more attention to the works of a computer?
 