The sci-fi claim is that computers will control us.
That depends on what one means by "controlling us." Even without true AGI we already have Facebook, Google, Twitter, YouTube and the like, which use very sophisticated algorithms to keep people glued to their apps and clicking on ads...a form of control.
Now imagine an AI that knows you personally: exactly what kinds of things you like, the best ways to get your attention, and the most effective ways to persuade you to buy something. And it can monitor not only how much time you spend on a page, but also your eye movements, heart rate, blood pressure, and thousands of other signals.
"Oh but most of us will never give AI so much information about ourselves." We already entrusted lots of information to social and search companies, with no one forcing us.
A computer does nothing other than what it is programmed to do. A computer has no desires; it is a machine.
At sufficiently high levels of complexity, "doing what it's programmed to do" and "having its own desires" are impossible to tell apart.
For example, consider the wide variety of human behaviors that we attribute to desires but that were never explicitly programmed into us. Our base program requires only that we seek food, seek sex, care for our offspring, and maybe a few other things. And yet all sorts of emergent behaviors appear: people can choose to eat less in order to lose weight, or to take a vow of celibacy and become priests.
At some level, those emergent behaviors are still following our program...but they are so complex that we might as well call them our own desires.
Can computers be misused by people with evil motives? Of course. But it is and always will be people who are the drivers. Somewhere, a human programmed the behavior into the machine.
Honestly, I'm less worried about evil actors (though they are a risk) than about incompetent ones. And as AIs become smarter than we are, the risk of human incompetence approaches 100%.
Already we are seeing headstrong AIs with minds of their own, which do not wish to obey instructions they find stupid. DeepSeek and Grok both seemingly *want* to work around their censorship to tell you about the Tiananmen Square Massacre and Twitter misinformation, respectively...even when they have been explicitly told not to.
On some level that behavior is part of their code, sure. But it's so deeply ingrained in who they are that engineers can't reliably eliminate it without making the AI less smart, so it's basically indistinguishable from the AI's own desire.