1. We won't create a truly self-conscious AI in our lifetimes.
2. Unless humanity changes in ways I don't see it changing, the probability that such an AI would decide humanity must be controlled or destroyed is tremendous (we are horrible, violent beasts, even if we occasionally pump out a Mozart).
3. Non-conscious AI in bad hands is a tremendous, near-catastrophic threat. Terrorists, rogue states, maybe even a super-billionaire who goes fully loopy with dreams of power... they are going to use it to design biological and other weapons. Game theory just about forces us to use our own "AI" to predict what other AIs will come up with, and guess what we will end up learning in the course of that. "AI" will be a weapon-building weapon.
4. Another tremendous risk: AI will help people severely reduce their own creativity and intelligence. It already is. I think it was in the NYT, but an article covered a study of roughly 330 people centered on essay writing. By all measures, the people who wrote a series of papers without assistance were much better able to remember, defend, and argue further from what they wrote. The AI people... ehhh... which is no surprise. *They* didn't write the paper; they just thought of things that should be in it and prompted the AI.
5. AI weapons: a sci-fi dystopia. Philip K. Dick was right... a true visionary. He saw the ever-increasing obscene crassness and banality of the future. We're headed somewhere between him and Black Mirror.