I disregard prophesied apocalyptic scenarios (except for the recurrence of known natural catastrophes), but I'm increasingly interested in the probability of AI taking on a life of its own, at humanity's expense.
Few would dispute that, over time, AI will become increasingly enmeshed with our lives. Many believe that we'll always retain control, with preemptive capabilities for cases that would otherwise head out of control. Some argue for unlimited "progress", no worries.
I'm afraid what people don't realize is that the future utopia will not be designed by vote, but by people with vast resources and personal motives. They'll be in control at first, often with short-term gains as their priority. They are just human, with imperfect judgment, lofty intentions plated with gold, and built-in fallibilities. AI machines won't have the benefit of millions of years of evolution to shape their behaviors, but will operate at first at the will and whim of their programmers, for better and for worse.
Does anyone else see a slow death of Homo sapiens coming, at the "hands" of owners and programmers at first, but then, over generations, with all existential priorities ultimately defined purely by AI mechanisms, regardless even of owner and programmer intentions? Bugs happen!
Here's a short excerpt on the subject from a recent article.