I'm actually quite scared of big data being combined with AI.
What is bound to happen is that AI will start to "predict" the future based on millions of correlations within big data sets. Then people will ooh and aah and begin to believe these predictions.
"Well gee the AI says here that if you are born in such a place and had such a family and went through such events and have such medical issues and had such and such schooling and such an income and such siblings and like fried foods, you have 78.542% chance of becoming a criminal before age 20."
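The spurious precision in a number like "78.542%" is worth dwelling on: any model that maps features to a score can be read off as a probability to as many decimal places as you like, whether or not the underlying correlations mean anything. A minimal sketch in Python, with features and weights invented purely for illustration:

```python
import math

# Hypothetical feature values and weights -- entirely made up for this
# example. The point is that ANY such model produces a precise-looking
# number, not that these inputs predict anything real.
features = {"income_bracket": 2.0, "school_rating": 3.0, "likes_fried_food": 1.0}
weights  = {"income_bracket": -0.4, "school_rating": -0.3, "likes_fried_food": 0.2}
bias = 1.1

score = bias + sum(weights[k] * v for k, v in features.items())
probability = 1 / (1 + math.exp(-score))  # logistic function, maps score to (0, 1)

# Formatted to three decimal places, the output looks authoritative even
# though every weight above was invented on the spot.
print(f"{probability * 100:.3f}% chance")
```

The decimals convey confidence the model never earned; a reader sees "40.131%" and assumes something rigorous produced it.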
When AI starts predicting human life from mass data sets, situations like the above will start to take place, leading us to try to change, educate, or stop people over things they've never done. Telling a child they are very likely to become a criminal will have deep effects on how that child is raised and viewed by others. And then this data will be fed to schools and doctors, and parents will be "warned" to "watch out" for signs, and who knows what else.
It might even start happening that people, without having done anything at all, become targets for monitoring, tracking, or even pre-crime type actions taken against them.
The one thing AI might be able to "think" better than humans is predicting the future, because it can analyze a gazillion data sets and find correlations no person ever could. But do we as a human species want AI computers telling us what the future holds? And are we prepared to deal with such correlations and predictions without completely destroying any semblance of human freedom?
And won't all these correlations end up being treated as causation? Will life become more prescriptive than descriptive? Kids "prescribed" not to do well in school? Well, let's just put those kids in programs that funnel them into low-skill labor jobs and not even give them a chance, because the computer said they had a low chance of success.
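The worry about correlation being mistaken for causation isn't hypothetical; with enough variables, strong-looking correlations appear in pure noise. A small Python sketch (made-up numbers, standard-library only) that generates a random "outcome" no attribute could possibly cause, then goes hunting for correlations anyway:

```python
import random

random.seed(0)

# 200 "people", each with 500 completely random, independent attributes.
n_people, n_attrs = 200, 500
data = [[random.gauss(0, 1) for _ in range(n_people)] for _ in range(n_attrs)]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# A random "outcome" that, by construction, nothing in the data causes.
outcome = [random.gauss(0, 1) for _ in range(n_people)]

# Search all 500 attributes for the one that "predicts" the outcome best.
best = max(abs(corr(col, outcome)) for col in data)
print(f"strongest correlation found in pure noise: {best:.2f}")
```

Search enough attributes and something always correlates; a system mining millions of variables will surface "predictors" like this constantly, and they will look exactly like real signal.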
It's impossible to argue with a computer; after all, computers are so "smart" and "unbiased" and look at "only the facts." What would be the point of going to trial or disputing the data produced by AI? Computers can't lie, would be the belief. Can't argue with the data, is the name of the game.
What if you are an 80% match across hundreds of data points with people who all exhibit some negative life choice? What do we do as a society then? Force the person onto some kind of medication? Force surgeries, lifestyle changes, education programs, relocation? Force them into monitoring programs? Put blocks on their computers? Flight bans? Take away other rights, "just in case the bot is right"?
In any case, big data fed into powerful AI and applied to human life through predictive models will be a dangerous game to play. We must be prepared to deal with the predictions we get, and decide what we're willing to do about what they say.