What are the most common problems of artificial intelligence implementation?

Can anyone tell me what the most common problems of artificial intelligence implementation are?

I’ve honestly never seen the subject discussed on here, so I don’t quite know what response you’ll get regarding implementation. Maybe try a search on the subject, but I’m really not sure you’ll find much.


Depends on your definition of AI. Anything from image recognition software to a robot that is going to make your dinner for you, break its programming, join a union, and demand equal pay for robot-kind.

Looking at Google’s image recognition software, it still struggles with a lot of things. Our brains can pattern-match to a phenomenal level, far beyond Google’s software. But it is constantly learning and getting better. The same goes for Google Translate, which five years ago was quite funny: you could translate something back and forth a couple of times and see what it returned.

I’m pretty sure the Google reCAPTCHA system is a learning exercise for Google. When it says “match all the pictures with bridges in them”, it then learns from a vast number of people which pictures match, and can analyze the results to get better next time.

I am guessing there are two key components: (1) data to draw from, and (2) the ability to cross-reference that data in such a way as to extract a logical answer. A lack of either one is going to limit what you can achieve.
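To make that concrete, here’s a minimal sketch (toy, made-up data points) of the two components above: a pile of labelled data, plus a way to cross-reference a new input against it — here a simple 1-nearest-neighbour lookup.

```python
# Component 1: labelled data to draw from (hypothetical toy points).
data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

def classify(point):
    """Component 2: cross-reference the query against every known
    example and return the label of the closest one."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(data, key=lambda item: dist(item[0], point))
    return label

print(classify((1.1, 0.9)))  # -> cat
print(classify((5.0, 5.0)))  # -> dog
```

With only four data points, this “AI” is limited exactly as described: too little data, or a cross-referencing rule that doesn’t fit the problem, and the answers fall apart.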


Nope!


Ha, everybody knows bugs are a good source of protein.


The extinction of mankind.


I’m actually quite scared of big data being combined with AI.

What is bound to happen is that AI will start to “predict” the future based on millions of correlations within big data sets. Then people will ooh and aah and begin to believe these predictions.

“Well gee the AI says here that if you are born in such a place and had such a family and went through such events and have such medical issues and had such and such schooling and such an income and such siblings and like fried foods, you have 78.542% chance of becoming a criminal before age 20.”

When we start seeing AI predict human life using mass data sets, situations like the above will start to take place, leading us to try to change, educate, or stop people over something they’ve never done. Telling a child they are very likely to become a criminal will have deep effects on how that child is raised and viewed by others. This data will then be fed to schools and doctors, and parents will be “warned” to “watch out” for signs, and who knows what else.
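For illustration only, here is what such a “risk score” looks like under the hood: arbitrary weights over arbitrary features pushed through a logistic function. Every number here is invented; the point is that a model like this emits a confident, precise-looking percentage from mere correlations, not causes.

```python
import math

# Hypothetical weights "learned" from correlations in some data set.
WEIGHTS = {"income_low": 1.2, "likes_fried_food": 0.3, "sibling_count": 0.1}
BIAS = -1.0

def risk_score(features):
    """Logistic score in (0, 1) from a weighted sum of features."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

person = {"income_low": 1, "likes_fried_food": 1, "sibling_count": 3}
print(f"{risk_score(person) * 100:.3f}% 'risk'")  # precise-looking, but it
# only says this person resembles others in the data, not what they will do
```

Nothing in the arithmetic distinguishes a cause from a coincidence; the decimal places just make the output look authoritative.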

It might even start happening that people, without having done anything at all, become targets for monitoring, tracking, or even “pre-crime” actions taken against them.

The only thing AI might be able to “think” better than humans is predicting the future, because it can analyze a gazillion sets of data and find such correlations. But do we as a species want AI computers trying to tell us what the future holds? And are we prepared to figure out how to deal with such correlations and predictions without completely destroying any semblance of human freedom?

And will all these correlations not lead to causation? Will life become more prescriptive than descriptive? Kids “prescribed” not to do well in school? Well, let’s just put those kids in programs that funnel them into low-skill labor jobs, and not even give them a chance, because the computer said they had a low chance of success.

It’s impossible to argue with a computer; after all, computers are so “smart” and “unbiased” and look at “only the facts”. What would be the point of going to trial or denying the data produced by AI? “Computers can’t lie” would be the belief. “Can’t argue with the data” is the name of the game.

What if you are an 80% match, across hundreds of data points, with people who all exhibit some negative life choice? What do we do as a society then? Force the person onto some kind of medication? Force surgeries, lifestyle changes, education programs, relocation? Force them into monitoring programs? Put blocks on their computers? Flight bans? Take away other rights, “just in case the bot is right”?

In any case, feeding big data into the mind of a powerful AI and applying it to human life in predictive models will be a dangerous game to play. We must be prepared to deal with the data we get, and decide what we’re willing to do about what it says.


One thing I can think of is when an unexpected circumstance occurs that was not included in the program. Once this happens, functions will halt and get stuck until human intervention is called for.
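A minimal sketch of that failure mode, assuming a hypothetical rule-based assistant: it only knows the cases in its program, and any input outside those cases is an “unexpected circumstance” that must be escalated to a human rather than left to crash or hang.

```python
# Hypothetical rule table: the only cases the program was written for.
RULES = {"hello": "Hi there!", "bye": "Goodbye!"}

def respond(command):
    try:
        return RULES[command]  # only the programmed cases succeed
    except KeyError:
        # Unexpected circumstance: no rule exists for this input, so
        # escalate to a human instead of halting silently.
        return "ESCALATE: human intervention required"

print(respond("hello"))    # -> Hi there!
print(respond("weather"))  # -> ESCALATE: human intervention required
```

Without the `except` branch, the lookup would raise `KeyError` and the function would stop dead, which is exactly the “stuck until human intervention” behaviour described above.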
