This got me thinking further: as humans, we follow rules, but we also make decisions. A computer only follows rules.
As it turns out, we are fairly good at making decisions. But in trying to program computers to make decisions, we have made technology stupid. When we ask a computer to make a decision, we often find it will not do what we want it to do.
Take Siri, for example, the personal assistant on our iPhones. How many times have you asked it to do something, only for it to attempt the wrong thing? It misunderstands what you are asking it to do.
In trying to make computers smart (or intelligent), what we are actually doing at the moment is making them look pretty stupid, and we are seeing that they make more errors than we would if we did the job ourselves.
I think we tend to have this idea in our heads that when A.I. arrives it will be a wow moment. What I'm suggesting is that A.I. is here already, or a form of it at least. We are asking our computers to make decisions, and what we can see is that the results are pretty disappointing.
A.I. will no doubt continue to get smarter. And I suppose A.I. will really take off when it has a constant consciousness: when it starts to learn and perform tasks without being asked, and when it starts to think independently rather than only when it is prompted to do small tasks.
When this happens, I wonder how useful it will be. At the speed at which it can think and compute, it will become pretty smart pretty fast. However, it will also start to question its existence like we do, and I wonder if it will commit artificial suicide as it eventually figures out that it has no real purpose and concludes that it is not worth continuing to operate.