As much as Stephen Hawking and Elon Musk believe that a killer AI may one day threaten the survival of humanity, that hasn't stopped some researchers from ploughing ahead to build smarter computational systems than ever before. Recently, a team of researchers from the University of Illinois at Chicago and an AI research group in Hungary took the 2012 version of ConceptNet, an MIT open-source system, and put it through a standard IQ test, where it scored as highly as a four-year-old child.
The Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) test, when given to children, provides an estimate of cognitive function and tests for strengths and weaknesses in cognitive thinking. When applied to the digital intelligence, the test was able to benchmark its ability to understand language and produce relevant responses.
As with young children, some of the problems the AI encountered involved interpreting language. In one instance, it took a reference to the tool, a saw, as the past tense of "see". When asked what someone would use a saw for, it responded, "an eye is used to see."
This sort of language processing clearly needs improvement before an AI like this could be considered intelligent, but MIT and the researchers feel that the development of tools like Siri and Cortana shows that road is already being paved.
MIT's Technology Review commented on the news, stating (via Phys) that: “Taking [these results] at face value, it's taken 60 years of AI research to build a machine in 2012 that can come anywhere close to matching the common sense reasoning of a four-year-old. But the nature of exponential improvements raises the prospect that the next six years might produce similarly dramatic improvements. So a question that we ought to be considering with urgency is: what kind of AI machine might we be grappling with in 2018?”
KitGuru Says: As much as I respect the opinions of people like Musk and Hawking, I'm not quite so worried about their ideas of an AI-run future. While we need to be careful that someone doesn't program an AI to be malicious, I don't think one could ever ‘want' to hurt us. We don't know what consciousness like that even is, let alone how to make one ourselves.
Some men just want to watch the world burn…
I wonder if we’ll see some regulations on AI research before the machine overlords take over?
Actually, the UK is one of only two countries with some regulations on AI in place. The EPSRC (the council in charge of all engineering and physical sciences research in the UK) has had a complete framework in place since the early 2000s.
Ideally though, we shouldn’t regulate research! Instead, we should put limits on its usage. Robots and AI must NEVER have an anthropomorphic form or “human-like” names (e.g. Siri), and in general the public must treat them as tools and nothing more. The moment we start developing “emotions” is the moment we should be afraid….
I’m all for free thought and research, but some things just need to be restricted because they could pose such a severe risk to humanity. An oversight board, something you see fairly commonly when human or animal testing is needed for a drug or other product, would make sense.
Hello Alex,
Thank you for the reply. As I said, in the UK, AI researchers (including myself as a PhD student) have the EPSRC overseeing any ethical implications of our projects.
Please have a look at: https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
Other than the above, I want to emphasise again that limiting research is not the answer. Instead, we should try to educate the public and avoid exploiting people by promoting robots with human attributes or as our “friends”. For example, your calculator is already able to do maths much better than you (technically, it contains a primitive form of AI), but you don’t see it trying to take over your pocket, do you? 😉
Literally, the only people within the academic world who “express fears of robots taking over” in public are “scientists” like Hawking and Musk, who are always trying to get the public spotlight.
If you are interested, I am always up for a chat about ethics. Another good source of information is Dr Joanna Bryson. I think her last talk at Oxford is available on YouTube for free, along with all of her ideas and published work.
The irony of this conversation: I attended a public debate on this topic at the University of Westminster only a few hours ago.
Andreas