AI Fails University of Tokyo Admission Test for Second Time

The National Institute of Informatics has given up on making its Todai Robot AI smart enough to get into the University of Tokyo.


Put one in the win column for us humans. Researchers in Japan have given up on developing an AI that is smart enough to pass the University of Tokyo’s admission test.

The National Institute of Informatics (NII) has been working on the Todai Robot AI for the last five years. The goal was to get Todai Robot into Japan’s top university by 2020.

The AI took the admission test in 2015 and scored 511 out of 950. That score is well below the requirement, but it’s higher than the national average of 416. The researchers had high hopes the AI would perform better this time around, but it reportedly earned nearly the same score.

Noriko Arai, a professor at the NII, tells the Japan Times that the “AI is not good at answering a type of question that requires the ability to grasp meaning in a broad spectrum.” Essentially, it appears Todai Robot struggled with its critical thinking skills.

Here are some excerpts from a 2013 interview on the Todai Robot Project website that detail some of the challenges an AI might encounter with the University of Tokyo’s admission test:

Why was passing the university entrance exam selected as the project’s goal?

Miyao: The key point is that what’s difficult for people is different from what’s difficult for computers. Computers excel at calculation, and can beat professional chess and shogi players at their games. IBM’s “Watson” question-answering system became a quiz show world champion. For a person, beating a professional shogi player is far harder than passing the University of Tokyo entrance exam, but for a computer, shogi is easier. What makes the University of Tokyo entrance exam harder is that the rules are less clearly defined than they are for shogi or a quiz show. From the perspective of using knowledge and data to answer questions, the university entrance exam requires a more human-like approach to information processing. However, it does not rely as much on common sense as an elementary school exam or everyday life, so it’s a reasonable target for the next step in artificial intelligence research.

Does the difficulty vary by test subject?

Miyao: What varies more than the difficulty itself are the issues that have to be tackled by artificial intelligence research. The social studies questions, which test knowledge, rely on memory, so one might assume they would be easy for computers, but it’s actually difficult for a computer to determine whether the text of a problem corresponds to knowledge the computer possesses. What makes that identification possible is “Textual Entailment Recognition”, an area in which we are making progress but still face many challenges. Ethics questions, on the other hand, frequently cover common sense and require the reader to understand the Japanese language, so they are especially difficult for computers, which lack this common sense. Personally, I had a hard time with questions requiring memorization, so I picked ethics.
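To make the entailment idea concrete, here is a minimal sketch of a bag-of-words entailment check in Python. This is purely illustrative and is not the Todai Robot’s method; real Textual Entailment Recognition systems rely on much deeper linguistic analysis, and the entails function, its overlap threshold, and the sample knowledge string are all invented for this example.

```python
# Illustrative only: a toy lexical-overlap entailment check, not the
# method used by the Todai Robot project.

def entails(text: str, hypothesis: str, threshold: float = 0.8) -> bool:
    """Guess whether `text` supports `hypothesis` by word overlap."""
    text_words = set(text.lower().split())
    hyp_words = set(hypothesis.lower().split())
    if not hyp_words:
        return True
    overlap = len(hyp_words & text_words) / len(hyp_words)
    return overlap >= threshold

# A knowledge-style exam question then reduces to: does the knowledge
# base entail the statement in the problem text?
knowledge = "The Meiji Restoration restored imperial rule to Japan in 1868."
print(entails(knowledge, "Imperial rule was restored to Japan in 1868."))  # True
print(entails(knowledge, "The shogunate was founded in 1868."))            # False
```

The obvious failure modes of this baseline (negation, paraphrase, word order) are exactly the “grasp meaning in a broad spectrum” problems the researchers describe.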

The researchers now plan to shift their focus to studies related to the academic skills needed for written responses.

Source: Japan Times



Comments

Menno Mafait · November 18, 2016 · 2:05 am

As long as scientists fail to define intelligence in a natural way, the field of AI and knowledge technology remains engineering (specific solutions to specific problems) rather than a science (generic solutions).

Actually, AI scientists made a fundamental mistake 60 years ago:

Intelligence and language are natural phenomena. Natural phenomena obey laws of nature. And laws of nature are investigated using fundamental science (algebra). However, the field of AI and knowledge technology is researched using cognitive science (simulation of behavior).

A consequence of this fundamental mistake:

• AI is programmed intelligence rather than an artificial implementation of natural intelligence;
• In knowledge technology, artificial structures are applied to keywords, while the natural structure of sentences is ignored. By ignoring this structure provided by nature, the field of knowledge technology got stuck processing “bags of keywords” and unstructured texts, while scientists fail to define the logical function of even the most basic word types.

I am elevating this field from engineering (using artificial structures) to a science (using natural structures embedded in grammar). I am using fundamental science (algebra) instead of cognitive science (simulation of behavior):

• I have defined intelligence in a natural way (http://mafait.org/intelligence/);
• I have discovered a relationship between natural intelligence and natural language;
• I am implementing these (Natural Laws of) Intelligence embedded in Grammar in software;
• And I defy anyone to beat the simplest results of my natural language reasoner in a generic way (i.e., through algorithms): http://mafait.org/challenge/.

It is open source software, so everyone is invited to join.

Menno Mafait · November 17, 2016 · 9:29 pm

Why robots fail to reason logically in natural language

We know very little of the logic of language. For centuries, algebra has been limited to supporting reasoning with the verb “is/are” in the present tense, as in:

> Given: “John is a father.”
> Given: “Every father is a man.”

• Logical conclusion:
< “John is a man.”
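As a toy illustration (my own Python sketch, not the commenter’s reasoner), this classic syllogism can be derived by forward chaining over simple is-a facts; the fact and rule tuples below are invented for the example:

```python
facts = {("John", "father")}   # "John is a father."
rules = {("father", "man")}    # "Every father is a man."

def conclude(facts, rules):
    """Forward-chain: apply rules to facts until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for subject, category in list(derived):
            for premise, conclusion in rules:
                if category == premise and (subject, conclusion) not in derived:
                    derived.add((subject, conclusion))
                    changed = True
    return derived - facts

for subject, category in sorted(conclude(facts, rules)):
    print(f"{subject} is a {category}.")   # -> John is a man.
```

Real reasoners generalize far beyond this single rule shape, but the loop shows the mechanical core of the “is/are” inference.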

But we are also capable of possessive reasoning – using the possessive verb “has” – as in:

> Given: “Paul is a son of John.”

• Logical conclusion:
< “John has a son, called Paul.”

And we are able to reason in the past tense, as in:

> Given: “James was the father of Peter.”

• Logical conclusions:
< “Peter has no father anymore.”
< “Peter had a father, called James.”
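Both the possessive inversion and the past tense conclusions above can be mimicked by rewriting structured facts. The following is a minimal sketch under my own invented (subject, relation, object, tense) representation, not the commenter’s software:

```python
# Each fact is a (subject, relation, object, tense) tuple -- a toy
# representation invented for this illustration.
INVERSE = {"son of": "has a son, called"}   # possessive inversion table

def conclusions(fact):
    """Rewrite one fact into the conclusions illustrated above."""
    subject, relation, obj, tense = fact
    results = []
    if relation in INVERSE and tense == "present":
        # "Paul is a son of John." -> "John has a son, called Paul."
        results.append(f"{obj} {INVERSE[relation]} {subject}.")
    if relation == "father of" and tense == "past":
        # "James was the father of Peter." -> both past-tense conclusions.
        results.append(f"{obj} has no father anymore.")
        results.append(f"{obj} had a father, called {subject}.")
    return results

for fact in [("Paul", "son of", "John", "present"),
             ("James", "father of", "Peter", "past")]:
    for line in conclusions(fact):
        print(line)
```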

So, why doesn’t algebra support past tense reasoning – or possessive reasoning – in a natural way? Why should any predicate beyond the present tense verb “is/are” be described in an artificial way? Why is algebra still not equipped for linguistics after centuries of scientific research?

And even though algebra describes the Exclusive OR (XOR) function in a natural way, automated reasoners still don’t implement its linguistic equivalent, the conjunction “or”. As a result, automated reasoners are unable to generate the following question:

> Given: “Every person is a man or a woman.”
> Given: “Addison is a person.”

• Logical question:
< “Is Addison a man or a woman?”
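For illustration only, here is a toy Python generator (my own sketch, not the commenter’s reasoner) that produces exactly this kind of “or” question from a disjunctive rule and a category fact; the disjunctions and known dictionaries are invented for the example:

```python
# "Every person is a man or a woman."  +  "Addison is a person."
disjunctions = {"person": ("man", "woman")}   # invented example data
known = {"Addison": "person"}

for name, category in known.items():
    alternatives = disjunctions.get(category)
    if alternatives:
        options = " or a ".join(alternatives)
        print(f"Is {name} a {options}?")      # -> Is Addison a man or a woman?
```

The design point is the one the commenter raises: the question only follows if the alternatives are modeled as an exclusive disjunction rather than as opaque keywords.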

So, even 60 years after the start of this field, knowledge technology still has a fundamental problem:

Words like the definite article “the”, the conjunction “or”, the possessive verb “has/have” and the past tense verbs “was/were” and “had” have a naturally intelligent function in language. However, that function is not described in any scientific paper; apparently, scientists do not understand it.

I defy anyone to beat the simplest results of my natural language reasoner in a generic way (i.e., through algorithms): http://mafait.org/challenge/.

It is open source software, so everyone is invited to join.


