Incredible new faculties that reasoning offers to knowledge

Not only can reasoning produce new information about a given situation and trigger relevant decisions, it also allows an automatic critical analysis of knowledge. That is, it makes it possible both to communicate the knowledge used and to validate it, to say whether it is good. And it does so by providing original services to those who run it: explanation, detection of contradictions, and logical simulation.

I – The faculty of explanation: there is no explanation without reasoning, and vice versa!

There is no explanation without reasoning, and no reasoning without explanation! I insist on this for my academic colleagues in AI research who refuse to work on human logic and try desperately to invent « logics » that they alone would understand: modal, epistemic, temporal, fuzzy, first-order, etc. Unfortunately, all of them are unable to provide the slightest explanation of how they work. Just like calculation …

Let’s take the three rules already seen:

1st rule: IF it is a living thing THEN it is mortal
2nd rule: IF it is a man THEN it is a living thing
3rd rule: IF its name = « Socrates » THEN it is a man

1) Here are the explanations drawn from these rules in forward chaining:

  • Its name is Socrates. Is it immortal?
  • no
  • Why?
  • because it is a living thing, and IF it is a living thing THEN it is mortal
  • why is it a living thing?
  • because it is a man, and IF it is a man THEN it is a living thing
  • why is it a man?
  • because IF its name = « Socrates » THEN it is a man
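The mechanics of such a dialogue are easy to reproduce. Here is a minimal sketch in Python of a forward chainer that records, for each deduction, the rule that produced it, so that every « why? » can be answered by walking the trace backwards. The rule encoding and function names are illustrative assumptions, not the actual Pandora engine.

```python
# Minimal forward chainer with an explanation trace (illustrative sketch,
# not the Pandora engine). Rules are (premise, conclusion) pairs.

RULES = [
    ("it is a living thing", "it is mortal"),   # 1st rule
    ("it is a man", "it is a living thing"),    # 2nd rule
    ('its name = "Socrates"', "it is a man"),   # 3rd rule
]

def forward_chain(facts):
    """Fire rules until saturation, remembering WHY each fact appeared."""
    why = {}                                    # fact -> rule that produced it
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                why[conclusion] = (premise, conclusion)
                changed = True
    return facts, why

def explain(fact, why):
    """Answer the chain of « why? » questions, as in the dialogue above."""
    while fact in why:
        premise, conclusion = why[fact]
        print(f"because {premise}, and IF {premise} THEN {conclusion}")
        fact = premise

facts, why = forward_chain({'its name = "Socrates"'})
explain("it is mortal", why)
# because it is a living thing, and IF it is a living thing THEN it is mortal
# because it is a man, and IF it is a man THEN it is a living thing
# because its name = "Socrates", and IF its name = "Socrates" THEN it is a man
```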

2) And here are those in backward chaining:

  • this thing, is it mortal?
  • Is it a living thing?
  • why this question?
  • because IF it is a living thing THEN it is mortal (explanation on the question)
  • OK, my answer is yes
  • so it is mortal
  • why this deduction?
  • because IF it is a living thing THEN it is mortal (explanation on the deduction)
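Backward chaining can be sketched the same way: the engine starts from the goal, chains back through the rules, and asks the user only for facts that no rule can deduce, which is exactly what lets it explain both its questions and its deductions. The dialogue helpers below are assumptions for illustration, not the Pandora Conversational.

```python
# Minimal backward chainer with explanations on questions and deductions
# (an illustrative sketch, not the Pandora Conversational).

RULES = [
    ("it is a living thing", "it is mortal"),
    ("it is a man", "it is a living thing"),
    ('its name = "Socrates"', "it is a man"),
]

def prove(goal, facts, asked):
    """Establish `goal` by chaining back through the rules; ask the
    user only for facts that no rule can deduce."""
    if goal in facts:
        return True
    deducible = False
    for premise, conclusion in RULES:
        if conclusion == goal:
            deducible = True
            if prove(premise, facts, asked):
                # explanation on the deduction
                print(f"so {conclusion} (because IF {premise} THEN {conclusion})")
                facts.add(conclusion)
                return True
    if not deducible and goal not in asked:
        asked.add(goal)
        for premise, conclusion in RULES:
            if premise == goal:
                # explanation on the question
                print(f"(asking because IF {premise} THEN {conclusion})")
        if input(f"{goal}? yes/no> ").strip().lower() == "yes":
            facts.add(goal)
            return True
    return False

prove("it is mortal", set(), set())
```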

3) Explanation in forward chaining on the contraposition:

  • it is immortal, what do you deduce?
  • that it is not a living thing, that it is not a man, that it is not Socrates
  • why is it not Socrates?
  • because IF it is immortal THEN it is not a living thing, IF it is not a living thing THEN it is not a man, and IF it is not a man THEN it is not Socrates
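Contraposition is mechanical: from IF A THEN B, derive IF not B THEN not A, then forward-chain on the negated facts. A minimal sketch, with a naive string encoding of negation as the only assumption:

```python
# Contraposition: each rule IF A THEN B mechanically yields the
# contrapositive IF not B THEN not A (illustrative sketch).

RULES = [
    ("it is a living thing", "it is mortal"),
    ("it is a man", "it is a living thing"),
    ('its name = "Socrates"', "it is a man"),
]

def negate(fact):
    """Naive string encoding of negation, assumed for this sketch."""
    return fact[4:] if fact.startswith("not ") else "not " + fact

CONTRAPOSITIVES = [(negate(b), negate(a)) for a, b in RULES]

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                print(f"IF {premise} THEN {conclusion}")
                facts.add(conclusion)
                changed = True
    return facts

# "it is immortal" is encoded as the negation of "it is mortal".
forward_chain({"not it is mortal"}, CONTRAPOSITIVES)
# IF not it is mortal THEN not it is a living thing
# IF not it is a living thing THEN not it is a man
# IF not it is a man THEN not its name = "Socrates"
```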

In backward chaining, the explanation can be very long, because the goal is known and can be linked to the current question no matter how far apart they are.

You will note that, in reasoning AI, the explanation surfaces knowledge that the user may not know. He therefore learns it. If this knowledge is not disputed, then the explanation is clear.

4) Demonstrations of automated explanations

Explanations are very easy to automate with Pandora-type expert systems. Do you want to see some examples (sorry, in French)? Go back to the payroll application I mentioned in the previous chapter and do not fill in the initial entry screen. Since the program has no data to process, it will have to ask for them, because a result is requested. We then find ourselves in the Conversational.

You will see that you can ask « Why? » on each question asked. A click on this Why displays the reasoning and the knowledge that led to asking the question. There is also a « How? », which appears if you click on a deduction; it is used to have the deductions explained. Go to the end of the dialogue to see the deductions appear as you go along. Then click on these deductions, starting from the last one: net salary to pay. You will understand, step by step, how the reasoning progressed and why the Conversational asked you all these questions.

II – The contradiction: reasoning = contradiction, and vice versa

There is no reasoning without the possibility of contradiction. I insist on this, again for the benefit of my fellow researchers whose « logics » are unable to reveal contradictions. Contradiction is the famous « critical reasoning » that Minsky was talking about in 1956 when he defined what an artificial intelligence should be. It is the intuitive way, within everyone's reach, to detect either that a reasoning is bad, or that the knowledge used is wrong.

Example: it is noon, and it is night. Since I know that IF it is noon THEN it is day, and that it cannot be both day and night, there is a contradiction! In the same way, thanks to contraposition, I deduce IF it is night THEN it is not noon. A new contradiction. The reasoning that brought me there is unworkable; no need to pursue it. A contradiction can have only two possible causes: a fact is wrong, or a piece of knowledge is wrong.

1st case: erroneous fact – If I misread the clock and it is in fact midnight (the clock shows 12 in both cases …), the mistake comes from me. I only have to correct it and resume the reasoning, which will then run to completion. It is thanks to the contradiction that I realized my mistake in observing the facts.

2nd case: erroneous knowledge – If I am at the North Pole and it is winter, it is true that at noon it is dark! In this case, it is the knowledge that is wrong and must be revised: when it is noon, it is day, except at the North Pole in winter. Or, in the form of a rule: IF it is noon AND we are not at the North Pole in winter THEN it is day. To be complete, the Maïeutique will also require us to say: when it is noon, it is night at the North Pole in winter. Or, in the form of a rule: IF it is noon AND we are at the North Pole in winter THEN it is dark. Once again, it is the contradiction that allowed me to pinpoint the error.
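Both cases are easy to reproduce in code. The sketch below declares day and night mutually exclusive, checks for a contradiction after every deduction, and shows how the revised pair of rules removes it; the fact encoding is an illustrative assumption:

```python
# Contradiction detection (illustrative sketch of the noon/night example).

EXCLUSIVE = {("it is day", "it is night")}      # cannot both be true

def check(facts):
    for a, b in EXCLUSIVE:
        if a in facts and b in facts:
            raise ValueError(f"contradiction: {a} AND {b}")

# Naive knowledge: IF it is noon THEN it is day.
naive_rules = [(["it is noon"], "it is day")]

# Revised knowledge, with the complement the Maieutique requires.
revised_rules = [
    (["it is noon", "not at the North Pole in winter"], "it is day"),
    (["it is noon", "at the North Pole in winter"], "it is night"),
]

def chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in facts for p in premises) and conclusion not in facts:
                facts.add(conclusion)
                check(facts)                    # test after every deduction
                changed = True
    return facts

# Erroneous knowledge: the observed facts are right, the rule is too broad.
try:
    chain({"it is noon", "it is night"}, naive_rules)
except ValueError as e:
    print(e)    # contradiction: it is day AND it is night

# With the revised rules, the same observation is consistent.
print(chain({"it is noon", "it is night",
             "at the North Pole in winter"}, revised_rules))
```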

Demonstrations of automated detection of contradictions

Want to test a contradiction in AI software? To do this, we must be able to provide facts that lead to conflicting conclusions, or a fact that contradicts a deduction. The trouble is that, since the Conversational generates only relevant questions, it is difficult to put it in a position of contradiction. There is however an interesting way to make one appear in the payroll expert system you have already tested, thanks to the entry screen, which allows you to enter several data at a time.

Click here to find the mini-payroll. To get a contradiction, enter the following:

  • you did not work at all this month: 0 hours (« number of hours worked in the month »)
  • you have been on leave for 170 hours or more, although this is not possible since the month has only 169 hours in this software (« paid leave period »). There you have surely made a mistake: being on leave longer than the month itself is not possible. But how to explain it? You will see that the software explains it, in an unexpected and original way.
  • you are an employee or a worker (it is this information that will trigger the contradiction)
  • no need to complete the rest of the form; this information is enough to trigger a contradiction. Click « Validation ».

Read the contradiction that appears: it is over the hours to be paid that the software has a problem. This contradiction signals to the user that he was mistaken about his hours of leave, but it also signals to the developer an inconsistency that he should have foreseen and filtered in the input screen: a prohibition on entering a number of hours of paid leave greater than the number of hours in the month.
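For illustration, here is a hypothetical sketch of that input filter; the 169-hour month and the field meanings come from the text, everything else is assumed:

```python
# Hypothetical input-screen filter for the payroll example: reject the
# inconsistency before reasoning even starts. Not the actual software.

MONTH_HOURS = 169          # hours in the month, per the text

def validate_entry(hours_worked, paid_leave_hours):
    """Filter out impossible entries before the expert system runs."""
    if paid_leave_hours > MONTH_HOURS:
        raise ValueError(
            f"paid leave ({paid_leave_hours} h) cannot exceed "
            f"the {MONTH_HOURS} h of the month")
    if hours_worked + paid_leave_hours > MONTH_HOURS:
        raise ValueError("worked + leave hours exceed the month")

try:
    validate_entry(hours_worked=0, paid_leave_hours=170)
except ValueError as e:
    print("rejected at entry:", e)
```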

III – The logical simulation: reasoning = possibility of simulating cases

Imagine that you are a man and you ask a doctor over the phone to diagnose a stomach pain. He asks you a number of questions, and suddenly you realize that he thought you were talking about your own belly, whereas you were talking about your wife's. For the doctor, it is not the same thing! As you may know, women can have stomach aches from a certain age to a certain age, at certain times. It even seems that this can affect their character …

The doctor, having reasoned about a male case, will have to set aside the track of the prostate, a typically male organ, while keeping in memory the symptoms already communicated, which may be useful for a woman. Since he is intelligent, there is no question of restarting the diagnosis from zero. If he did, he would waste time and annoy his patient with useless questions. Instead, he will exploit what he already knows, not ask the same questions again, and continue his reasoning. For example, he will then ask the patient's age to check whether it may be a pain due to her periods.

Reasoning AI knows how to do that, but a classic program does not: it will force you to start all over again. A logical program will handle it: it is enough to modify the answer(s) and restart the reasoning. The dialogue will resume with the same relevance as before, although the situation has changed, or will conclude immediately.
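The trick is simply to cache every answer: change one of them and replay the reasoning, and only the questions made newly relevant are asked. A minimal sketch of this simulation mechanism, with the doctor dialogue reduced to two hypothetical questions:

```python
# Logical simulation: the engine caches every answer, so changing one
# fact replays the reasoning without re-asking the rest. The questions
# and the cache mechanism are hypothetical, for illustration only.

class Session:
    def __init__(self):
        self.answers = {}                   # question -> cached answer

    def ask(self, question):
        if question not in self.answers:    # ask only what is still unknown
            self.answers[question] = input(question + " ")
        return self.answers[question]

    def diagnose(self):
        sex = self.ask("Man or woman?")
        if sex == "man":
            self.ask("Any prostate trouble?")   # male-only track
        else:
            self.ask("How old is she?")         # pain due to periods?
        return "end of diagnosis"

s = Session()
s.diagnose()                            # first run: every question is asked
s.answers["Man or woman?"] = "woman"    # correct a single answer
s.diagnose()                            # replay: only the new question is asked
```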

Demonstrations of logical simulations

Want to test the simulation capability of intelligent software? Return to the payroll expert system stuck in its contradiction and replace the 170 hours of paid leave with a lower number. You will see the reasoning and its questions resume. This time it will run to the end: « net salary to pay ».

You can also put more than 169 hours in the number of hours worked in the month, but change the position of the employee by switching it from « employee » to « cadre » (manager). There will be no more contradiction. It is up to you to understand why.
