TkbRewardPunishment {FKBL}                                R Documentation

Reward and Punishment. Tweaks a Knowledge Base

Description

This algorithm is called Reward and Punishment. It is the same as TkbRewardPunishmentL, except that its parameters are not packed in a list.

It takes a knowledge base and tweaks its weights to better fit the given training data. The idea is to check every training case and find the rule used to infer it. If that rule was right, it is rewarded with an "etaMore" value; if it made a mistake, it is punished with an "etaLess" value. Rewarding or punishing a rule means raising or lowering its associated weight. By lowering a weight again and again, a rule loses more and more importance, so rules that make mistakes appear less and less often as the winning rule in a one-winner inference method; conversely, raising a weight makes a rule win more often. The final result is a tweaked rule set that is likely better adapted to the actual problem. This algorithm is described in chapter 3, pages 39-48 of Ishibuchi et al.
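The weight-update loop described above can be sketched as follows. This is a minimal illustration of the reward-and-punishment idea, not the FKBL implementation: the function name, the flat weight vector, and the precomputed winning-rule indices are simplifying assumptions, since in FKBL the weights live inside the knowledge base and the winning rule is found by inference on each training case.

```r
# Sketch of reward and punishment on rule weights (hypothetical helper,
# not part of FKBL). For each training case we know which rule won and
# whether it classified the case correctly.
reward_punishment_sketch <- function(weights, winners, correct,
                                     itera, etaMore, etaLess) {
  # weights: numeric vector of rule weights
  # winners: for each training case, the index of the winning rule
  # correct: logical, whether the winning rule was right for that case
  for (i in seq_len(itera)) {
    for (k in seq_along(winners)) {
      r <- winners[k]
      if (correct[k]) {
        # reward: raise the winning rule's weight (capped at 1)
        weights[r] <- min(1, weights[r] + etaMore)
      } else {
        # punish: lower the winning rule's weight (floored at 0)
        weights[r] <- max(0, weights[r] - etaLess)
      }
    }
  }
  weights
}

w <- reward_punishment_sketch(weights = c(0.5, 0.5),
                              winners = c(1, 2, 2),
                              correct = c(TRUE, FALSE, FALSE),
                              itera   = 1,
                              etaMore = 0.001,
                              etaLess = 0.1)
# rule 1 was rewarded once, rule 2 was punished twice
```

Note the asymmetry typical of this scheme: etaLess is usually much larger than etaMore, so mistakes are penalized far more strongly than correct inferences are rewarded.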

Usage

 TkbRewardPunishment(kB, itera, etaMore, etaLess, train)

Arguments

Takes the knowledge base, the maximum number of iterations, the etaMore, the etaLess and the training data.

kB The knowledge base to tweak.
itera The maximum number of iterations.
etaMore The reward to a rule.
etaLess The punishment to a rule.
train The training data.

Value

Returns the tweaked knowledge base.

Source

  • Ishibuchi, H., Nakashima, T., Nii, M. "Classification and modeling with linguistic information granules." Soft Computing Approaches to Linguistic Data Mining. Springer-Verlag, 2003.

Examples

 data(kB)
 data(trainA)
 TkbRewardPunishment(kB, 1000, 0.001, 0.1, trainA)

    [Package FKBL version 0.50-4 Index]