Why batch learning is slower
    Nathan Intrator
    nin at cns.brown.edu
    Thu Mar 19 10:59:36 EST 1992

           "Why Batch Learning is Slower Than Per-Sample Learning"
                           Thomas H. Hildebrandt
      From the abstract:
      "...For either algorithm, convergence is guaranteed as long as no
      step exceeds the minimum ideal step size by more than a factor of 2.
      By limiting the discussion to a fixed, safe step size, we can compare
      the maximum step that can be taken by each algorithm in the worst case."
      -------
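To see where the factor of 2 comes from, consider a quick sketch (my own,
assuming a one-dimensional quadratic, not an example from the paper): for
f(w) = (L/2) w^2 the fixed-step gradient update is w <- (1 - lr*L) w, so the
ideal step is 1/L and convergence holds exactly when lr < 2/L, i.e. within a
factor of 2 of the ideal.

# Sketch, assuming f(w) = (L/2)*w^2 (not an example from the paper).
# The update w <- w - lr*L*w = (1 - lr*L)*w shrinks |w| iff |1 - lr*L| < 1,
# i.e. iff 0 < lr < 2/L; the ideal step 1/L converges in a single update.
L = 4.0
for lr in (0.25, 0.45, 0.55):  # the ideal 1/L, below 2/L, above 2/L
    w = 1.0
    for _ in range(50):
        w -= lr * L * w
    print(f"lr = {lr:.2f} -> |w| after 50 steps = {abs(w):.3g}")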
There is no "FIXED safe step size" for the stochastic version, namely there is
no convergence proof for a fixed learning rate of the stochastic version.
The paper cited by Chung-Ming Kuan and Kurt Hornik does not imply that either.
It is therefore difficult to draw conclusions from this paper.
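
To make the objection concrete, here is a sketch (my own, assuming a simple
least-squares objective with made-up data, nothing from the paper): with the
same fixed learning rate, the batch gradient drives the weight to the
minimum, while per-sample updates keep injecting sampling noise and only
settle into a neighborhood whose size is set by the step size.

import random

random.seed(0)
# Made-up data: f(w) = (1/2n) * sum_i (w - y_i)^2, minimized at w = mean(y).
ys = [2.0 + random.gauss(0.0, 1.0) for _ in range(200)]
w_star = sum(ys) / len(ys)
lr = 0.1  # the same fixed step for both algorithms

# Batch: one step per full pass, using the exact average gradient.
w_batch = 0.0
for _ in range(100):
    w_batch -= lr * sum(w_batch - y for y in ys) / len(ys)

# Per-sample: one step per example; each uses a noisy one-sample gradient.
w_sgd = 0.0
for _ in range(100):
    for y in ys:
        w_sgd -= lr * (w_sgd - y)

print(f"minimum    w* = {w_star:.4f}")
print(f"batch GD   w  = {w_batch:.4f}   (converges to w*)")
print(f"per-sample w  = {w_sgd:.4f}   (hovers in a noise ball around w*)")

Shrinking lr shrinks the neighborhood but never removes it; the standard fix
is a decreasing schedule in the sense of Robbins-Monro, which is exactly why
a single fixed "safe" rate cannot be assumed for the stochastic version.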
 - Nathan
    
    