DIMACS Challenge neural net papers

Arun Jagota jagota at next1.msci.memst.edu
Mon Feb 14 20:18:56 EST 1994


Dear Connectionists:

Expanded versions of two neural net papers presented at the DIMACS Challenge 
on Cliques, Coloring, and Satisfiability are now available via anonymous ftp
(see below). First an excerpt from the Challenge announcement back in 1993:

			----------------------
The purpose of this Challenge is to encourage high quality empirical
research on difficult problems.  The problems chosen are known to be
difficult to solve in theory. How difficult are they to solve in practice?  
			----------------------

ftp ftp.cs.buffalo.edu (or 128.205.32.9, subject to change)
Name : anonymous
> cd users/jagota
> binary
> get DIMACS_Grossman.ps.Z
> get DIMACS_Jagota.ps.Z
> quit
> uncompress *.Z

Sorry, no hard copies are available. Those without access to ftp, or for whom
ftp fails, may request copies by electronic mail to me
(jagota at next1.msci.memst.edu). Please use this only as a last resort.

		Applying The INN Model to the MaxClique Problem

		  Tal Grossman, email: tal at goshawk.lanl.gov
      Complex Systems Group, T-13, and Center for Nonlinear Studies
      		MS B213, Los Alamos National Laboratory
			 Los Alamos, NM 87545
 
		  Los Alamos Tech Report: LA-UR-93-3082

A neural network model, the INN (Inverted Neurons Network), is applied to
the Maximum Clique problem. First, I describe the INN model and how it 
implements a given graph instance. The model has a threshold parameter $t$, 
which determines the character of the network stable states. As shown in an 
earlier work (Grossman-Jagota), the stable states of the network correspond 
to the $t$-codegree sets of its underlying graph, and, in the case of $t<1$, 
to its maximal cliques. These results are briefly reviewed. In this work I 
concentrate on improving the deterministic dynamics called $t$-annealing.
The main issue is the initialization procedure and the choice of parameters.
Adaptive procedures for choosing the initial state of the network and
setting the threshold are presented. The result is the ``Adaptive t-Annealing''
algorithm (AtA). This algorithm is tested on many benchmark problems and found
to be more efficient than steepest descent or the simple $t$-annealing procedure.
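As a rough illustration of the kind of dynamics the abstract describes (not the paper's INN energy, its $t$-codegree characterization, or the AtA algorithm itself): an asynchronous descent on a penalized clique energy whose fixed points are exactly the maximal cliques when the conflict penalty exceeds 1. The function name, the energy form, and the penalty `alpha` are illustrative assumptions, not taken from the paper.

```python
import random

def descend_to_maximal_clique(n, edges, alpha=1.5, seed=0):
    """Asynchronous descent on E(s) = -sum_i s_i + alpha * (# active
    non-adjacent pairs). For alpha > 1 the fixed points are exactly the
    maximal cliques: a simplified stand-in for the stable-state result
    sketched in the abstract, not the paper's t-annealing procedure."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    s = [0] * n                      # 0/1 state of each "neuron" (vertex)
    changed = True
    while changed:                   # energy strictly decreases, so this halts
        changed = False
        for i in rng.sample(range(n), n):
            conflicts = sum(1 for j in range(n)
                            if s[j] and j != i and j not in adj[i])
            if s[i] == 0 and conflicts == 0:
                s[i] = 1             # joining costs nothing: E drops by 1
                changed = True
            elif s[i] == 1 and alpha * conflicts > 1:
                s[i] = 0             # shedding conflicts lowers E
                changed = True
    return {i for i in range(n) if s[i]}
```

Because every accepted flip lowers the (bounded) energy, the loop terminates, and at a fixed point no active vertex has a conflict (so the result is a clique) and no inactive vertex is conflict-free (so it is maximal).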


		Approximately Solving Maximum Clique using 
		  Neural Network and Related Heuristics *

	   Arun Jagota 				Laura Sanchis 
     Memphis State University		      Colgate University

			   Ravikanth Ganesan 
		State University of New York at Buffalo

We explore neural network and related heuristic methods for the fast 
approximate solution of the Maximum Clique problem. One of these algorithms, 
{\em Mean Field Annealing}, is implemented on the Connection Machine CM-5 and 
a fast annealing schedule is experimentally evaluated on random graphs, as 
well as on several benchmark graphs. The other algorithms, which perform 
certain randomized local search operations, are evaluated on the same 
benchmark graphs, and on {\bf Sanchis} graphs. One of our algorithms adjusts 
its internal parameters as its computation evolves. On {\bf Sanchis} graphs,
it finds significantly larger cliques than the other algorithms do. Another 
algorithm, GSD$(\emptyset)$, works best overall, but is slower than the 
others. All our algorithms obtain significantly larger cliques than simpler 
heuristics but run slightly slower; they obtain significantly smaller 
cliques on average than exact algorithms or more sophisticated heuristics but 
run considerably faster. All our algorithms are simple and inherently 
parallel.

* - 24 pages in length (twice as long as its previous version). 
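For readers unfamiliar with the Mean Field Annealing technique the abstract names: the idea is to relax the 0/1 neuron states to probabilities, iterate the mean-field (sigmoid) equations at each temperature, and slowly cool so the relaxed state hardens into a discrete one. The sketch below applies this to a generic penalized clique energy; the energy form, schedule, and names are illustrative assumptions, not the paper's CM-5 implementation or its experimentally tuned schedule.

```python
import math
import random

def mean_field_clique(n, edges, alpha=1.5, t_start=1.0, t_end=0.01,
                      cool=0.9, sweeps=20, seed=0):
    """Mean-field annealing on E(v) = -sum_i v_i + alpha * sum of v_i*v_j
    over non-adjacent pairs. A generic sketch of the technique, not the
    paper's specific algorithm."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # start near the uniform point, with tiny noise to break symmetry
    v = [0.5 + 0.01 * (rng.random() - 0.5) for _ in range(n)]
    T = t_start
    while T > t_end:
        for _ in range(sweeps):      # relax toward a mean-field fixed point
            for i in range(n):
                field = 1.0 - alpha * sum(v[j] for j in range(n)
                                          if j != i and j not in adj[i])
                x = max(min(field / T, 50.0), -50.0)   # guard exp overflow
                v[i] = 1.0 / (1.0 + math.exp(-x))
        T *= cool                    # geometric cooling schedule
    return {i for i in range(n) if v[i] > 0.5}
```

At high temperature the sigmoids stay soft and the state explores; as T shrinks they saturate toward 0/1, and thresholding at 0.5 reads off a discrete clique.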

Arun Jagota
