NIPS90 VLSI workshop

Jim Burr burr at mojave.stanford.edu
Sat Nov 24 00:26:13 EST 1990



To everyone I've contacted about the NIPS90 VLSI workshop: thanks for your
help! It's shaping up to be a great session. Special thanks to those who
have volunteered to give presentations.

Workshop 8, on VLSI Neural Networks, is being held Saturday, Dec 1 at
Keystone.  Related workshops are Workshop 7, on implementations of neural
networks on digital, massively parallel computers, and Workshop 9, on
optical implementations.

Abstract:		8. VLSI Neural Networks

			Jim Burr
			Stanford University
			Stanford, CA 94305
			(415) 723-4087
			burr at mojave.stanford.edu

This one-day workshop will address the latest advances in VLSI
implementations of neural nets. How successful have implementations been so
far? Are dedicated neurochips being used in real applications?  What
algorithms have been implemented? Which ones have not been, and why not?
How important is on-chip learning? How much arithmetic precision is
necessary? Which is more important, capacity or performance? What are the
issues in constructing very large networks? What are the technology scaling
limits? Are there any new technology developments?

Several invited speakers will address these and other questions from various
points of view as they discuss their current research. We will try to gain
better insight into the strengths and limitations of dedicated hardware
solutions.

Agenda:
	morning:

	1. review of new chips
		capacity
		performance
		power
		learning
		architecture

	2. guidelines for reporting results - recommendation at evening session
		specify technology, performance, and power if possible
			translate power into joules/connection
			or joules/update (see the sketch after this agenda)

	3. analog vs digital - the debate goes on

	4. on-chip learning - who needs it

	5. locality - who needs it (Boltzmann vs backprop)

	6. precision - how much

	7. leveraging tech scaling

	evening:

	1. large networks - how big
		memory
		power
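
As a footnote to agenda item 2: here is a minimal sketch of the suggested
power-to-energy conversion, written in Python with purely illustrative
numbers (none of these figures come from a real chip report):

	# Minimal sketch of the conversion in agenda item 2 above.
	# All numbers are illustrative, not measurements from any chip.
	def joules_per_connection(power_watts, connections_per_second):
	    # energy per connection (J) = power (W) / throughput (conn/s)
	    return power_watts / connections_per_second

	# A chip dissipating 1 W at 100 million connections/s spends
	# 10 nJ per connection:
	print(joules_per_connection(1.0, 100e6))  # -> 1e-08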

Here are some of the issues we will discuss during the workshop:

	- What is the digital/analog tradeoff for storing weights?

	- What is the digital/analog tradeoff for doing inner products?

	- What is the digital/analog tradeoff for multichip systems?

	- Is on-chip learning necessary?

	- How important is locality?

	- How much precision is needed in a digital system?

	- What capabilities can we expect in 2 years? 5 years?

	- What are the biggest obstacles to implementing LARGE networks?
	  Capacity? Performance? Power? Connectivity?

Presenters:

Kristina Johnson, UC Boulder		electro-optical networks
Josh Alspector, Bellcore		analog Boltzmann machines
Andy Moore, Caltech			subthreshold (with Video!)
Edi Saeckinger, AT&T			NET32K
Tom Baker, Adaptive Solutions		precision
Hal McCartor, Adaptive Solutions	the X1

Chip presenters: please consider mentioning the following:

	- technology (e.g., 2.0 micron CMOS, 0.8 micron GaAs)
	- capacity in connections and neurons
	- performance in connections per second
	- energy per connection (power dissipation)
	- on-chip learning? (updates per second)
	- scalability? how large a network?
	- a few words on tools
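
If it helps, here is a minimal sketch (in Python) of a per-chip summary
covering the points above; every field name and value is hypothetical,
not taken from any of the chips on the program:

	# Hypothetical per-chip summary covering the fields listed above.
	chip_report = {
	    "technology": "2.0 micron CMOS",   # process technology
	    "connections": 64000,              # capacity: connections
	    "neurons": 256,                    # capacity: neurons
	    "cps": 100e6,                      # performance: connections/second
	    "power_watts": 1.0,                # power dissipation
	    "on_chip_learning": True,          # if so, report updates/second
	    "updates_per_second": 10e6,
	    "tools": "a few words here",       # design tools used
	}
	# Derived figure of merit: energy per connection, in joules.
	chip_report["joules_per_connection"] = (
	    chip_report["power_watts"] / chip_report["cps"])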

See you at the workshop!

	Jim.

