Connectionists: LLMs and the Risks of AI

Ali Minai minaiaa at gmail.com
Tue Apr 4 00:27:03 EDT 2023


Dear Colleagues,

There's a lot of discussion about whether the new LLMs indicate that we are
entering a risky phase in AI. Having thought a lot about that topic for
many years, I tried to summarize my thoughts in this piece - written for a
more lay audience than this list. While it's unlikely that any sort of real
regulation will - or should - occur in the very short term, I came to the
conclusion that the risks are very real and not as broadly appreciated in
the AI community as they should be. In particular, I don't think that many
people in AI appreciate the inherent and irreducible hazard of systems
whose emergent behaviors are not undesirable side effects to be suppressed
but the very purpose of the system. How we connect such systems to the real
world has to be thought out really, really carefully in ways that go beyond
the confidence that we will be able to make such AI safe, harmless,
aligned, or human-compatible. Rather, I feel we'll need to make human
society and our systems antifragile to AI, to borrow Taleb's term. How
do we do that? I think that's a discussion worth having.

https://3quarksdaily.com/3quarksdaily/2023/04/thinking-through-the-risks-of-ai.html

Any comments - and especially critiques to set me straight - are most
welcome.

Best
Ali


*Ali A. Minai, Ph.D.*
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical & Computer Engineering
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minaiaa at gmail.com

WWW: https://eecs.ceas.uc.edu/~aminai/