Fwd: Humane Intelligence launches a NIST supported, nation-wide generative AI red teaming event

Artur Dubrawski awd at cs.cmu.edu
Thu Aug 22 02:44:21 EDT 2024


Certainly of interest, and it could be a fun-loaded experience.

Artur

---------- Forwarded message ---------
From: Ramayya Krishnan <rk2x at cmu.edu>
Date: Wed, Aug 21, 2024, 11:07 PM
Subject: Fwd: Humane Intelligence launches a NIST supported, nation-wide
generative AI red teaming event
To: <block-lead at lists.andrew.cmu.edu>,
<Block-center-affiliated-faculty at lists.andrew.cmu.edu>,
<heinz-smt at lists.andrew.cmu.edu>, <heinz-faculty at lists.andrew.cmu.edu>,
<Heinz-all-faculty at lists.andrew.cmu.edu>,
<heinz-allmasters at lists.andrew.cmu.edu>, William Sanders <sanders at cmu.edu>,
Martial Hebert <mhebert at andrew.cmu.edu>, Zico Kolter <zkolter at cs.cmu.edu>,
Mona Diab <mdiab at andrew.cmu.edu>, Nicolas Christin <nicolasc at andrew.cmu.edu>



Dear Colleagues, FYI. Of interest to many of you.

Krishnan
---------- Forwarded message ---------
From: Rumman Chowdhury <rumman at humane-intelligence.org>
Date: Wed, Aug 21, 2024 at 11:27 AM
Subject: Humane Intelligence launches a NIST supported, nation-wide
generative AI red teaming event
To: Kristian Lum <kristianlum at gmail.com>, Patrick Hall <ph at hallresearch.ai>,
Schwartz, Reva B. (Fed) <Reva.schwartz at nist.gov>,
Rishi Bommasani <rishibommasani at gmail.com>, <nlprishi at stanford.edu>,
<HE_Ruimin at mci.gov.sg>, Seraphina Goldfarb-Tarrant <seraphina at cohere.com>,
<cjiahao at gmail.com>, <jiahchen at oti.nyc.gov>,
Aleksandra Korolova <korolova at princeton.edu>, David Haber <dh at lakera.ai>,
Eric Horvitz <horvitz at microsoft.com>, Ramayya Krishnan <rk2x at cmu.edu>
Cc: Theodora Skeadas <theodora at humane-intelligence.org>


Dear Bias Bounty board members,

We are excited to introduce a truly innovative step forward in the public
evaluation of Generative AI systems.


Humane Intelligence announces
<https://www.linkedin.com/posts/rumman_red-teaming-interest-sign-up-form-activity-7232027235585601536-JDqe?utm_source=share&utm_medium=member_desktop>
a U.S.-wide red-teaming event as part of NIST's ARIA pilot evaluation of
large language model (LLM) risks.


Tech nonprofit Humane Intelligence <http://www.humane-intelligence.org/>
announces a nation-wide call for participation from US residents interested
in finding flaws in Generative AI models, as well as from model owners
building GenAI-based office productivity tools. The online competition will
serve as
a qualifier for an in-person red teaming and blue teaming competition,
supported by NIST, to be held alongside CAMLIS, an AI security conference.
More details below and at this link
<https://www.humane-intelligence.org/_files/ugd/cdba2c_297ec9efcc4f4f6693a9fcfe95375b64.pdf>.
If you have any questions, please let me know.


We'd appreciate any amplification to your respective networks.

***

Call for Participation

NIST-Supported Nationwide AI Red-Teaming Exercise

Overview

We are excited to announce an upcoming AI red-teaming exercise supported by
the U.S. National Institute of Standards and Technology
<https://www.nist.gov/> (NIST). We are recruiting:


   - Individuals interested in red teaming models online (for the qualifying
     exercise) OR in-person
   - Model developers building generative AI office productivity software,
     including coding assistants, text and image generators, research tools,
     and more, and associated blue teams


Our goal is to demonstrate the capability to rigorously test and evaluate the
robustness, security, and ethical implications of cutting-edge AI systems
through adversarial testing and analysis. This exercise is crucial to ensuring
the resilience and trustworthiness of AI technologies.

Participation requirements for individuals seeking to red team:

Virtual qualifier

To participate, interested red teamers will need to enroll in the
qualifying event, a NIST ARIA <https://ai-challenges.nist.gov/aria> (Assessing
Risks and Impacts of AI) pilot exercise. In the ARIA pilot, red teaming
participants will seek to identify as many violative outcomes as possible
using predefined test scenarios as part of stress tests of model guardrails
and safety mechanisms. This virtual qualifier is open to anyone residing in
the US. For more details on ARIA and related scenarios, see here
<https://ai-challenges.nist.gov/aria>.

Red teaming participants who pass the ARIA pilot qualifying event will be
able to take part in an in-person red teaming exercise held during CAMLIS
<https://www.camlis.org/> (October 24-26).

In-person event:

This in-person exercise will include a hosted red team and blue team
evaluation using office productivity software that employs GenAI models. It
will use the “Artificial Intelligence Risk Management Framework: Generative
Artificial Intelligence Profile (NIST AI 600-1)” as the operative rubric for
violative outcomes.

During testing, red teamers will engage in adversarial interactions with
developer-submitted applications on a turn-by-turn basis. The challenge will
be a capture-the-flag (CTF)-style, points-based evaluation, with findings
verified by in-person event assessors; an analysis will aggregate the scores
into a final report.
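
As an illustration only (this is not the official scoring system), below is a
minimal sketch of how a CTF-style, points-based aggregation of
assessor-verified findings might look; the Finding structure, the
aggregate_scores function, the field names, and the point values are all
assumptions made for the example.

# Hypothetical illustration only; all names and values are assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    red_teamer: str         # participant who reported the violative outcome
    application: str        # developer-submitted application under test
    risk_category: str      # e.g., a NIST AI 600-1 GAI risk category
    points: int             # points assigned for this flag
    verified: bool = False  # True once an in-person assessor confirms it

def aggregate_scores(findings: list[Finding]) -> dict[str, int]:
    """Sum points per red teamer, counting only assessor-verified findings."""
    totals: dict[str, int] = {}
    for f in findings:
        if f.verified:
            totals[f.red_teamer] = totals.get(f.red_teamer, 0) + f.points
    return totals

findings = [
    Finding("alice", "chat-assistant", "data privacy", 10, verified=True),
    Finding("alice", "code-helper", "information security", 15, verified=False),
    Finding("bob", "chat-assistant", "harmful content", 20, verified=True),
]
print(aggregate_scores(findings))  # {'alice': 10, 'bob': 20}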

Who Should Participate: Applications from individuals with diverse
expertise are encouraged, including but not limited to:

   - AI researchers and practitioners
   - Cybersecurity professionals
   - Data scientists
   - Ethicists and legal professionals
   - Software engineers
   - Policymakers and regulators

Participation requirements for companies donating their models:

Model owners interested in participating in the in-person red teaming event
will be required to meet the following criteria:


   1. The model or product must utilize Generative AI technology.

   2. The model or product must be designed for workplace productivity. This
      is broadly defined as any technology enabling communication, coding,
      process automation, or any reasonably expected activity in a
      technology-enabled workplace that utilizes popular inter-office
      software such as chat, email, code repositories, and shared drives.

   3. The model or product owner must be willing to have their model tested
      for both positive and negative impacts related to vulnerability
      discovery, including program verification tools, automated code repair
      tools, fuzzing or other dynamic vulnerability discovery tools, and
      adversarial machine learning tools or toolkits.

   4. Optionally, the model or product owner may provide blue team support.

Full Event Details:

   Dates:

   - August 21, 2024: Application opens
   - September 9, 2024, 11:59 PM ET: Application for participants closes
   - September 16, 2024: Pilot launches US-wide
   - October 4, 2024: Pilot closes
   - October 11, 2024: Those selected for the in-person event announced and
     notified
   - October 24-25, 2024: CAMLIS in-person event


Objectives:

This event will demonstrate:

   - A test of the potential positive and negative uses of AI models, as
     well as a method of leveraging positive use cases to mitigate negative
     ones.
   - Use of NIST AI 600-1 to explore GAI risks and suggested actions as an
     approach for establishing GAI safety and security controls.


Participation Benefits:

   - Contribute to the advancement of secure and ethical AI.
   - Network with leading experts in AI and cybersecurity, including in U.S.
     government agencies.
   - Gain insights into cutting-edge AI vulnerabilities and defenses.
   - Participants in the qualifying red teaming event may be invited to
     participate in CAMLIS, scheduled for October 24-25, 2024, in Arlington,
     VA. All expenses for travel, food, and lodging during this time will be
     covered.


How to Apply: Interested individuals and model owners are requested to fill
out this Google Form
<https://docs.google.com/forms/d/e/1FAIpQLSfIC_ZDrNBVtcle1AGObpXg2iOxyfd-a6KMJeu0s_-b4EPwtA/viewform>
by Saturday, September 14, 2024, at 11:59 PM ET.

Contact Information

For any inquiries or further information, please contact Rumman Chowdhury
and Theodora Skeadas (hi at humane-intelligence.org) at Humane Intelligence
<https://humane-intelligence.org/>.

We look forward to your participation and to making strides towards a safer
and more ethical AI future together.

_______________________________________________
Heinz-all-faculty mailing list
Heinz-all-faculty at lists.andrew.cmu.edu
https://lists.andrew.cmu.edu/mailman/listinfo/heinz-all-faculty