update lists

This commit is contained in:
2025-07-18 22:22:32 +02:00
parent 55bed3b4a1
commit 5916c5c074
3078 changed files with 331679 additions and 357255 deletions


@@ -15,13 +15,64 @@ infer information about that algorithm.</p>
<h2 id="contents">Contents</h2>
<ul>
<li><a href="#papers">Papers</a></li>
<li><a href="#related-events">Related Events</a></li>
<li><a href="#related-events">Related Events
(conferences/workshops)</a></li>
</ul>
<h2 id="papers">Papers</h2>
<h3 id="section">2024</h3>
<h3 id="section">2025</h3>
<ul>
<li><a href="https://arxiv.org/abs/2504.00874">P2NIA: Privacy-Preserving
Non-Iterative Auditing</a> - (ECAI) <em>Proposes a mutually beneficial
collaboration for both the auditor and the platform: a
privacy-preserving and non-iterative audit scheme that enhances fairness
assessments using synthetic or local data, avoiding the challenges
associated with traditional API-based audits.</em></li>
<li><a
href="https://www.cambridge.org/core/services/aop-cambridge-core/content/view/9E8408C67F7CE30505122DD1586D9FA2/S3033373325000080a.pdf/the-fair-game-auditing-and-debiasing-ai-algorithms-over-time.pdf">The
Fair Game: Auditing &amp; debiasing AI algorithms over time</a> -
(Cambridge Forum on AI: Law and Governance) <em>Aims to simulate the
evolution of ethical and legal frameworks in society by creating an
auditor which sends feedback to a debiasing algorithm deployed around an
ML system.</em></li>
<li><a href="https://arxiv.org/pdf/2505.04796">Robust ML Auditing using
Prior Knowledge</a> - (ICML) <em>Formally establishes the conditions
under which an auditor can prevent audit manipulations using prior
knowledge about the ground truth.</em></li>
<li><a href="https://arxiv.org/abs/2501.02997">CALM: Curiosity-Driven
Auditing for Large Language Models</a> - (AAAI) <em>Auditing as a
black-box optimization problem where the goal is to automatically
uncover input-output pairs of the target LLMs that exhibit illegal,
immoral, or unsafe behaviors.</em></li>
<li><a href="https://arxiv.org/abs/2412.13021">Queries, Representation
&amp; Detection: The Next 100 Model Fingerprinting Schemes</a> - (AAAI)
<em>Divides model fingerprinting into three core components, to identify
100 previously unexplored combinations of these and gain insights into
their performance.</em></li>
</ul>
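<p>A minimal illustration of the query-based audits several of these
papers build on (and that P2NIA seeks to avoid): estimate a fairness
metric such as demographic parity from black-box API calls alone. This
is a generic sketch, not code from any of the cited papers; the
<code>query_model</code> function and the labelled audit sample are
hypothetical placeholders.</p>
<pre><code>import random

def query_model(x):
    """Placeholder for one call to the audited black-box prediction API."""
    raise NotImplementedError

def demographic_parity_gap(samples, n_queries=1000):
    """Estimate P(yhat=1 | group A) - P(yhat=1 | group B) from queries.

    samples: list of (features, group) pairs, with group in {"A", "B"}.
    """
    batch = random.sample(samples, min(n_queries, len(samples)))
    outcomes = {"A": [], "B": []}
    for x, group in batch:
        outcomes[group].append(query_model(x))  # expected to return 0 or 1
    rate_a = sum(outcomes["A"]) / max(len(outcomes["A"]), 1)
    rate_b = sum(outcomes["B"]) / max(len(outcomes["B"]), 1)
    return rate_a - rate_b</code></pre>
<p>In practice an auditor would add confidence intervals over the query
sample and a query-selection strategy; the entries above differ mainly
in what access and prior knowledge they assume.</p>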
<h3 id="section-1">2024</h3>
<ul>
<li><a href="https://arxiv.org/pdf/2411.05197">Hardware and software
platform inference</a> - (arXiv) <em>A method for identifying the
underlying GPU architecture and software stack of a black-box machine
learning model solely based on its input-output behavior.</em></li>
<li><a href="https://arxiv.org/abs/2407.13281">Auditing Local
Explanations is Hard</a> - (NeurIPS) <em>Gives the (prohibitive) query
complexity of auditing explanations.</em></li>
<li><a href="https://arxiv.org/abs/2409.00159">LLMs hallucinate graphs
too: a structural perspective</a> - (Complex Networks) <em>Queries LLMs
for known graphs and studies topological hallucinations. Proposes a
structural hallucination rank.</em></li>
<li><a href="https://arxiv.org/pdf/2402.08522">Fairness Auditing with
Multi-Agent Collaboration</a> - (ECAI) <em>Considers multiple agents
working together, each auditing the same platform for different
tasks.</em></li>
<li><a href="https://arxiv.org/pdf/2401.11194">Mapping the Field of
Algorithm Auditing: A Systematic Literature Review Identifying Research
Trends, Linguistic and Geographical Disparities</a> - (arXiv)
<em>Systematic review of algorithm auditing studies and identification
of trends in their methodological approaches.</em></li>
<li><a href="https://arxiv.org/pdf/2402.12572v1.pdf">FairProof:
Confidential and Certifiable Fairness for Neural Networks</a> -
Confidential and Certifiable Fairness for Neural Networks</a> - (Arxiv)
<em>Proposes an alternative paradigm to traditional auditing using
crytographic tools like Zero-Knowledge Proofs; gives a system called
FairProof for verifying fairness of small neural networks.</em></li>
@@ -40,17 +91,27 @@ href="https://github.com/bchugg/auditing-fairness">[Code]</a>
<em>Sequential methods that allow for the continuous monitoring of
incoming data from a black-box classifier or regressor.</em></li>
</ul>
<h3>2023</h3>
<ul>
<li><a href="https://neurips.cc/virtual/2023/poster/70925">Privacy
Auditing with One (1) Training Run</a> - (NeurIPS - best paper) <em>A
scheme for auditing differentially private machine learning systems with
a single training run.</em></li>
<li><a
href="https://www.sciencedirect.com/science/article/pii/S0306457322003259">Auditing
fairness under unawareness through counterfactual reasoning</a> -
(Information Processing &amp; Management) <em>Shows how to unveil
whether a black-box model that complies with the regulations is still
biased.</em></li>
<li><a href="https://arxiv.org/pdf/2206.04740.pdf">XAudit : A
Theoretical Look at Auditing with Explanations</a> - <em>Formalizes the
role of explanations in auditing and investigates if and how model
explanations can help audits.</em></li>
Theoretical Look at Auditing with Explanations</a> - (Arxiv)
<em>Formalizes the role of explanations in auditing and investigates if
and how model explanations can help audits.</em></li>
<li><a href="https://arxiv.org/pdf/2305.12620.pdf">Keeping Up with the
Language Models: Robustness-Bias Interplay in NLI Data and Models</a> -
<em>Proposes a way to extend the shelf-life of auditing datasets by
using language models themselves; also finds problems with the current
bias auditing metrics and proposes alternatives these alternatives
highlight that model brittleness superficially increased the previous
bias scores.</em></li>
(Arxiv) <em>Proposes a way to extend the shelf-life of auditing datasets
by using language models themselves; also finds problems with the
current bias auditing metrics and proposes alternatives these
alternatives highlight that model brittleness superficially increased
the previous bias scores.</em></li>
<li><a href="https://dl.acm.org/doi/pdf/10.1145/3580305.3599454">Online
Fairness Auditing through Iterative Refinement</a> - (KDD) <em>Provides
an adaptive process that automates the inference of probabilistic
@@ -390,7 +451,7 @@ Active Learning with Outcome-Dependent Query Costs</a> - (NIPS)
<em>Learns from a binary classifier paying only for negative
labels.</em></li>
</ul>
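<p>For the single-run privacy audit referenced above, the flavour of the
output can be sketched as follows: an attack guesses which canaries were
members of the training set, and its true/false positive rates are
converted into an empirical lower bound on epsilon via the standard
(epsilon, delta)-DP inequality. This is a schematic point estimate only,
assuming hypothetical <code>guesses</code> and <code>truth</code> flags;
it is not the paper's estimator, which also handles statistical
confidence.</p>
<pre><code>import math

def empirical_epsilon(guesses, truth, delta=1e-5):
    """Crude lower bound on epsilon from canary membership guesses.

    guesses, truth: equal-length lists of 0/1 flags (attack says
    "member" / the canary really was a member).  Any (eps, delta)-DP
    mechanism keeps TPR at most exp(eps) * FPR + delta, which
    rearranges into the bound returned below.
    """
    tp = sum(1 for g, t in zip(guesses, truth) if g and t)
    fp = sum(1 for g, t in zip(guesses, truth) if g and not t)
    pos = max(sum(truth), 1)
    neg = max(len(truth) - sum(truth), 1)
    tpr, fpr = tp / pos, fp / neg
    return math.log(max(tpr - delta, 1e-12) / max(fpr, 1e-12))</code></pre>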
<h3 id="section-1">2012</h3>
<h3 id="section-2">2012</h3>
<ul>
<li><a href="http://www.jmlr.org/papers/v13/nelson12a.html">Query
Strategies for Evading Convex-Inducing Classifiers</a> - (JMLR)
@@ -406,9 +467,31 @@ Learning</a> - (KDD) <em>Reverse engineering of remote linear
classifiers, using membership queries (see the sketch after this
list).</em></li>
</ul>
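<p>The membership-query attacks in the two entries above rest on a
simple primitive that can be sketched directly: binary-search the
segment between a point classified negative and one classified positive
to locate the decision boundary, then fit a hyperplane through several
recovered boundary points. This is a generic sketch of the idea, not
the specific algorithm of either paper; <code>query_label</code> stands
in for the remote classifier.</p>
<pre><code>import numpy as np

def query_label(x):
    """Placeholder for one membership query to the remote classifier."""
    raise NotImplementedError

def boundary_point(x_pos, x_neg, tol=1e-6):
    """Binary-search the segment between a negative and a positive point."""
    lo, hi = np.asarray(x_neg, float), np.asarray(x_pos, float)
    while np.linalg.norm(hi - lo) > tol:
        mid = (lo + hi) / 2
        if query_label(mid) == 1:   # midpoint still classified positive
            hi = mid
        else:                       # midpoint still classified negative
            lo = mid
    return (lo + hi) / 2

def recover_hyperplane(x_pos, negatives):
    """Fit w, b with w.x + b = 0 through the recovered boundary points."""
    pts = np.array([boundary_point(x_pos, xn) for xn in negatives])
    # The null direction of [pts | 1] approximates the hyperplane.
    A = np.hstack([pts, np.ones((len(pts), 1))])
    _, _, vt = np.linalg.svd(A)
    return vt[-1][:-1], vt[-1][-1]   # (w, b)</code></pre>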
<h2 id="related-events">Related Events</h2>
<h3 id="section-3">2025</h3>
<ul>
<li><a href="https://project.inria.fr/aimlai/">AIMLAI at ECML/PKDD
2025</a></li>
<li><a
href="https://aaai.org/conference/aaai/aaai-25/workshop-list/#ws06">AAAI
workshop on AI Governance: Alignment, Morality, and Law</a></li>
</ul>
<h3 id="section-4">2024</h3>
<ul>
<li><a href="https://www.ircg.msm.uni-due.de/ai/">1st International
Conference on Auditing and Artificial Intelligence</a></li>
<li><a href="https://regulatableml.github.io/">Regulatable ML Workshop
(RegML24)</a></li>
</ul>
<h3 id="section-5">2023</h3>
<ul>
<li><a href="https://cscw-user-ai-auditing.github.io/">Supporting User
Engagement in Testing, Auditing, and Contesting AI (CSCW User AI
Auditing)</a></li>
<li><a href="https://algorithmic-audits.github.io">Workshop on
Algorithmic Audits of Algorithms (WAAA)</a></li>
<li><a href="https://regulatableml.github.io/">Regulatable ML Workshop
(RegML23)</a></li>
</ul>
<p><a
href="https://github.com/erwanlemerrer/awesome-audit-algorithms">auditalgorithms.md
on GitHub</a></p>