Name of Presenter: Sergio Santamarina (Librarian)

Library, School, or Organization Name: Universidad Nacional de José C. Paz (UNPAZ)

Co-Presenter Name(s): -

Area of the World from Which You Will Present: Argentina

Language in Which You Will Present: English

Target Audience(s): Librarians, researchers, academic staff, publishers, students, and anyone involved in scholarly communication or AI policy development.

Short Session Description (one line): Introducing a free, open-source web tool and methodology for implementing a universal AI use disclosure framework, ensuring transparency and accountability in academic and library work.

Full Session Description (as long as you would like):

The academic and library worlds have reached a rare and vital consensus: AI cannot be an author, human responsibility is paramount, and disclosure of substantive AI use is non-negotiable. Major publishers like Elsevier, Springer Nature, Wiley, Taylor & Francis, and SAGE have all enshrined these principles. Yet, the critical question remains: how do we actually implement this consensus in our daily work?

Librarians are on the front lines of this challenge. We are helping researchers navigate new AI tools, developing institutional policies, and struggling with how to verify integrity in a world of generative text. There is a pressing need for practical, user-friendly tools that move us from abstract principles to concrete action.

In this session, we will present the AITD (AI Transparency Declaration) Generator, a free, open-source web tool developed to address this exact need. Born from the principles of no AI authorship, total human responsibility, mandatory disclosure for extensive use, critical verification, and an essential human-in-the-loop schema, this tool provides a structured methodology for declaring AI use in any document or project.

We will demonstrate how the tool works, guiding users through a simple, transparent process to:

  • Declare the specific AI tools and models used.

  • Specify the nature and extent of AI involvement (e.g., brainstorming, editing, coding, data analysis).

  • Confirm adherence to the core principles of human oversight and critical verification.

  • Generate a clear, standardized disclosure statement that can be appended to papers, theses, reports, or library guides.
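The steps above could be sketched as a small data structure plus a formatter. This is purely an illustrative sketch, not the actual AITD Generator code; the class name, fields, and statement wording are all hypothetical assumptions about how such a tool might map user answers to a standardized disclosure.

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """Hypothetical sketch of a structured AI use declaration."""
    tools: list           # e.g. ["ChatGPT (GPT-4)", "GitHub Copilot"] -- declared AI tools/models
    uses: list            # e.g. ["editing", "coding"] -- nature and extent of involvement
    human_verified: bool  # confirms human oversight and critical verification

    def statement(self) -> str:
        """Produce a plain-text disclosure suitable for appending to a document."""
        if not self.tools:
            return "No AI tools were used in the preparation of this work."
        lines = [
            "AI Transparency Declaration:",
            f"AI tools used: {', '.join(self.tools)}.",
            f"Nature of AI involvement: {', '.join(self.uses)}.",
        ]
        if self.human_verified:
            lines.append(
                "All AI-assisted content was critically reviewed and verified "
                "by the human author(s), who accept full responsibility for "
                "the final work."
            )
        return "\n".join(lines)

# Example: a declaration for a paper where AI assisted with editing
decl = AIDisclosure(
    tools=["ChatGPT (GPT-4)"],
    uses=["editing", "brainstorming"],
    human_verified=True,
)
print(decl.statement())
```

The same structure could be serialized (e.g. to JSON) to produce the machine-readable counterpart of the human-readable statement.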

This session is not just a software demo. It is a case study in developing a practical, principle-driven response to a global challenge. We will discuss the methodology behind the tool, how it maps directly to the publisher consensus, and how libraries can champion its adoption to foster a culture of transparency and integrity. We will also explore the broader implications for library instruction, reference services, and institutional policy-making in the age of AI. Participants will leave with a concrete resource they can use immediately and a framework for thinking about AI disclosure in their own contexts.

Websites / URLs Associated with Your Session:


Replies

  • "Born from the principles of no AI authorship, total human responsibility, mandatory disclosure for extensive use, critical verification" ... I'm sold based on this alone!

I appreciate that 1. this is open-source, and 2. you are presenting it in a run-through at a conference to get actual user feedback and responses.

    Depending on what the resulting statement is, "generating" might be misleading IN THIS AGE OF "ai-generated" STATEMENTS. Is it really just selecting phrases based on user answers or keywords? That would be "generated," yes, but people might think it would be "constructed" or "filled." I am a pedant, so feel free to ignore this part of my comment.

    • Thank you for your thoughtful comments! You raise a good point. Perhaps "AITD Self-Audit Tool" would be more accurate? I chose "Generator" because the tool produces both a machine-readable code snippet and a plain text statement that can be easily inserted where disclosure is required or desirable—whether in academic papers, institutional repositories, or library guides.

      The goal is to make compliance with AI disclosure policies as simple and standardized as possible. I appreciate you helping me think through the naming!
