LCPL Workbooks

We are still actively editing this. Staff are currently experiencing some pretty big changes that are taxing our ability to prioritize, which is going to keep our exploration limited for a while, but this is important.


https://docs.google.com/document/d/1GFAA3lddKfjWcp8nrNEmsLZoKb0kA1dEZv9Yjnvh7vY/edit?usp=sharing


Replies

  •  Hi, Katie Tyson and LCPL team!

    Thank you for sharing your work in progress. I’m really impressed with how much forward movement you’ve made despite the challenges you named. There’s a clear sense of steady, thoughtful momentum here.

    Your AI commitments stand out as especially strong. They are values-anchored, realistic, and very “library-centered,” particularly the emphasis on privacy, access, sustainability, and choosing human connection when it matters most. That balance comes through consistently across the roadmap.

    Other strengths I noticed:

    • The clarity around roles, decision rights, and accountability. This sets you up well for avoiding confusion later.

    • A pragmatic approach to training, especially the idea of peer “experts” supporting others rather than expecting everyone to become an AI power user.

    • Strong awareness of where risks actually live day to day (front desk interactions, emails, schedules), which makes the privacy work feel grounded and actionable.

    • A refreshingly honest take on disclosure and trust. Naming the potential downsides shows real empathy for community perceptions.

    As you continue, a few ideas you might explore:

    • Picking one or two small pilots to test your “yellow light” areas and learn from them before scaling.

    • Turning some of this thinking into simple, staff-friendly artifacts (checklists, talking points, or examples) to make it easier to apply in real moments.

    • Revisiting your vision and commitments after an initial pilot to see what needs tightening or clarifying.

    Overall, this feels like a strong, values-aligned foundation that’s well positioned to evolve as you continue your AI journey!
