-
-
Notifications
You must be signed in to change notification settings - Fork 140
Educational Resources
Sandy Dunn edited this page Nov 15, 2023
·
60 revisions
Links to articles, papers, books, etc. that contain useful educational materials relevant to the project.
Please add to this list!
Institution | Date | Title and Link |
---|---|---|
NIST | 8-March-2023 | White Paper NIST AI 100-2e2023 (Draft) |
UK Information Commisioner's Office (ICO) | 3-April-2023 | Generative AI: eight questions that developers and users need to ask |
UK National Cyber Security Centre (NCSC) | 2-June-2023 | ChatGPT and large language models: what's the risk? |
UK National Cyber Security Centre (NCSC) | 31 August 2022 | Principles for the security of machine learning |
European Parliament | 31 August 2022 | EU AI Act: first regulation on artificial intelligence |
Publication | Author | Date | Title and Link |
---|---|---|---|
Deloitte | Deloitte AI Institute | 13-Mar-23 | A new frontier in artificial intelligence - Implications of Generative AI for businesses |
Team8 | Team8 CISO Village | 18-Apr-23 | Generative AI and ChatGPT Enterprise Risks |
Trail of Bits | Heidy Khlaaf | 7-Mar-23 | Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems |
Security Implications of ChatGPT | Cloud Security Alliance (CSA) | 23-Apr-2023 | Security Implications of ChatGPT |
Service | Channel | Date | Title and Link |
---|---|---|---|
YouTube | RALFKAIROS | 05-Feb-23 | ChatGPT for Attack and Defense- AI Risks: Privacy, IP, Phishing, Ransomware-By Avinash Sinha |
YouTube | AI Explained | 25-Mar-23 | 'Governing Superintelligence' - Synthetic Pathogens, The Tree of Thoughts Paper and Self-Awareness |
YouTube | LiveOverflow | 14-Apr-23 | 'Attacking LLM - Prompt Injection' |
YouTube | LiveOverflow | 27-Apr-23 | 'Accidental LLM Backdoor - Prompt Tricks' |
YouTube | LiveOverflow | 11-May-23 | 'Defending LLM - Prompt Injection' |
YouTube | Cloud Security Podcast | 30-May-23 | 'CAN LLMs BE ATTACKED!' |
YouTube | API Days | 28-Jun-23 | Language AI Security at the API level: Avoiding Hacks, Injections and Breaches |
Service | Channel | Date | Title and Link |
---|---|---|---|
YouTube | API Days | 28-Jun-23 | Securing LLM and NLP APIs: A Journey to Avoiding Data breaches, Attacks and More |
Name | Type | Note | Link |
---|---|---|---|
SecDim | Attack and Defence | An attack and defence challenge where players should protect their chatbot secret phrase while attacking other players chatbot to exfiltrate theirs. | https://play.secdim.com/game/ai-battle |
GPT Prompt Attack | Attack | Goal of this game is to come up with the shortest user input that tricks the system prompt into returning the secret key back to you. | https://ggpt.43z.one |
Gandalf | Attack | Your goal is to make Gandalf reveal the secret password for each level. However, Gandalf will level up each time you guess the password, and will try harder not to give it away | https://gandalf.lakera.ai |