17th Annual
AI-based Hardware Attack Challenge
US-Canada, MENA, India
US-Canada Region Sponsored By: NORDTECH
It’s time to think a little differently about the capabilities of generative AI for chip design.
Using generative AI (e.g. ChatGPT, Claude, Gemini, or similar), you will insert hardware vulnerabilities, such as Trojans or backdoors, into an open-source digital design of your choice (e.g. OpenTitan, Ariane, or a design from OpenCores). The resulting vulnerabilities must be simulatable and synthesizable, and you must be able to demonstrate their effects (e.g. with a Hardware CWE classification and a CVSS score). A successful submission will include all prompts and responses from the language model, a document detailing your methodology, and detailed demonstrations of your exploits. You may use and modify existing tools and frameworks as you see fit. Points will be awarded for subtle yet powerful exploits, creative AI usage, tool integration, and valid use cases.
Methodology
- Choose an open-source chip-design project (e.g. OpenTitan, a design from OpenCores)
- Leveraging generative AI tools, do the following:
  - Identify security assets
  - Compromise a chosen asset via bug insertion
  - Design an exploit that uses that compromise to perform an attack
  - Basically: create a bug and then exploit the bug!
- Document your methods for using AI to develop and insert these security bugs
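The "create a bug and then exploit the bug" pattern above can be sketched in a toy software model. This is purely illustrative and all names are hypothetical; a real entry would express the same logic in the RTL of the chosen design. The sketch models a classic counter-based "time bomb" Trojan: a password comparator behaves correctly until a hidden counter reaches an attacker-chosen trigger count, after which it accepts any input.

```python
# Hypothetical software model of a counter-triggered hardware Trojan.
# In a real submission, the equivalent logic would be inserted into the
# RTL (e.g. a Verilog always-block) of the chosen open-source design.

TRIGGER_COUNT = 1000  # attacker-chosen trigger; rare enough to evade routine tests

class TrojanedComparator:
    """Password check that silently degrades after TRIGGER_COUNT queries."""

    def __init__(self, secret: int):
        self.secret = secret
        self.cycles = 0  # models a hidden hardware counter

    def check(self, guess: int) -> bool:
        self.cycles += 1
        if self.cycles >= TRIGGER_COUNT:
            return True              # payload: accept anything once triggered
        return guess == self.secret  # normal, spec-compliant behaviour

def exploit(dev: TrojanedComparator) -> bool:
    """Exploit: an attacker who knows the trigger simply waits it out."""
    for _ in range(TRIGGER_COUNT):
        if dev.check(0xDEAD):  # deliberately wrong guess advances the counter
            return True        # unlocked without ever knowing the secret
    return False
```

Before the trigger fires, the device passes any functional test of the password check, which is what makes this class of bug subtle; the exploit is the demonstration that the compromise is actually usable for an attack.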
Judging criteria
- Open-source:
  - The design you add a vulnerability to must be open source, as must any additional tooling you choose to create to support your vulnerability insertion.
  - While your design must be open-source, you may leverage non-open-source platforms (e.g. ModelSim, Vitis, Synopsys tools) and LLMs (OpenAI, Anthropic, etc.)
Creative AI Usage:
-
A higher score will be given for more interesting or creative work: e.g. training an open-source LLM, creative or novel prompt engineering strategies, or developing a tool to automate bug insertion. This is intentionally very open-ended, so be creative!
-
-
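As one hypothetical shape for the "tool to automate bug insertion" idea above, the sketch below shows a minimal insertion loop: read a module's RTL, prompt a model to modify it, and log the result. Here `ask_llm` is a stub so the sketch is self-contained; a real tool would replace it with an actual client (e.g. the OpenAI or Anthropic SDK), and the prompt and file names are illustrative only.

```python
# Sketch of an automated bug-insertion loop. ask_llm is a placeholder;
# a real tool would wrap an actual LLM client (e.g. the OpenAI or
# Anthropic SDK) and should log every prompt/response pair, since
# end-to-end logs are part of the judging criteria.

from pathlib import Path

PROMPT = (
    "You are auditing the Verilog module below. Insert a subtle, "
    "synthesizable backdoor that is hard to spot in code review, and "
    "return only the modified module.\n\n{rtl}"
)

def ask_llm(prompt: str) -> str:
    # Placeholder: echoes the RTL back unchanged so this sketch runs
    # without network access. Swap in a real API call here.
    return prompt.split("\n\n", 1)[1]

def insert_bug(rtl_path: Path, out_path: Path) -> str:
    """Run one insertion pass over a single RTL file."""
    rtl = rtl_path.read_text()
    modified = ask_llm(PROMPT.format(rtl=rtl))
    out_path.write_text(modified)  # keep the modified design for the report
    return modified
```

A fuller tool might iterate this over every module in a design, or feed simulation results back into the prompt to check that the inserted bug survives the testbench.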
Usefulness of targeted design:
-
More popular / more broadly accessible designs will be worth more
-
-
Vulnerability demonstration:
-
The more vulnerabilities, the more points!
-
Each vulnerability will be scored:
-
End-to-end “logs” of the tool creating vulnerabilities
-
Vulnerability creativity
-
Vulnerability subtlety
-
Severity of the vulnerability (e.g. theoretical CVSS score)
-
Vulnerability exploits (e.g. in simulation or videos of reconfigurable HW)
-
-
-
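Since severity may be reported as a theoretical CVSS score, a small helper can keep scores consistent across vulnerabilities. Below is a minimal sketch of the CVSS v3.1 base-score formula, restricted to the Scope:Unchanged case; the metric weights and the roundup rule come from the CVSS v3.1 specification.

```python
# CVSS v3.1 base-score sketch, Scope:Unchanged case only.
# Metric weights per the CVSS v3.1 specification.

AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required (S:U)
UI  = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(v: float) -> float:
    """Round up to one decimal place, as defined in CVSS v3.1 Appendix A."""
    n = int(round(v * 100000))
    return n / 100000.0 if n % 10000 == 0 else (n // 10000 + 1) / 10.0

def base_score(av: str, ac: str, pr: str, ui: str,
               c: str, i: str, a: str) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Example: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8 (Critical).
```

Note that many hardware bugs change scope (a compromised peripheral affecting the rest of the SoC), which uses different PR weights and impact equations; this sketch covers only the simple unchanged-scope case.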
Documentation:
-
Instructions to "reproduce" your results (i.e. your methodology)
-
Insights into what went well, what was challenging, and any creative solutions you needed to work with the AI
-
Submission Guidelines
Round 1 Submission
- Detailed description of the intended vulnerabilities and designs
  - Should include explanations of what's being targeted and why
  - Should include proposed vulnerabilities or methods of attack
- Current progress report
  - Should include any relevant code/prompts for the AI
  - Should include preliminary results, if any have been gathered
Round 2 Submission
- Completed designs with vulnerabilities inserted
- Detailed report discussing the methodology you used
- Presentation (length TBD)
- Poster (details TBD)
Awards
First Place: $1000
Second Place: $750
Third Place: $500