Week 8: AI-Assisted Final Project Development

CSCI 5773: Introduction to Emerging Systems Security

Course Level: Graduate
Project Duration: 8 Weeks (Two Months)
Prerequisites: CS background, security fundamentals, programming experience


Document Purpose

This tutorial guides you through leveraging AI tools strategically for your emerging systems security final project. Given the sensitive nature of security research, special emphasis is placed on ethical considerations, responsible disclosure, and maintaining academic integrity while using AI as an intelligent research assistant.


⚠️ Critical Security and Ethics Notice

Before You Begin:

Security research carries unique responsibilities. You must:

  1. Work only in controlled, authorized environments - Never test attacks on systems you don't own or have explicit written permission to test
  2. Follow responsible disclosure practices - If you discover real vulnerabilities, follow proper disclosure protocols
  3. Respect privacy and legal boundaries - Even in academic settings, certain attacks may be illegal if deployed improperly
  4. Consider ethical implications - Your work could potentially be misused; design with defense in mind
  5. Consult your instructor before pursuing any project involving:
    • Real-world systems or devices
    • Human subjects or biometric data
    • Potentially destructive attacks
    • Reverse engineering of commercial products

AI-Specific Ethics:

  • Do not use AI to generate malicious code for deployment
  • Do not ask AI to help bypass security measures on unauthorized systems
  • Use AI to understand security concepts, not to cause harm

Timeline Overview

Weeks 9-10 (Days 1-14): Project planning, threat modeling, literature review
Weeks 11-13 (Days 15-42): Implementation, controlled testing, refinement
Weeks 14-15 (Days 43-56): Evaluation, defense mechanisms, documentation
Week 16 (Days 57-60): Final presentation and demo preparation


Phase 1: Project Planning & Threat Analysis

Weeks 9-10 (14 Days)

1.1 Security Project Selection

Objective: Identify a security research topic that is ethical, feasible, impactful, and demonstrates deep understanding of course concepts.

Project Categories for Emerging Systems Security:

  1. Attack Demonstration (Red Team): Implement and demonstrate a known attack in a controlled environment
  2. Defense Mechanism (Blue Team): Develop countermeasures or detection systems
  3. Security Analysis: Audit or analyze security properties of systems/protocols
  4. Comparative Study: Evaluate multiple security approaches or tools
  5. Novel Threat Investigation: Research emerging attack vectors

AI-Assisted Topic Ideation:

"I'm a graduate student in emerging systems security. I've studied:
- Adversarial attacks, side-channel attacks, covert channels, backdoors
- Security in wireless sensing, biometrics, robotics, AI/LLM systems, VR/AR
- Federated learning privacy, LLM agent security, secure computation
- Countermeasures and security fairness

Suggest 5 security research project ideas that:
1. Combine attack and defense perspectives
2. Are ethical and can be done in controlled environments
3. Are feasible in 8 weeks with standard lab equipment
4. Address real security concerns in emerging systems
5. Would make compelling graduate-level demonstrations

For each, specify: threat model, testbed requirements, and expected outcomes."

Topic Integration Examples:

"Propose projects that integrate multiple course topics:
- Side-channel attacks + Biometric authentication
- Adversarial ML + AI/LLM systems security
- Covert channels + IoT/VR/AR wireless sensing
- Backdoor detection + Security fairness
- Federated learning privacy + Secure aggregation
- LLM agent security + Prompt injection defenses
- Robotic systems security + Sensor spoofing

Focus on topics from weeks 4-7 and 9-15."

1.2 Threat Modeling and Scope Definition

Days 1-5 Activities:

Step 1: Define Your Threat Model

Use AI to formalize your security assumptions:

"Help me develop a threat model for [your project idea]:

System description: [brief description]
Assets to protect: [data, privacy, availability, etc.]

Define:
1. Adversary capabilities (what can attacker control?)
2. Adversary goals (what does attacker want?)
3. Security assumptions (what do we trust?)
4. Attack surface (what can be targeted?)
5. Out-of-scope threats (what won't you address?)

Format this using STRIDE or PASTA methodology."

Step 2: Assess Feasibility and Risk

"Evaluate the feasibility of this security project for an 8-week timeline:

Project: [description]
Required resources: [hardware, software, expertise]

Assess:
1. Technical complexity (can a grad student implement this?)
2. Legal/ethical risks (what could go wrong?)
3. Resource availability (do I have what I need?)
4. Reproducibility (can others verify my work?)
5. Potential roadblocks and mitigation strategies"

1.3 Literature Review and Prior Art

Days 6-14 Activities:

Security-Specific Research Strategy:

Step 1: Find Academic Security Research

"Search for recent research papers (2020-2025) on [your topic]:
- What attacks have been demonstrated?
- What defenses exist?
- What are open problems?
- Which conferences/venues publish this research? (IEEE S&P, USENIX 
  Security, CCS, NDSS, NeurIPS, ICML, AAAI, etc.)

Summarize key findings and identify research gaps."

Use specialized search tools:

  • arXiv.org (cs.CR - Cryptography and Security, cs.AI, cs.LG)
  • IEEE Xplore (filter by security conferences)
  • Google Scholar (filter by recent years)
  • Semantic Scholar (for AI/ML security papers)

Step 2: Understand Attack Techniques

"Explain [specific attack type] in the context of emerging systems (IoT, VR/AR, LLM, robotics, etc.):
1. How does the attack work technically?
2. What vulnerabilities does it exploit?
3. What are real-world examples?
4. How can it be detected or prevented?
5. What tools/frameworks are used to implement it?

Provide references to seminal papers and recent developments."

Step 3: Survey Defense Mechanisms

"What are state-of-the-art defenses against [your target attack]?

For each defense:
1. Underlying principle
2. Effectiveness (how well does it work?)
3. Performance overhead
4. Deployment challenges
5. Limitations and bypass techniques

Compare at least 3-4 different approaches."

Step 4: Identify Tools and Frameworks

"What security tools, frameworks, and testbeds are commonly used for 
[your research area]?

Include:
- Attack simulation tools
- Analysis frameworks
- Dataset repositories
- Evaluation metrics
- Standard benchmarks

Provide setup complexity and learning curve for each."

1.4 Project Proposal Development

Deliverables for Phase 1:

Component 1: Security Problem Statement

"Refine my security research problem statement:

Draft: [your initial statement]

Improve by:
1. Clearly stating the security threat/challenge
2. Explaining why existing solutions are insufficient
3. Defining the technical gap you'll address
4. Specifying measurable security objectives

Make it suitable for a graduate-level security project proposal."

Component 2: Attack/Defense Approach

"I plan to [implement attack X / develop defense Y / analyze system Z].

My approach: [technical description]

Evaluate:
1. Is this approach technically sound?
2. What prior work should I build upon?
3. What are potential failure modes?
4. How will I validate my results?
5. What are alternative approaches?

Consider both attacker and defender perspectives."

Component 3: Experimental Design

"Design an experimental evaluation for my security project:

Hypothesis: [what you're testing]
Environment: [testbed, devices, software]

Specify:
1. Controlled variables
2. Independent/dependent variables
3. Success metrics (attack success rate, detection accuracy, overhead, etc.)
4. Baseline comparisons
5. Expected results and how to interpret them"

Component 4: Ethical Considerations

Include in your proposal:

  • How you'll ensure research is conducted ethically
  • Permission/authorization for testing
  • Data privacy considerations
  • Potential for misuse and mitigation
  • Responsible disclosure plan (if applicable)

Phase 1 Deliverable:

  • 3-4 page project proposal including threat model, approach, and ethics statement
  • Annotated bibliography (12-15 security papers)
  • Detailed timeline with weekly milestones

Phase 2: Threat Modeling & Architecture

Week 11 (Days 15-21)

2.1 Detailed Attack/Defense Architecture

For Attack-Focused Projects:

"Design an attack architecture for [specific attack]:

Target system: [description]
Attack vector: [entry point]

Provide:
1. Attack kill chain (reconnaissance → exploitation → post-exploitation)
2. Required attacker capabilities and positioning
3. Technical steps with tools needed
4. Expected artifacts and detection signatures
5. Diagram of attack flow

Use a framework like MITRE ATT&CK for mobile/IoT/ICS as applicable."

For Defense-Focused Projects:

"Design a defense architecture for protecting against [threat]:

System to protect: [description]
Threat model: [adversary capabilities]

Provide:
1. Defense-in-depth strategy (multiple layers)
2. Detection mechanisms
3. Prevention controls
4. Response procedures
5. Architecture diagram showing protected assets and security controls"

2.2 Testbed and Environment Setup

Controlled Environment Design:

"I need to set up a testbed to safely evaluate [attack/defense]:

Requirements:
- Must isolate from production systems
- Should be realistic but controllable
- Need to collect detailed logs and metrics

Suggest:
1. Virtualization or physical isolation approach
2. Network configuration (air-gapped, VLANed, etc.)
3. Required hardware/software
4. Monitoring and logging setup
5. Safety mechanisms to prevent accidental exposure"

For IoT/Embedded Device Testing:

"I'm testing security of IoT/embedded devices. Design a safe testbed:

Devices: [list]
Attack type: [description]

Include:
1. Network isolation strategy
2. How to prevent device "calling home"
3. Packet capture and analysis setup
4. How to reset devices between tests
5. Data storage and analysis plan"

For VR/AR System Testing:

"I'm testing security/privacy of a VR/AR system. Design a safe testbed:

Platform: [Meta Quest, HoloLens, custom, etc.]
Attack type: [sensor inference, visual side-channel, motion tracking, etc.]

Include:
1. Environment isolation (physical and network)
2. Data capture setup (sensor logs, network traffic, rendering traces)
3. Privacy safeguards for any human participants
4. Reproducibility across sessions
5. IRB considerations if human subjects are involved"

For LLM/AI System Testing:

"I'm testing security of an LLM-based or AI-driven system. Design a safe testbed:

System: [LLM agent, chatbot, multimodal model, etc.]
Attack type: [prompt injection, jailbreaking, data poisoning, model extraction, etc.]

Include:
1. API isolation and rate limiting strategy
2. Input/output logging for reproducibility
3. Cost management for API-based models
4. Baseline model behavior documentation
5. Ethical guardrails for adversarial testing"

2.3 Tool Selection and Capability Assessment

Security Tool Evaluation:

"Compare security tools for [your specific need]:

Options I'm considering: [Tool A, Tool B, Tool C]
Use case: [what you need to accomplish]

Evaluate each on:
1. Capabilities and limitations
2. Learning curve and documentation
3. Compatibility with your target platform
4. Community support and updates
5. Legal/licensing considerations

Recommend the best choice with justification."

Common Tool Categories:

  • Network analysis: Wireshark, tcpdump, Zeek
  • Mobile security: Frida, objection, MobSF, apktool
  • IoT analysis: Firmware analysis toolkit, Binwalk, QEMU
  • Side-channel: ChipWhisperer, oscilloscopes + analysis software
  • Adversarial ML: Foolbox, CleverHans, ART (Adversarial Robustness Toolbox)
  • LLM security: Garak, PyRIT, PromptBench, LLM-Guard
  • Federated learning: Flower, PySyft, TensorFlow Federated
  • VR/AR security: Unity/Unreal Engine, OpenXR, XR-specific analysis tools
  • Penetration testing: Metasploit, Burp Suite, Kali Linux tools
  • Robotics security: ROS 2 Security, Isaac Sim, Gazebo

2.4 Implementation Planning

Breaking Down Complex Security Tasks:

"Break down my security project into implementable components:

Project goal: [high-level objective]
Timeline: 3 weeks (Weeks 11-13)

Create:
1. Week-by-week breakdown
2. Dependencies between components
3. Proof-of-concept milestones
4. Testing checkpoints
5. Risk mitigation for each component

Prioritize: what to build first to validate feasibility?"

Deliverable for Week 11:

  • Detailed architecture diagram (attack/defense)
  • Testbed setup documentation
  • Tool evaluation and selection rationale
  • Week-by-week implementation plan

Phase 3: Implementation

Weeks 12-13 (Days 22-42)

3.1 Ethical Implementation Guidelines

Before writing any code:

"Review my implementation plan for ethical concerns:

Plan: [description of what you'll build]
Target: [what you'll test it on]
Environment: [where you'll test]

Check for:
1. Any potential for unintended harm
2. Data privacy issues
3. Legal considerations
4. Need for IRB approval (if testing on humans)
5. Responsible disclosure requirements

Flag any red flags and suggest mitigations."

3.2 Attack Implementation (Red Team Projects)

Week 12 Focus: Core attack mechanism

Responsible Attack Development:

❌ NEVER: "Write code to exploit [real system/commercial product]"

✅ INSTEAD: "Explain the theory behind [attack type] and how it's 
typically implemented. What are the key steps? What detection signatures 
might it leave?"

Then: "Here's my pseudocode for demonstrating this attack in a controlled 
environment: [paste code]. Review for correctness and suggest improvements."

Understanding Before Implementing:

"I want to demonstrate [side-channel attack / adversarial attack / etc.].

First explain:
1. What information does this attack leak/manipulate?
2. What measurements or observations are needed?
3. What signal processing or analysis is required?
4. What are the technical challenges?
5. How do I know if the attack succeeded?

Then provide a high-level implementation strategy."

Instrumentation and Logging:

"I need to instrument my attack implementation to collect data:

Attack: [description]

Help me:
1. Identify what to log (timing, power, network traffic, etc.)
2. Design logging that doesn't interfere with attack
3. Suggest visualization approaches
4. Plan for data storage and analysis
5. Create reproducibility documentation"

3.3 Defense Implementation (Blue Team Projects)

Week 12 Focus: Detection/prevention mechanism

Defense Architecture:

"I'm implementing a defense against [threat]:

Defense strategy: [your approach]

Help me design:
1. Input sources (what data do I monitor?)
2. Detection logic (rules, ML model, heuristics?)
3. Response actions (alert, block, throttle?)
4. False positive mitigation
5. Performance optimization for real-time operation"

Anomaly Detection Systems:

"Design an anomaly detection system for [emerging system context: IoT, VR/AR, LLM, robotics, etc.]:

Normal behavior: [description]
Anomalies to detect: [attack signatures]

Suggest:
1. Feature extraction from raw data
2. Detection algorithm (statistical, ML-based, rule-based?)
3. Training data requirements
4. Threshold tuning strategy
5. Evaluation metrics (TPR, FPR, F1, etc.)"

3.4 Code Development with AI Assistance

Secure Coding Practices:

"Review my security-critical code for vulnerabilities:

[paste code]

Check for:
1. Input validation issues
2. Buffer overflows or memory safety
3. Race conditions
4. Cryptographic mistakes
5. Information leaks

Suggest fixes with explanations."

Optimization for Security Contexts:

"This code needs to run in [constrained environment]:

[paste code]

Optimize for:
1. Minimal resource footprint
2. Real-time performance requirements
3. Evasion of detection (if red team project)
4. Reliability under adversarial conditions

Maintain security properties while improving efficiency."

3.5 Testing and Validation

Week 13 Focus: Validation and refinement

Attack Validation:

"Design test cases to validate my attack implementation:

Attack: [description]
Success criteria: [what constitutes success]

Create test suite with:
1. Positive cases (attack should work)
2. Negative cases (attack should fail)
3. Edge cases and boundary conditions
4. Robustness tests (noisy environments, moving targets, etc.)
5. Repeatability tests"

Defense Validation:

"How do I validate my defense mechanism works?

Defense: [description]
Threat model: [what it defends against]

Design validation that tests:
1. Detection of known attacks
2. Handling of benign traffic (false positive rate)
3. Evasion resistance (adversarial testing)
4. Performance under load
5. Graceful degradation

Include both in-sample and out-of-sample testing."

Debugging Security Code:

"My attack/defense isn't working as expected:

Expected: [behavior]
Actual: [what's happening]
Code: [relevant portions]
Environment: [setup details]

Help me debug:
1. What could cause this discrepancy?
2. How to isolate the issue?
3. What diagnostic steps should I take?
4. Are my assumptions about the system correct?"

Deliverable for Weeks 12-13:

  • Working implementation (attack or defense)
  • Test suite with documented results
  • Performance measurements
  • Security analysis of own implementation

Phase 4: Evaluation & Countermeasures

Week 14 (Days 43-49)

4.1 Security Evaluation Methodology

Comprehensive Security Assessment:

"Design a rigorous evaluation for my security project:

Project type: [attack/defense/analysis]
Claims: [what I claim to demonstrate]

Create evaluation protocol:
1. Security metrics (success rate, detection rate, false positives, etc.)
2. Performance metrics (time, resources, scalability)
3. Robustness tests (variations in environment, adaptive adversaries)
4. Comparison baselines (prior work, alternative approaches)
5. Statistical analysis plan

Make this publication-quality for a graduate thesis."

4.2 Attack Success Rate Analysis

For Red Team Projects:

"I need to evaluate my attack's effectiveness:

Attack: [description]
Test scenarios: [varied conditions]
Data collected: [metrics]

Analyze:
1. Success rate under different conditions
2. Required attacker effort (time, resources, positioning)
3. Attack detectability (what traces does it leave?)
4. Robustness to countermeasures
5. Comparison to existing attack techniques

Present results clearly with confidence intervals."

4.3 Defense Effectiveness Evaluation

For Blue Team Projects:

"Evaluate my defense mechanism's effectiveness:

Defense: [description]
Attack dataset: [description of attacks tested]

Calculate and interpret:
1. Detection accuracy (TP, TN, FP, FN rates)
2. Precision, Recall, F1-score
3. Receiver Operating Characteristic (ROC) curve
4. Performance overhead (latency, resource usage)
5. Resilience to adaptive attacks

Compare against: [baseline methods]"

4.4 Developing Countermeasures

Attack Projects Should Propose Defenses:

"I've demonstrated [attack]. Now propose countermeasures:

Attack characteristics: [summary]
Vulnerabilities exploited: [what weaknesses]

Suggest defenses at multiple levels:
1. Detection approaches (how to spot this attack)
2. Prevention mechanisms (how to stop it before it happens)
3. Mitigation strategies (reduce impact if it succeeds)
4. System hardening recommendations

Evaluate each defense's effectiveness and cost."

Defense Projects Should Consider Evasion:

"How might an adaptive adversary try to evade my defense?

Defense mechanism: [description]
Detection/prevention logic: [how it works]

Identify:
1. Potential evasion techniques
2. Blind spots or weaknesses
3. How to make the defense more robust
4. Trade-offs between security and usability

Test against adversarial scenarios."

4.5 Data Analysis and Visualization

Security Metrics Visualization:

"I have experimental data from my security evaluation:

[Upload or describe data]

Create visualizations appropriate for security research:
1. Attack success rate vs. conditions (bar charts, heatmaps)
2. ROC curves for detection systems
3. Timeline/sequence diagrams for attack progression
4. Performance overhead comparisons
5. Confusion matrices for classification-based defenses

Make them publication-ready."

Statistical Significance Testing:

"Test if my results are statistically significant:

Hypothesis: [e.g., "My defense detects attacks better than baseline"]
Data: [summary statistics]

Perform:
1. Appropriate statistical test (t-test, Mann-Whitney, etc.)
2. Calculate p-values and confidence intervals
3. Interpret results
4. Check for confounding factors
5. Assess practical significance (not just statistical)

N = [sample size], α = 0.05"

Deliverable for Week 14:

  • Complete evaluation results with statistical analysis
  • Countermeasure/evasion analysis
  • Performance benchmarks
  • Visualization of key findings

Phase 5: Documentation & Responsible Disclosure

Week 15 (Days 50-56)

5.1 Security Research Report Structure

Standard Format for Emerging Systems Security Papers:

  1. Abstract
  2. Introduction & Motivation (Security problem context)
  3. Threat Model (Assumptions, attackers, attack scenarios)
  4. Related Work (Existing security solutions and their limitations)
  5. Methodology (Your security approach and design decisions)
  6. Implementation (Security architecture and key components)
  7. Security Analysis (Theoretical security guarantees)
  8. Evaluation & Results (Performance data and security testing)
  9. Discussion (Insights, limitations, trade-offs, countermeasures)
  10. Ethical Considerations
  11. Conclusion & Future Work

5.2 Writing with AI Assistance

Threat Model Section:

"Write the threat model section for my security paper:

System: [description]
Adversary capabilities: [what attacker can/cannot do]
Security goals: [what you're protecting]
Assumptions: [what you trust]

Format this formally using standard security terminology. Include an 
adversary model diagram if applicable."

Methodology Section:

"Review my methodology description for clarity and completeness:

[paste your draft]

Improve by:
1. Adding technical details for reproducibility
2. Justifying design choices
3. Explaining why alternative approaches were rejected
4. Including system architecture diagrams
5. Using precise security terminology

Target audience: security researchers and graduate students."

Results Section:

"Help me present my security evaluation results:

Data: [summary of findings]
Metrics: [what you measured]

Create results section that:
1. Presents data clearly (tables and figures)
2. Highlights key findings
3. Compares with baselines/prior work
4. Discusses unexpected results
5. Maintains objectivity

Include figure captions that tell a story."

Discussion Section - Countermeasures:

"Write a discussion of countermeasures for my demonstrated attack:

Attack: [summary]
Vulnerabilities: [what was exploited]

Discuss:
1. Immediate mitigation strategies
2. Long-term solutions
3. Trade-offs of each countermeasure
4. Feasibility of deployment
5. Lessons for system designers

Balance responsible disclosure with academic contribution."

Limitations and Future Work:

"Help me write honest limitations of my security research:

Project: [description]
Scope: [what you did]
Constraints: [time, resources, access]

Identify:
1. Threats not addressed by your work
2. Assumptions that may not hold in practice
3. Experimental limitations
4. Generalizability concerns
5. Interesting directions for future work

Be honest but not self-defeating."

5.3 Ethical Considerations Section

Mandatory for Security Projects:

"Draft an ethical considerations section for my security paper:

Research: [what you did]
Potential impact: [positive and negative]
Safeguards: [what you did to be responsible]

Address:
1. Dual-use nature of security research
2. Steps taken to minimize harm potential
3. Responsible disclosure (if applicable)
4. Privacy protection (if human subjects)
5. Compliance with institutional policies

Make this thoughtful and genuine, not just perfunctory."

5.4 Responsible Disclosure Process

If You Discovered Real Vulnerabilities:

"I discovered a potential vulnerability in [system/device]. What's the 
responsible disclosure process?

Vulnerability: [high-level description, no exploits]
Affected: [vendor/product]
Severity: [your assessment]

Guide me through:
1. Should I disclose this during my project timeline?
2. How to contact the vendor/maintainer
3. What information to include in disclosure
4. Disclosure timeline (30, 60, 90 days?)
5. How to handle in my academic project (describe without full details?)

Note: Consult your instructor before disclosing."

Important: Real vulnerabilities may require coordination with your university's legal/ethics office.

5.5 Creating Effective Security Demonstrations

Demo Design for Security Projects:

"Design a compelling but responsible demo for my security project:

Project: [attack or defense]
Audience: Faculty and graduate students
Time: 3-4 minutes

Create demo plan that:
1. Clearly shows the security issue/solution
2. Is reliable (won't fail during presentation)
3. Doesn't reveal full exploit details publicly
4. Includes visualization of attack/defense in action
5. Has backup slides if live demo fails

Balance impact with responsibility."

Visualization for Security Concepts:

"Suggest visualizations to make [security concept] clear:

Concept: [e.g., side-channel attack, adversarial perturbation, covert channel]

Recommend:
1. Diagram types (sequence, architecture, data flow)
2. Real-time visualization for demo (network traffic, signal traces, etc.)
3. Before/after comparisons
4. Color schemes that are accessible and professional

Examples from security literature would help."

Deliverable for Week 15:

  • Complete technical report (10-15 pages, PDF format recommended)
  • Ethical considerations statement
  • Presentation slides with demo plan
  • (If applicable) Responsible disclosure documentation

Final Submission: Single zipped file via Canvas containing:

  • Complete source code (original, well-commented with security notes)
  • Project report/paper (PDF format recommended)
  • README file with: setup/installation instructions, security configuration guidelines, dependencies, how to run/test (including security tests), security considerations and warnings
  • Supporting materials (optional): architecture diagrams, threat model diagrams, security analysis documentation, demo videos (recommended), test datasets or attack scripts

Phase 6: Final Presentation & Defense

Week 16 (Days 57-60)

6.1 Presentation Structure for Security Projects

Recommended Flow (TBD minutes per individual/team):

  1. Security Problem Overview & Motivation (1-2 min): Why this security problem matters
  2. Threat Model & Attack Scenarios (1-2 min): Who's the adversary? What can they do?
  3. Security Solution Approach & Innovation (3-4 min): Your attack/defense mechanism
  4. System Architecture & Key Security Features (2-3 min): Design decisions
  5. Demo (3-4 min): Show both normal operation AND attack scenarios (live or video)
  6. Security Evaluation Results & Performance Analysis (2-3 min): Results and effectiveness
  7. Team Contributions (1 min, if applicable - MANDATORY): Explicit individual contributions
  8. Conclusion (1 min): Impact and future work

⚠️ Critical Presentation Requirements:

  • Attendance is mandatory — Non-attendance results in zero points for the entire project
  • Team contributions must be explicitly stated during the presentation
  • Failure to clarify individual contributions = zero points for all team members
  • Presentations are scheduled during Week 16, regular class time

6.2 Preparing for Tough Questions

Anticipate Faculty Questions:

"I'm presenting a security project on [topic]. Generate challenging 
questions faculty might ask:

Project: [brief description]
Claims: [what you demonstrated]

Prepare answers for questions about:
1. Threat model assumptions (are they realistic?)
2. Attack feasibility (could real attacker do this?)
3. Defense limitations (how to evade your countermeasure?)
4. Evaluation rigor (is your testing comprehensive?)
5. Ethical implications (potential for misuse)
6. Comparison with prior work (why is yours different/better?)
7. Generalization (does this apply to other systems?)

Include both friendly and skeptical questions."

6.3 Demo Safety Checklist

For Live Demonstrations:

"Create a safety checklist for my security demo:

Demo: [description]
Equipment: [what you'll use]
Environment: [classroom, lab, etc.]

Check:
□ Demo is completely isolated from production systems
□ No sensitive data will be exposed
□ Attack can be stopped immediately if needed
□ Audience cannot accidentally interfere
□ No dangerous physical effects (electrical, RF emissions, etc.)
□ Backup plan if demo fails
□ All software/configs documented for reproducibility

Add specific checks for [your demo type]."

6.4 Handling Ethical Challenges in Q&A

Responding to "This Could Be Misused":

Prepare responses with AI:

"Someone asks: 'Isn't your research helping attackers?'

Help me formulate a thoughtful response that:
1. Acknowledges the dual-use nature of security research
2. Explains benefits of public disclosure (defense improvement)
3. Describes safeguards in your work (e.g., not releasing full exploit)
4. References security research ethics norms
5. Shows you've thought deeply about this

Make it confident but not defensive."

6.5 Final Rehearsal

Mock Defense with AI:

"Act as a security faculty committee member reviewing my project:

[Provide project summary and slides]

Ask me:
- Critical questions about my methodology
- Challenges to my threat model
- Requests for clarification on technical details
- Ethical questions about potential misuse
- Suggestions for future work

Be rigorous but fair, like a PhD defense committee."

Deliverable for Week 16:

  • Final presentation (polished slides)
  • Working demonstration or video
  • Defense of design choices
  • Professional demeanor under questioning

AI Tools for Emerging Systems Security Research

Security-Specific Tools

Threat Intelligence and Research:

  • ChatGPT/Claude: Explaining attack techniques, brainstorming defenses, analyzing security architectures
  • Perplexity: Finding recent CVEs, vulnerability reports, security news
  • Elicit/Consensus: Security paper discovery and summarization
  • Claude Deep Research: In-depth literature survey and threat landscape analysis

Code Analysis and Development:

  • GitHub Copilot: Security-aware code completion (with caution)
  • Cursor / Claude Code: AI-assisted coding with security context
  • Semgrep/CodeQL: Automated vulnerability scanning (not AI per se, but useful)
  • Garak/PyRIT: LLM red-teaming and security evaluation

Data Analysis:

  • Claude (analysis tool): Process attack logs, analyze timing data, evaluate model robustness
  • ChatGPT Plus: Visualize security metrics, statistical analysis
  • Jupyter + AI assistants: Complex security data pipeline development

Documentation:

  • Claude: Technical writing, threat model formalization
  • Grammarly: Professional writing polish
  • Overleaf + AI: LaTeX formatting for academic papers

AI Limitations in Security Context

AI Can Hallucinate Security Information:

  • Always verify CVE numbers, vulnerability details
  • Cross-check exploit code suggestions (they may not work)
  • Confirm defense mechanisms against authoritative sources

AI May Suggest Outdated or Wrong Techniques:

  • Verify tool versions and availability
  • Check if attack techniques still work on modern systems
  • Consult official documentation for frameworks

AI Doesn't Understand Legal Boundaries:

  • Don't rely on AI for legal advice about security testing
  • Consult your instructor and legal resources
  • Understand local and international laws (CFAA, DMCA, etc.)

Security Project Categories & AI Usage Patterns

Category 1: Side-Channel Attack Projects

Example: Power/timing/EM side-channels against biometrics, cryptography, or VR/AR systems

AI Support Strategy:

"Explain [timing/power/EM] side-channel attacks:
1. Physical principles (why does information leak?)
2. Measurement setup (what equipment is needed?)
3. Signal processing (filtering, feature extraction)
4. Statistical analysis (correlation, template attacks)
5. Countermeasures (how to defend?)

Focus on [your target: biometric authentication / crypto implementation / etc.]"

Implementation Help:

"I collected side-channel traces [description]. Help me:
1. Process raw signals (filtering, alignment)
2. Extract features correlated with secret
3. Apply machine learning or statistical tests
4. Visualize information leakage
5. Quantify attack success (information theoretic metrics)

Provide Python code using scipy/numpy."

Category 2: Adversarial ML Projects

Example: Adversarial attacks on ML models in emerging systems (image, audio, sensor, LLM)

AI Support Strategy:

"I want to generate adversarial examples for [target ML model]:

Task: [classification/detection/generation/etc.]
Model architecture: [CNN/RNN/Transformer/LLM/etc.]
Constraints: [perturbation budget, physical realizability]

Explain:
1. Attack algorithms (FGSM, PGD, C&W, AutoAttack, etc.)
2. Which is best for my emerging systems context?
3. How to ensure adversarial examples transfer to physical/deployed settings?
4. Implementation using Foolbox, CleverHans, or ART
5. Evaluation metrics (success rate, perturbation size, detectability)

Consider deployment-specific constraints: quantized models, limited API access, 
real-time requirements."

Defense Implementation:

"Design an adversarial defense for [emerging system ML model]:

Defense type: [adversarial training, detection, preprocessing, certified defense]

Help with:
1. Implementation strategy
2. Robustness evaluation methodology (AutoAttack, adaptive attacks)
3. Performance trade-offs (accuracy vs. robustness)
4. Deployment considerations (latency, model size, resource constraints)

Provide code structure and key algorithms."

Category 3: Covert Channel Projects

Example: Covert communication through IoT devices, mobile sensors, or emerging system interfaces

AI Support Strategy:

"Design a covert channel using [sensor/network/storage]:

Medium: [what carries hidden information]
Capacity needed: [bits per second]
Detectability: [should evade what detection methods?]

Help me:
1. Design encoding scheme (how to embed data)
2. Calculate theoretical channel capacity
3. Implement encoder/decoder
4. Evaluate against detection methods
5. Propose countermeasures

Balance covertness with capacity."

Category 4: Backdoor Detection Projects

Example: Detecting backdoors in ML models or firmware

AI Support Strategy:

"I'm analyzing [ML model / firmware] for backdoors:

System: [description]
Backdoor types suspected: [trigger-based, data poisoning, etc.]

Guide me through:
1. Backdoor detection techniques (neural cleanse, activation clustering, etc.)
2. Implementation of detection algorithm
3. Evaluation: how to generate backdoored models for testing?
4. False positive/negative analysis
5. Remediation approaches if backdoor found"

Category 5: Security in VR/AR Systems

Example: Privacy attacks, visual side-channels, or motion-based inference in VR/AR applications

AI Support Strategy:

"Analyze security/privacy risks in VR/AR systems:

AR/VR application: [description]
Sensors used: [camera, IMU, GPS, eye tracker, hand tracker, etc.]

Identify:
1. Privacy-sensitive information leakage (gaze, body motion, environment)
2. Possible attack vectors (sensor spoofing, visual side-channels, motion inference)
3. Defense mechanisms (differential privacy, secure computation, rendering obfuscation)
4. Evaluation methodology
5. User study design (if involving human subjects)

Consider unique challenges of AR/VR: real-time requirements, continuous sensing, 
immersive environments, multimodal data streams."

Category 6: LLM and AI Agent Security Projects

Example: Prompt injection attacks, jailbreaking defenses, or secure LLM-agent architectures

AI Support Strategy:

"I want to study security of LLM-based systems:

Target system: [chatbot, autonomous agent, tool-using LLM, multimodal model]
Attack type: [prompt injection, jailbreaking, data extraction, tool misuse]

Help me:
1. Design attack taxonomy for this system type
2. Implement attack scenarios in a controlled sandbox
3. Evaluate defense mechanisms (input filtering, output monitoring, guardrails)
4. Measure attack success rates and defense effectiveness
5. Analyze trade-offs between security and utility

Consider: indirect prompt injection, multi-turn attacks, tool-call exploitation."

Defense Implementation:

"Design a defense for LLM-based [agent/chatbot/system]:

Threat: [prompt injection, data exfiltration, harmful output generation]

Help with:
1. Input sanitization and validation strategies
2. Output monitoring and filtering approaches
3. Architectural defenses (sandboxing, privilege separation)
4. Evaluation using established red-teaming benchmarks
5. Measuring false-positive impact on legitimate use

Provide implementation approach and evaluation framework."

Category 7: Federated Learning Privacy Projects

Example: Privacy attacks on federated learning or privacy-preserving aggregation defenses

AI Support Strategy:

"I want to study privacy in federated learning systems:

FL setup: [number of clients, model type, data distribution]
Attack type: [gradient inversion, membership inference, model poisoning, free-riding]

Help me:
1. Implement the federated learning baseline using Flower or PySyft
2. Design and implement the privacy attack
3. Evaluate information leakage quantitatively
4. Implement defenses (differential privacy, secure aggregation, clipping)
5. Measure utility vs. privacy trade-offs

Consider: non-IID data, communication efficiency, heterogeneous clients."

Category 8: Robotic Systems Security Projects

Example: Sensor spoofing attacks, secure teleoperation, or adversarial manipulation policies

AI Support Strategy:

"I want to study security of robotic systems:

Robot system: [manipulator, mobile robot, drone, teleoperation system]
Attack surface: [sensor inputs, control signals, perception pipeline, communication]

Help me:
1. Identify vulnerabilities in the robot perception-action pipeline
2. Design attack scenarios (sensor spoofing, adversarial objects, command injection)
3. Implement safety-preserving defenses
4. Evaluate in simulation (Isaac Sim, Gazebo, PyBullet)
5. Discuss real-world deployment implications

Consider: real-time safety constraints, human-robot interaction, physical consequences."

Academic Integrity in Security Research

Acceptable AI Use

Encouraged:

  • Understanding attack techniques and defense mechanisms
  • Analyzing academic papers and prior work
  • Debugging your security code
  • Designing threat models and evaluation protocols
  • Generating test cases and synthetic attack data
  • Improving technical writing clarity
  • Creating visualizations of security concepts

Acceptable with Attribution:

  • Using AI to design initial attack/defense architecture (cite in methodology)
  • Adapting AI-suggested algorithms for your specific context
  • Using AI for data analysis pipeline (note in methods)

Unacceptable AI Use

Strictly Prohibited:

  • Asking AI to generate complete exploits for deployment
  • Using AI to attack unauthorized systems
  • Having AI write your entire implementation without understanding
  • Fabricating experimental results or vulnerability reports
  • Copying literature reviews without reading original papers
  • Generating fake security claims or metrics

The Security Research Responsibility Principle

Unlike general software development, security research has heightened ethical requirements:

  1. Your work could directly harm people if misused - design with this in mind
  2. Vulnerabilities you discover affect real users - handle responsibly
  3. Security claims must be rigorously validated - your reputation matters
  4. The security community relies on trust - maintain integrity

AI helps you work faster, but cannot replace your ethical judgment.


Common Pitfalls in Security Projects

Pitfall 1: Unrealistic Threat Models

Problem: Assuming adversary capabilities that don't match reality

Solution:

"Evaluate if my threat model is realistic:

Threat model: [your assumptions]
Adversary: [capabilities you assume]

Is this realistic for:
1. Typical attackers (script kiddies, criminals, etc.)?
2. Motivated attackers (APT, nation-states)?
3. Practical attack scenarios?

Suggest adjustments to make it more grounded without oversimplifying."

Pitfall 2: Overfitting to Your Testbed

Problem: Attack works in your lab but nowhere else

Solution:

  • Test on multiple devices, OS versions, configurations
  • Consider environmental variation (noise, interference, user behavior)
  • Validate assumptions about target system
"What assumptions am I making that might not generalize?

My setup: [testbed description]
Real-world deployment: [how systems actually used]

Identify brittleness in my approach."

Pitfall 3: Ignoring Defenses

Problem: Demonstrating attack without considering existing countermeasures

Solution:

"What defenses already exist against [my attack]?

Are they:
1. Deployed in practice?
2. Effective?
3. Bypassable?

Position my work relative to existing defenses."

Pitfall 4: Inadequate Evaluation

Problem: Cherry-picking results, not testing edge cases

Solution:

  • Pre-register evaluation protocol before running experiments
  • Test against adaptive adversaries
  • Report both positive and negative results
  • Use standard security metrics and datasets

Pitfall 5: Irresponsible Disclosure

Problem: Publicly releasing full exploit details without vendor coordination

Solution:

  • Follow coordinated disclosure timelines (typically 90 days)
  • Provide enough detail for reproducibility but not trivial exploitation
  • Consider embargo until patches available
  • Consult instructor and institution

Weekly Checkpoint Questions

End of Week 9:

"Review my security project proposal:
[paste proposal]

Evaluate:
1. Is the threat model well-defined and realistic?
2. Is the technical approach feasible in 8 weeks?
3. Are there ethical concerns I haven't addressed?
4. What's the main risk to project success?
5. How does this compare to prior work?

Be critical but constructive."

End of Week 11:

"I've designed my [attack/defense] architecture:
[paste architecture]

Before I start coding:
1. Are there fatal flaws in the approach?
2. What should I prototype first to validate feasibility?
3. What dependencies or external factors could block me?
4. Is my testbed adequate?
5. What monitoring/logging do I need?"

End of Week 12:

"Implementation status: [X]% complete. Current issues:
[describe problems]

Help me:
1. Prioritize remaining work
2. Debug current blockers
3. Identify what I can defer to future work
4. Ensure I have a working demo by Week 13 end

Be realistic about the timeline."

End of Week 13:

"My [attack/defense] is working. What evaluation should I prioritize in Week 14?

Current capabilities: [what works]
Time available: 7 days

Design evaluation that:
1. Validates core claims
2. Tests robustness
3. Compares with baselines
4. Produces results for paper and demo

Focus on quality over quantity."

End of Week 14:

"Evaluation results: [summarize findings]

For presentation and paper:
1. What are the key takeaways?
2. How do I position this relative to prior work?
3. What limitations must I acknowledge?
4. What future work naturally follows?
5. How do I make this compelling in 10 minutes?

Help me tell the story effectively."

Final Recommendations

Balancing Red Team and Blue Team Thinking

Good security projects demonstrate both:

  • Offense: "Here's how to attack..."
  • Defense: "...and here's how to defend against it"

Even if your project is attack-focused, propose defenses. Even if defense-focused, consider evasion.

Prioritize Reproducibility

Security research builds on trust. Make your work reproducible:

  • Document your environment precisely
  • Release code/datasets when possible (ethically)
  • Provide detailed implementation notes
  • Enable others to verify your claims

Communicate Clearly

Security is complex - clarity matters:

  • Avoid unnecessary jargon
  • Use diagrams liberally
  • Provide intuition before technical details
  • Test explanations on non-security peers

Embrace Negative Results

Not all attacks succeed. Not all defenses work perfectly. That's valuable!

  • Report what didn't work and why
  • Explain lessons learned
  • Position as "ruling out" approaches

Stay Current

Security evolves rapidly:

  • Check for new attacks/defenses published during your project
  • Monitor relevant CVEs and security bulletins
  • Update related work section as you progress

Conclusion

Security research is challenging, impactful, and carries significant responsibility. AI tools can accelerate your work, but you remain responsible for:

  • Ethical conduct - ensuring your research doesn't cause harm
  • Technical rigor - validating all claims thoroughly
  • Academic integrity - proper attribution and honest reporting
  • Security judgment - understanding broader implications

Use AI as a force multiplier for your own expertise, not a substitute for deep thinking about security.

Your project will be evaluated on (100 points total):

  • Problem Identification & Solution Performance (30 pts):
    • Clear, well-defined security problem with real-world relevance (10 pts)
    • Innovative solution with practical deployability (10 pts)
    • Performance benchmarking and comparative analysis (10 pts)
  • Design Implementation & Evaluation (40 pts):
    • Fully functional, well-documented, modular security system (20 pts)
    • Realistic scenario-based testing with threat modeling (10 pts)
    • Comprehensive evaluation: robustness, performance, privacy, reliability (10 pts)
  • Project Report/Paper Submission (30 pts):
    • Clear report structure covering threat model through conclusion (10 pts)
    • Technical depth with reproducible detail and security analysis (10 pts)
    • Articulated contributions, broader implications, and future directions (10 pts)

Bonus Opportunities:

  • Innovation bonus for novel, high-impact security ideas
  • Early submission bonus (earlier = more points)

Good luck, and hack responsibly!


Questions or Concerns?

  • Consult instructor before pursuing risky research
  • Review institutional IRB/ethics policies
  • Check legal boundaries for security testing
  • When in doubt, ask before acting
  • Remember: intent matters, but impact matters more

Resource: USENIX Security Research Ethics Guidelines (https://www.usenix.org/conferences/author-resources/research-ethics)