Agentic AI
Data Analysis
Conversational UI

AI Smart Contract Analyzer

Breaking down complex DeFi transactions with step-by-step narratives

Year

2024-2025

Role

Product Strategy, End-to-End UX, UI Craft

At a Glance

SCREEN’s AI Agent explains complex DeFi transactions by turning raw smart contract data into clear, step-by-step narratives. Investigators can see what happened, which functions were exploited, and how funds moved, cutting investigation time by nearly two-thirds.

As the founding designer of SCREEN, I led the design and integration of its AI Agent, owning the end-to-end UX: collaborating with engineers to embed prompts, working with investigators to collect real case questions, and iterating on outputs to balance clarity with technical accuracy.

Impact

65%

faster investigations

Problem

In DeFi investigations, a single transaction can trigger dozens of smart contracts and internal calls. Investigators still struggled to understand the logic and flow behind complex attacks.

Solution

An AI agent that turns complex DeFi data into human-readable stories, breaking down each function call and token transfer to explain what happened and why.

Context

Why Investigating Smart Contracts Is So Hard

Over the years, I kept seeing the same struggle when sitting with our clients. Even with SCREEN’s ability to analyze smart contracts and visualize internal transactions, investigators often needed one of our in-house DeFi experts to guide them through the transaction data. As attacks grew more complex, the process became overwhelming and slow, making it easy to miss how the hack actually happened.

“A single exploit could take a couple of hours to fully wrap my head around the transaction.”

– DeFi Investigator

Example of a complex transaction involved in a DeFi hack.

Challenge

🌀😵‍💫

Too Much Data at Once

A single exploit could involve 20+ contracts and 80+ internal calls. To understand every step of the attack, investigators had to dig through overwhelming data, often missing key evidence.

📄 ➡️ 🤷🏼‍♂️

Language Barrier

Most investigators aren’t comfortable reading Solidity, Ethereum’s smart contract language. Without clear comments or prior knowledge, it’s hard to quickly understand what each function does.

Goal

Make complex transactions understandable

Break down multi-step DeFi exploits and internal flows into clear, digestible steps.

Summarize and highlight key exploits

Provide high-level narratives while pinpointing the specific functions and mechanisms attackers used.

Explain code in plain English

Translate smart contract source code and functions into clear language investigators can follow.

Design Approach

Speaking the Investigator’s Language

When I first tested SCREEN’s data with AI models, the answers were inconsistent. A vague question led to a vague answer, and vague questions were exactly what non-technical investigators naturally asked while piecing together complex cases. I didn’t want to force them to “prompt like an engineer.”

So I worked with clients to collect the kinds of questions they were really asking, then mapped those into the structure of our platform. That way, users could stay natural in how they asked, while on the backend I worked with engineers to prepare the system with the right context and data. It wasn’t about training people to prompt better; it was about shaping the agent so the right evidence, code, and transaction details were always included before the AI answered.
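To make that concrete, here is a minimal sketch of the context-shaping idea in Python. Every name in it (Call, Transfer, build_prompt) is illustrative, not SCREEN’s actual data model or API; the point is simply that the investigator’s plain question gets wrapped with the evidence the model needs before it is ever sent.

```python
# Hypothetical sketch: attach transaction evidence to a natural-language
# question before the model sees it. Names are illustrative, not SCREEN's API.
from dataclasses import dataclass

@dataclass
class Call:
    contract: str
    function: str

@dataclass
class Transfer:
    amount: float
    symbol: str
    sender: str
    receiver: str

def build_prompt(question: str, tx_hash: str,
                 calls: list[Call], transfers: list[Transfer]) -> str:
    """Wrap a plain question with the evidence the investigator is looking at."""
    lines = [
        "You are a DeFi investigation assistant. Answer using only the evidence below.",
        f"Transaction: {tx_hash}",
        f"Internal calls ({len(calls)}):",
        *(f"  {c.contract}.{c.function}()" for c in calls),
        "Token transfers:",
        *(f"  {t.amount} {t.symbol}: {t.sender} -> {t.receiver}" for t in transfers),
        f"Investigator's question: {question}",
    ]
    return "\n".join(lines)
```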

Questions collected from clients + Explorations on AI Output Template

Iterations

Approach 1: Prompt Library

This option built on our earlier reasoning-step design. Investigators could start with a pre-prompt we provided, add required inputs like a transaction hash or address, and save it as a reusable Prompt Book. Power users liked the flexibility; it felt like building their own mini-agent and reduced repetitive work across cases.

But this flexibility came with trade-offs. Investigators needed to learn how to tune prompts, and results varied widely depending on how each person set theirs up. Even with reasoning steps in place, some clients forgot inputs or reused prompts in ways that didn’t fit, which led to vague answers. Supporting dozens of user-authored prompt books also created higher maintenance for engineers and made it hard to guarantee consistent, compliant outputs.
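For a sense of how this could work mechanically, here is an illustrative sketch of a Prompt Book entry that enforces its required inputs before rendering; the class and field names are hypothetical, not the shipped data model.

```python
# Illustrative Prompt Book: a saved prompt template that refuses to run
# without its required inputs. All names here are hypothetical.
import string

class PromptBook:
    def __init__(self, name: str, template: str, required: set[str]):
        self.name = name
        self.template = template
        self.required = required

    def render(self, **inputs: str) -> str:
        missing = self.required - inputs.keys()
        if missing:
            # The guardrail that vague reuse kept tripping over: a saved
            # prompt is only as good as the inputs supplied alongside it.
            raise ValueError(f"Missing required inputs: {sorted(missing)}")
        return string.Template(self.template).substitute(inputs)

exploit_summary = PromptBook(
    name="Exploit summary",
    template="Summarize the attack in transaction $tx_hash, "
             "focusing on calls made by $attacker_address.",
    required={"tx_hash", "attacker_address"},
)
```

In the product, a prompt reused without the right inputs still produced an answer, just a vague one; an explicit check like this is one way to force that gap to surface early.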

Approach 2: Pre-Built Agents (Chosen!!)

Prompts and reasoning were embedded directly into the agent. Investigators simply picked Documentation, Contract, or Transaction analysis, provided the input, and got a structured answer every time. Clients testing this version said it felt more predictable and gave them confidence the agent “knew what it was doing.” For engineers, it meant more upfront work to build separate pipelines, but easier governance, auditability, and quality control in the long run.
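A rough sketch of what that chosen shape implies, again with placeholder names and prompt text: the reasoning is fixed per mode, so the investigator only picks a pipeline and supplies one input.

```python
# Rough sketch of pre-built agents: reasoning lives in fixed pipelines,
# and the user only chooses a mode and provides one input.
PIPELINES = {
    "documentation": "Explain what the protocol documentation says about {input}.",
    "contract": "Walk through contract {input} function by function, in plain English.",
    "transaction": "Reconstruct transaction {input} as a step-by-step narrative.",
}

def run_agent(mode: str, user_input: str) -> str:
    if mode not in PIPELINES:
        raise ValueError(f"Unknown mode: {mode}; expected one of {sorted(PIPELINES)}")
    # In the real product, the embedded prompt plus gathered evidence would be
    # sent to the model here; because each pipeline is fixed, outputs stay
    # consistent and auditable.
    return PIPELINES[mode].format(input=user_input)
```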

Approach 1

Build reusable Prompt Books to reduce repetitive work

✅ Enforces required inputs → more accurate results

❌ Heavy maintenance for engineers

❌ Most users need little customization

❌ Slower to ship for MVP

Approach 2

Three built-in reasoning modes: Docs / Contract / Transaction

✅ Easy to use

✅ Fast to ship, easy to maintain

❌ Less customizable

❌ Needs follow-ups for depth

Final Design

The Launch

First Launch, First Learnings

We launched our first beta AI Agent with select clients to gather real-world feedback. The agent was able to ingest SCREEN’s transaction data and explain what was happening, but in early sessions it often took several follow-up questions to arrive at a format that felt clear without losing critical steps. This beta phase gave us valuable signals about where to tighten the flow and how to make explanations more consistent.

Reflection

Designing for Reliable AI Behavior

The biggest challenge I found was maintaining consistency. The same underlying question could produce different results depending on how it was phrased or how the model interpreted context. If I had more time, I would have tested the AI agent on a wider range of DeFi cases to see how it handled different transaction patterns and data types, and focused more on helping users provide structured, contextual input the model could act on reliably.

This experience taught me that designing with AI isn’t just about how it responds; it’s about how users express their intent. Once that part was right, the system became much more predictable and powerful.


© 2025 Olivia Xu